Fundamentals of artificial intelligence

Artificial Intelligence (AI) is the science and engineering of making intelligent machines equipped with human-like capacities. AI seeks to replicate and enhance human abilities such as recognizing patterns, processing natural language and making decisions. It can swiftly analyze vast amounts of data and extract valuable insights in a way that humans could not match.

In recent months, artificial intelligence has become a recurring topic of conversation. First it was generative image models, such as DALL-E 2, Midjourney, Stable Diffusion and the like; then chatbots such as ChatGPT, the new Bing, Poe and the expected Bard; and more recently its leap into productivity solutions, with announcements of AI integration in Google’s Gmail and Docs, and in Microsoft 365 with Copilot.

It is still unclear how much of a bubble there is in all of this, if any. Every day we see more applications and services adopting artificial intelligence for a wide range of functions. It is possible that, after this peak, the rate of adoption will decline, and even that some of those adopting it now, more to ride the wave than for real utility, will end up backtracking.

However, the potential of artificial intelligence in many of the activities in which it has already begun to gain ground is, quite simply, indisputable. Chatbots with the ability to interpret natural language can substantially facilitate the search for information on the Internet, the creation of content, data analysis, process optimization… the list is long and varied, and although those responsible for the models still have to solve some major problems, such as hallucinations, important advances are constantly being made in this area.

Put another way, artificial intelligence is here to stay, and in the coming months and years we can expect many advances, both in its reliability and in its arrival in new fields, some already imaginable today and others that will probably surprise us. The pace of its implementation will probably not be as fast as what we are experiencing these months, but it will be constant and much more deliberate than at present.

Thus, in view of this new reality that is already taking shape, in addition to keeping abreast of the latest developments, it is also important to be clear about some concepts intrinsically associated with artificial intelligence. These are terms we hear regularly, but whose meaning is not always clear to us, and what better time than now to review and define them? This way, when you hear any of them again, you will have a much clearer idea of what they refer to.

Let’s review then, without further ado, the fundamental concepts of artificial intelligence.

Artificial intelligence

Artificial intelligence (AI) is a branch of computer science dedicated to the design and development of systems capable of performing tasks that, until recently, could only be performed by humans. This discipline focuses on the creation of algorithms and programs that allow models to process information intelligently, learn from experience and adapt to their context in order to provide answers (of any kind) that are as good as or better than those a human being would give.

Fundamentals of artificial intelligence, everything you need to know to understand it.


Dataset

A dataset is a set of data organized in a structure that allows it to be analyzed and processed by machine learning algorithms. Datasets are the fundamental raw material for training AI models, which are fed with data in order to learn and improve their performance in the tasks for which they have been designed.

Datasets can be of different shapes and sizes, depending on the type of problem to be solved. For example, image processing uses datasets of images labeled with information about their content, while shopping pattern detection can use unlabeled data obtained from social networks, customer records, and so on.

There are several points to consider when creating or choosing a dataset, as the quality of the dataset will determine the reliability and quality of the output. These are the most important ones:

  • Complete: a dataset must contain all the data necessary to train the model effectively. This means it must be large enough and cover all the cases relevant to the problem you are trying to solve.
  • Representative: a dataset must be representative of the problem it is trying to solve. That is, it must reflect the characteristics and variability of real-world data so that the model can learn effectively.
  • Labeled (for supervised and semi-supervised learning): the data in a dataset must be correctly labeled for the model to learn effectively. Labeling consists of associating each input with a label that classifies it into a specific category or class.
  • Error-free: a dataset must be free of errors and redundancies for artificial intelligence models to learn effectively. Erroneous data can generate inaccurate results and affect the quality of the model and its responses.
  • Respectful of privacy and intellectual property: the data that make up the dataset must respect privacy and must not include copyrighted content of any kind. It is therefore very important that the data be collected ethically and used responsibly.
  • Up to date: in many cases a dataset should be as current as possible, especially if it involves data that change over time, such as weather, news or product prices. An outdated dataset can generate inaccurate results and limit the effectiveness of the model.
  • Varied: it should include a variety of data and cases so that the model can learn to recognize relevant patterns and features in different contexts and situations. Lack of diversity in the learning process can lead to bias, inaccuracy and a failure to detect patterns that depart from the elements used in training.
  • Balanced: a dataset should be balanced in terms of the amount of data belonging to each class or category. If there is a significant disproportion between classes, the model may be biased towards the over-represented classes, which would affect its ability to generalize and make accurate predictions.
  • Quality: the data in a dataset must be of sufficient quality for the model to learn effectively. This includes the accuracy of measurements, the resolution of images and the clarity of text, among other aspects.
  • Interpretability: a dataset must be interpretable and easy for humans to understand, especially for experts in the problem to be solved. This makes it possible to validate the model’s results and detect possible errors or biases.
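Several of these checks can be automated before training begins. As a minimal sketch in pure Python (the toy records and field names are made up for illustration), here is one way to verify completeness (no missing values) and class balance in a small labeled dataset:

```python
from collections import Counter

# Hypothetical toy dataset: each record pairs an input with a class label.
dataset = [
    {"image": "cat_01.jpg", "label": "cat"},
    {"image": "cat_02.jpg", "label": "cat"},
    {"image": "dog_01.jpg", "label": "dog"},
    {"image": "dog_02.jpg", "label": None},  # missing label: an error to fix
]

def check_complete(records, required_fields=("image", "label")):
    """Return the records that are missing any required field value."""
    return [r for r in records
            if any(r.get(f) is None for f in required_fields)]

def class_balance(records):
    """Count how many records belong to each class (ignoring unlabeled ones)."""
    return Counter(r["label"] for r in records if r["label"] is not None)

print(check_complete(dataset))  # the record with the missing label
print(class_balance(dataset))   # Counter({'cat': 2, 'dog': 1}): an imbalance to review
```

In a real project these checks would run over thousands or millions of records, but the principle is the same: catch incomplete and unbalanced data before it biases the model.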

As you can see, a good dataset is a must for a model to be trained and validated correctly. The good news is that there are many public repositories from which you can download datasets if you decide to take your first steps in the world of artificial intelligence. As a reference, I am personally very fond of the repository of the Donald Bren School of Information and Computer Sciences at the University of California, which has recently started testing a new design that makes it much easier to search for datasets. You can find it at this link.


Neural networks

Neural networks are an artificial intelligence technique that mimics how the human brain works to process information. They are based on a set of interconnected algorithms that mimic the behavior of neurons in the brain to perform complex tasks such as pattern recognition, data classification and decision making.

Neural networks are composed of layers of interconnected nodes, which process information and transmit it to the next layer. Each node has an activation function, which determines the node’s output based on its input and the weights of the connections it has with other nodes. During training, the network automatically adjusts the weights and connections between nodes to minimize the error in the prediction of the results.
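The behavior of a single node described above can be sketched in a few lines of Python (a toy example with made-up weights, using the common sigmoid activation function):

```python
import math

def sigmoid(x):
    """A common activation function: squashes any input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def node_output(inputs, weights, bias):
    """One node: a weighted sum of its inputs plus a bias, passed through the activation."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)

# Made-up values: during training, these weights would be adjusted automatically
# to minimize the prediction error.
print(node_output([0.5, 0.8], [0.4, -0.6], 0.1))  # a value between 0 and 1
```

A full network simply connects many of these nodes in layers, with each node's output feeding the nodes of the next layer.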

Machine Learning

Machine learning is a technique within the field of artificial intelligence that allows computers to learn through experience. Instead of explicitly programming a solution to a problem, machine learning algorithms analyze data and learn patterns in order to make decisions and perform specific tasks. There are three main types of machine learning: supervised learning, unsupervised learning and reinforcement learning, which we will discuss in detail later.

Machine learning is applied in a wide variety of fields, from entertainment and advertising to medicine and security. For example, speech and image recognition, fraud detection and product recommendation are common applications of this technology. Machine learning is also used to create chatbots, virtual assistants and other artificial intelligence systems that can improve the efficiency and productivity of companies and organizations.

Deep Learning

Deep Learning is a technique within the field of machine learning that uses deep neural networks to process large amounts of data and extract complex patterns and high-level features. Unlike conventional machine learning, Deep Learning is able to learn autonomously through multiple layers of processing, leading to more accurate and sophisticated results.

The architecture of a deep neural network is composed of multiple processing layers, which allow the algorithm to learn hierarchically from the input data. Each layer processes the information and transmits it to the next, and so on, until the output layer is reached. During training, the network automatically adjusts the weights and connections between neurons to minimize the error in predicting the results. As you can deduce, the multiple layers, and the ability to adjust the output of each one, yield a much more accurate output, provided both the algorithm and the training process are appropriate.
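The layer-by-layer flow can be sketched in pure Python: the output of one layer becomes the input of the next. This is a toy forward pass with made-up weights (real frameworks do the same thing with optimized tensor operations):

```python
import math

def sigmoid(x):
    """Activation function: squashes any input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each node computes an activated weighted sum of all inputs."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, node_w)) + b)
            for node_w, b in zip(weights, biases)]

def forward(inputs, network):
    """Feed the inputs through every layer in turn; the last result is the output."""
    for weights, biases in network:
        inputs = layer(inputs, weights, biases)
    return inputs

# A toy 2-3-1 network with made-up weights: 2 inputs, a hidden layer of 3 nodes, 1 output.
network = [
    ([[0.2, -0.4], [0.7, 0.1], [-0.5, 0.6]], [0.0, 0.1, -0.1]),  # hidden layer
    ([[0.3, -0.2, 0.8]], [0.05]),                                 # output layer
]
print(forward([1.0, 0.5], network))  # a single value between 0 and 1
```

Training would then adjust every weight and bias in `network` to reduce the error between this output and the expected result; that adjustment step is what the forward pass above deliberately leaves out.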

Deep learning is used in a wide variety of applications, from speech recognition and image and video processing to machine translation, medical diagnostics and autonomous driving systems. A particularly well-known example of deep learning can be found in DLSS, NVIDIA’s intelligent upscaling system.


Supervised and unsupervised learning

As you may have already deduced or know, learning is nothing more than the process in which we feed the artificial intelligence algorithm with the data it will use to learn. This can mean texts, images, sounds, documents in multiple formats… It will depend, of course, on the function we want the model to fulfill once it has been trained. And it will also determine whether we opt for one learning modality or another.

In supervised learning, all the elements we use for this process are labeled, i.e., we let the algorithm know what they contain. For example (and it will not be the last time we use this particular example, I assure you), imagine that you want to create an artificial intelligence model that knows how to distinguish whether the animal shown in an image is a cat or a dog. In such a case, in the training we will use pictures of cats and pictures of dogs, telling the algorithm to which category each of the images belongs.
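The cat-vs-dog idea can be sketched with a toy nearest-neighbor classifier (pure Python; the two-dimensional feature vectors are made up and stand in for real image features):

```python
import math

# Hypothetical labeled training data: (feature vector, label).
# In a real system, the features would be extracted from actual images.
training = [
    ((1.0, 1.2), "cat"),
    ((0.8, 1.0), "cat"),
    ((3.0, 3.5), "dog"),
    ((3.2, 3.0), "dog"),
]

def classify(point):
    """Supervised learning at its simplest: predict the label of the
    closest labeled training example (1-nearest-neighbor)."""
    _, label = min(training, key=lambda ex: math.dist(ex[0], point))
    return label

print(classify((0.9, 1.1)))  # near the labeled cats -> "cat"
print(classify((3.1, 3.2)))  # near the labeled dogs -> "dog"
```

The key point is that the labels ("cat", "dog") are provided by us during training; the model's job is to generalize from them to new, unseen inputs.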

In unsupervised learning… indeed, you have deduced it: we train the AI with the data, but we do not label it. Why? Because in this case we want the model to be able to detect hidden patterns in the data. It will be the AI that analyzes the content of the dataset in search of data points it can relate to each other, and of the potential implications of those relationships. A common example of unsupervised learning is the analysis of large datasets of social or shopping profiles, which can yield surprising results about likely purchases of a certain type of product or consumption of a particular type of content.
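A classic unsupervised technique is clustering: grouping unlabeled data by similarity. Here is a minimal k-means sketch in pure Python (toy one-dimensional data with two hidden groups; real implementations handle many dimensions and smarter initialization):

```python
def kmeans_1d(values, k=2, iterations=10):
    """Group unlabeled numbers into k clusters by repeatedly assigning
    each value to the nearest centroid and re-averaging the centroids."""
    centroids = values[:k]  # naive initialization: the first k values
    clusters = []
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical unlabeled data: no one tells the algorithm there are two groups
# around 1 and 10; it discovers that structure on its own.
data = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]
centroids, clusters = kmeans_1d(data)
print(sorted(round(c, 1) for c in centroids))  # -> [1.0, 10.0]
```

Notice that no labels appear anywhere: the structure emerges from the data itself, which is exactly what unsupervised learning means.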

There is also, in case you were wondering, a semi-supervised learning modality, in which the model is fed a partially labeled dataset: it learns from the labeled examples and, in addition, searches the unlabeled portion of the data for patterns. This type of model combines the best of the two modalities it is based on, although it can also reproduce the problems of both, as well as being more complex to develop.

Reinforcement learning

Reinforcement learning is a machine learning technique in which the model learns to make decisions by interacting with its environment and receiving feedback from it, in the form of rewards for successful responses. It is inspired by the way humans, both in their own learning and when educating other life forms, learn through experience and trial and error by means of a reward system.

In practice, reinforcement learning is used in a wide variety of applications, from robotics and industrial automation to games and simulations. For example, a reinforcement learning model can be trained to play chess, gradually improving its skill with the experience gained in each game. It has also been used in industrial process optimization, where a model learns to maximize efficiency and minimize costs in a complex production environment.
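The reward loop can be illustrated with a toy problem: an agent repeatedly chooses between slot-machine "arms" with unknown payout rates and learns, purely from the rewards it receives, which arm is best. This is an epsilon-greedy bandit, one of the simplest forms of reinforcement learning (the payout rates below are made up):

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

# Hidden reward probabilities the agent must discover by trial and error.
true_payouts = [0.2, 0.8, 0.5]
estimates = [0.0] * len(true_payouts)  # the agent's learned value of each arm
pulls = [0] * len(true_payouts)

def choose_arm(epsilon=0.1):
    """Mostly exploit the best-known arm, but sometimes explore at random."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda i: estimates[i])

for _ in range(2000):
    arm = choose_arm()
    # The "environment" gives a reward of 1 with the arm's hidden probability.
    reward = 1 if random.random() < true_payouts[arm] else 0
    pulls[arm] += 1
    # Incremental average: nudge the estimate towards the observed reward.
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]

print(estimates)  # arm 1's estimate should end up near its true payout of 0.8
```

The agent is never told which arm is best; the balance between exploring (gathering information) and exploiting (using what it already knows) is the core trade-off of reinforcement learning.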


Natural language processing

This is one of the most important disciplines for the popularization of artificial intelligence, since it is responsible for enabling models to receive prompts written by users and to understand what is indicated in them. To a greater or lesser extent, we have always adapted (or tried to adapt) our language to that of the machine. Whether with programming languages or with specially crafted phrases to optimize search results, it is the human being who tries to formulate their queries in the clearest possible way for the system or service they are using.

Natural language processing walks in the opposite direction: it is the system that interprets human language, with its many turns of phrase, variants, expressions and even errors and misspellings, in order to know how to give you an answer. Thanks to advances in this area, “talking” with a chatbot is already, in terms of the language we use, an experience similar to talking with another person.

Machine vision

Since you are more context-sensitive than the vast majority of AIs (and they have evolved a lot in this regard), after reading the explanation above you can surely imagine what we are going to talk about at this point. You are not wrong: machine vision is the branch of artificial intelligence that focuses on the development of algorithms and systems that can process and understand images and videos. That is, it seeks to replicate the human ability to see and understand visual content, using machine learning and image processing techniques and algorithms.

Machine vision is used in a variety of applications, such as facial recognition, object detection in images, pattern identification in medical images, and surveillance and security, among others. It is also an important component in industrial process automation, robotics and autonomous vehicle driving. If you want a recent example of machine vision, you can find one in our review of the GPT-4 announcements.

Intelligent agents and expert systems

Although intelligent agents and expert systems share some similarities, they are not exactly the same. What they have in common is that they are both artificial intelligence tools designed to carry out specific tasks and make informed decisions, i.e., they are solutions that automate tasks and functions. However, they differ in how they approach decision making.

An expert system is an algorithm that uses logical rules and domain-specific expert knowledge to perform diagnostic, decision-making and problem-solving tasks. The knowledge is incorporated into the system by defining rules and logical relationships between data. These rules are applied to the input data to infer an output or a recommendation. That is, it will always respond based on what it has learned in its training/coding.
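The rule-based approach can be sketched as follows (a hypothetical toy diagnostic domain invented for illustration; real expert systems use far richer knowledge bases and inference engines):

```python
# Each rule maps a set of required facts to a conclusion.
# Toy, made-up knowledge base: the "expert knowledge" is hand-coded.
RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"sneezing", "itchy eyes"}, "possible allergy"),
    ({"fever", "stiff neck"}, "see a doctor urgently"),
]

def diagnose(observed_facts):
    """Apply every rule whose conditions are all present in the input facts.
    The system can only ever conclude what its hand-coded rules allow."""
    facts = set(observed_facts)
    return [conclusion for conditions, conclusion in RULES
            if conditions <= facts]

print(diagnose(["fever", "cough"]))          # ['possible flu']
print(diagnose(["sneezing", "itchy eyes"]))  # ['possible allergy']
```

Note that nothing here learns: if a case is not covered by the rules, the system simply has no answer, which is precisely the limitation that distinguishes it from an intelligent agent.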

On the other hand, an intelligent agent is a program that actively interacts with its environment to achieve a specific goal. The agent can receive information from the environment, process it and make decisions to achieve its goal. Unlike expert systems, intelligent agents can adapt to and learn from their environment, and adjust their behavior depending on the rewards or punishments they receive. In short, it is context-sensitive and learns from the context, which makes it more adaptive.

Images generated with Microsoft Bing Image Creator.
