What are hallucinations in AI models?

Artificial Intelligence (AI) models can produce so-called hallucinations: outputs in which the model generates false or fabricated information that does not follow from its input data. Understanding what hallucinations are, why they occur, and why they mean AI outputs should never be taken as completely reliable is key to using these tools sensibly.

A few days ago, when reviewing what’s new in GPT-4, I mentioned hallucinations as one of the main problems of artificial intelligence models. The term chosen is, in truth, quite descriptive, but even so, I think it is worth delving a little deeper into it: what it consists of, why it means we should never take the outputs of an artificial intelligence as 100% reliable and definitive, the reasons why hallucinations occur, and what measures are being taken to try to mitigate them. So, let’s start at the beginning.

What are hallucinations?

In the context of artificial intelligence, hallucinations refer to anomalous behavior in which a model generates false information or perceptions without any external stimulus to justify them. We have spoken, on more than one occasion, about the tendency of some generative tools (not only for text, but also for images and many other fields) to produce invented or incoherent information: from the horror faces of DALL-E 2 to the invented poems attributed to Miguel Hernández. Erroneous interpretations of the data received from sensors can also be considered hallucinations.

Hallucinations are a problem in any artificial-intelligence-based system that must infer a response, of whatever type, from the interpretation of its input data based on its learning process. However, their importance varies substantially depending on the context. It is not the same, of course, for a text-generating AI to invent a poem and attribute it to a real author, or to give a wrong definition of a word, as it is for the expert system responsible for the autonomous driving of a vehicle to misinterpret the information from its sensors and cause an accident.

The advent of artificial intelligence in critical applications has therefore put the spotlight on a problem that had already been identified but has become more important than ever, because fields such as security, medicine and the like cannot risk relying on artificial intelligence if there is a chance that hallucinations will lead to incorrect responses that are not, even in principle, inferred from the data fed into the model.

Why do they occur?

There are several reasons why an artificial intelligence model may suffer from hallucinations, and they usually differ depending on whether we are talking about supervised or unsupervised learning models. This was explained in the article on the fundamentals of artificial intelligence, but let us recall it briefly.

  • Supervised learning: the model is trained with labeled data, i.e., input data paired with the expected output. The goal is for the model to learn to map the input data to their corresponding expected outputs, so it can predict the output for new input data. A simple example: we feed the model a dataset of photos of dogs and cats, labeling each one with the type of pet shown in the photo. The goal is for the model to learn to distinguish between dogs and cats so it can identify them in the images we provide when it is ready for action.
  • Unsupervised learning: in this case, as you have probably already imagined, the model is trained with unlabeled data. This type of training is used when we want the model to be able to find hidden patterns and structures in the data by itself. For example, if you are analyzing social network data to identify groups of users with similar interests using unsupervised learning, the model will look for hidden patterns in the data that suggest that certain users have similar interests, without being provided with any label. A minimal code sketch of both approaches follows this list.
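
To make the contrast concrete, here is a minimal sketch using scikit-learn. The data is synthetic and invented for the example (it stands in for the labeled pet photos and the unlabeled social-network activity); it illustrates the two training regimes, not a production setup.

```python
# Minimal sketch: supervised vs. unsupervised learning with scikit-learn.
# The data here is synthetic; in the article's examples it would be
# labeled photos of dogs and cats, or unlabeled social-network activity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# --- Supervised: features X with known labels y (e.g. 0 = cat, 1 = dog) ---
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # toy labeling rule
clf = LogisticRegression().fit(X, y)          # learns the input -> label mapping
print("predicted label:", clf.predict([[0.5, 0.5]]))

# --- Unsupervised: the same kind of features, but no labels at all ---
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster assignments:", clusters.labels_[:10])  # structure found by the model itself
```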

Hallucinations in supervised learning models

The major cause of hallucinations in a supervised learning model is overfitting, which occurs when the model fits the training data too closely and loses the ability to generalize to new data. Returning to the example of the dog and cat pictures, if the model overfits the training images, it may memorize them instead of learning useful patterns that it can apply to new images. As a result, the model may correctly classify training images, but fail on new images that it has not seen before.
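
As a rough illustration of the gap that overfitting creates, the sketch below (scikit-learn on synthetic data, with everything invented for the example) trains an unconstrained decision tree and compares its accuracy on the data it memorized with its accuracy on data it has never seen.

```python
# Sketch: overfitting in a supervised model (synthetic data, not the dog/cat photos).
# An unconstrained decision tree memorizes the training set almost perfectly,
# but its accuracy drops on data it has never seen.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", tree.score(X_train, y_train))  # close to 1.0 (memorization)
print("test accuracy: ", tree.score(X_test, y_test))    # noticeably lower (poor generalization)
```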

Hallucinations in unsupervised learning models

In this case, the most commonly cited cause of a model “hallucinating” is the lack of sufficient information in the training data. If the model does not have enough information to fully understand the patterns in the data, it may generate incorrect information.

Also of particular importance in unsupervised learning models is the presence of noise, i.e., information that is not useful but may lead the model to detect false patterns that it will subsequently apply when processing input data.
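
A tiny sketch of how this can look in practice: a clustering algorithm such as k-means, asked to find groups in data that is nothing but noise, will still return the requested number of clusters. The data and parameters below are invented purely for the illustration.

```python
# Sketch: an unsupervised model "finding" structure in pure noise.
# KMeans will always return the number of clusters it is asked for,
# even when the data contains no real groups at all.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
noise = rng.normal(size=(300, 2))      # no hidden groups here, just noise

model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(noise)
print("cluster sizes:", np.bincount(model.labels_))  # three "patterns" that do not exist
```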

Other common causes in both types of learning

There are some problems that can occur in both cases (although some of them may be more common in one type of learning than in the other). These are the most common ones:

  • Insufficient training: depending on its function and complexity, a model may need a huge dataset for its training to be sufficient. Otherwise, it may not be able to obtain all the information necessary to correctly identify patterns.
  • Biased datasets: the datasets used in training should be as diverse as possible. Imagine, for example, a model that has to analyze photographs and identify faces. If the AI responsible for this function has only been trained using photos of people of a single ethnicity, it is very likely to make mistakes when processing images of people of other ethnicities.
  • Model complexity: if the model is too complex, it may learn to recognize patterns that are not important or do not have a direct correspondence with reality, which may lead to the generation of false information.
  • Algorithm design flaws: if there are errors in the model code, this can also cause hallucinations. These errors may be due to typographical errors, logic errors, or problems with the way the data is processed.

How can they be avoided?

At this point, you will already be fully aware that we are talking about a very complex problem and that, therefore, it has no simple solution. However, there are good practices and techniques that can substantially reduce the risk of a model experiencing hallucinations.

The first thing is, of course, to start from a large and well-curated dataset. We must bear in mind that it should be as representative and diverse as possible, keep the amount of noise to a minimum and, of course, not use the same data for the training process and for the validation process. The latter is sometimes referred to as naive cross-validation, and its result is that we will obtain figures that may look much better than those we would get from using a separate dataset. The consequence? We will think that the model works better than it really does.
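
As a rough sketch of that last point, the snippet below (scikit-learn, with synthetic data invented for the example) compares the optimistic score obtained on the training data itself with the more honest figures from a held-out split and from k-fold cross-validation.

```python
# Sketch: evaluating on data the model has never trained on.
# Scoring on the training set itself gives an optimistic picture;
# a held-out split or k-fold cross-validation gives a more honest one.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("score on training data: ", model.score(X_train, y_train))      # tends to be optimistic
print("score on held-out data: ", model.score(X_val, y_val))          # more realistic
print("5-fold cross-validation:", cross_val_score(model, X, y, cv=5).mean())
```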

Another good practice is regularization, which imposes restrictions on the complexity of the model and thereby helps avoid problems such as overfitting. There are several regularization techniques, such as LASSO (Least Absolute Shrinkage and Selection Operator), Ridge (a shrinkage method also known as L2 regularization) and dropout regularization, among others. In all cases, the aim is to reduce complexity and prevent the model from memorizing the training data instead of generalizing from it.
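
Here is a minimal sketch of what those penalties do, using the Ridge and Lasso estimators from scikit-learn on synthetic data (the dropout variant applies to neural networks and is not shown); the alpha values are arbitrary choices made for the illustration.

```python
# Sketch: L2 (Ridge) and L1 (LASSO) regularization shrinking model complexity.
# Higher alpha = stronger penalty on the coefficients; LASSO can drive some of
# them exactly to zero, effectively discarding unimportant features.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LinearRegression, Ridge

X, y = make_regression(n_samples=100, n_features=30, n_informative=5, noise=10.0, random_state=0)

plain = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)
lasso = Lasso(alpha=1.0).fit(X, y)

print("unregularized coef magnitude:", np.abs(plain.coef_).sum())
print("Ridge coef magnitude:        ", np.abs(ridge.coef_).sum())
print("LASSO non-zero coefficients: ", np.count_nonzero(lasso.coef_), "of", X.shape[1])
```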

A very interesting approach is the use of generative adversarial networks (GANs, which you may know from NVIDIA’s GauGAN2) to generate synthetic data that can be used to train the neural network to be more resistant to hallucinations. During training, the generator network and the discriminator network are pitted against each other: the generator tries to create data that fools the discriminator, while the discriminator tries to distinguish between the data created by the generator and the real data. Over time, the generator learns to produce more realistic data and the discriminator becomes more effective at detecting generated data.
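
The adversarial loop described above can be sketched very compactly in PyTorch. The toy networks and the one-dimensional “real” data below are placeholders invented for the example; they have nothing to do with GauGAN2 or any production architecture.

```python
# Sketch of the adversarial training loop described above (PyTorch).
# Tiny fully-connected networks and 1-D "real data" stand in for any real setup.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 2 + 5            # "real" samples from a toy distribution
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator step: learn to tell real samples from generated ones.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to produce samples the discriminator accepts as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```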

Another approach to reducing hallucinations in artificial intelligence is the use of explainability techniques. These techniques make it possible to understand how the neural network makes decisions and which features of the input data are most important to its learning process. By better understanding how the model works, it is possible to identify the causes of hallucinations and take measures to correct them.
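
One simple, widely available technique of this kind is permutation importance: shuffle one input feature at a time and see how much the model’s score degrades. The sketch below uses scikit-learn on synthetic data; it is just one possible example of an explainability tool, with everything invented for the illustration.

```python
# Sketch: permutation importance as a basic explainability technique.
# It measures how much the model's score drops when each input feature is
# shuffled, hinting at which features the model actually relies on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```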

In addition, specific techniques are being developed to reduce hallucinations in safety-critical applications, such as autonomous driving. In this context, it is essential that the neural network is able to accurately detect objects and situations in the environment in order to make safe decisions. To achieve this, reinforcement learning techniques are being used that allow the neural network to learn iteratively from the feedback it receives from the environment.
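
To illustrate the “learning iteratively from feedback” idea in the simplest possible terms, here is a bare-bones tabular Q-learning loop on a toy one-dimensional environment. Everything here (the environment, rewards and hyperparameters) is invented for the illustration and bears no resemblance to a real autonomous-driving stack.

```python
# Sketch: tabular Q-learning on a toy 1-D "road" (invented for illustration;
# real autonomous-driving systems are vastly more complex).
# The agent learns, from reward feedback alone, to move right toward the goal.
import numpy as np

n_states, n_actions = 5, 2          # positions 0..4; actions: 0 = left, 1 = right
q_table = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:                      # episode ends at the goal state
        if rng.random() < epsilon:                    # explore occasionally
            action = int(rng.integers(n_actions))
        else:                                         # otherwise act greedily
            action = int(np.argmax(q_table[state]))
        next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: learn from the feedback received from the environment.
        q_table[state, action] += alpha * (
            reward + gamma * q_table[next_state].max() - q_table[state, action]
        )
        state = next_state

# For each non-terminal position, the learned best action should be "right" (1).
print("learned actions per state:", np.argmax(q_table[:-1], axis=1))
```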

These are just some of the methods and best practices to avoid, or at least mitigate as much as possible, the hallucinations of artificial intelligence models. However, although there are regular advances in this direction, and although some generative artificial intelligences, such as the new Bing, always document their responses with sources (an excellent measure against hallucinations), we must remember that hallucinations can occur at any time and, consequently, we must always treat their outputs with a certain degree of caution.

This does not mean, of course, that we should rule out the use of artificial intelligence, far from it. Those responsible for the models prioritize finding solutions to this problem, and in many cases the results are reliable. The important thing, however, is not to be overconfident.
