3.3. AI hallucinations
Hallucinations in generative artificial intelligence (GAI) systems, such as language models or image generators, are cases where the system produces information that is incorrect, inconsistent or meaningless relative to the input provided or to reasonable expectations. The phenomenon has several causes, including limitations in the training data, algorithmic errors and failure to understand context. For example:
- Language models:
- Fictional information: a language model could generate an incorrect date for a historical event if it has not been trained on accurate historical data. For example, if a user asks, “When did World War II start?”, the model might incorrectly answer “1942” instead of “1939”.
- Invented details: in text generation, a model may invent names, places or facts that do not exist. For example, while writing a story the model might introduce a character described as the “President of the Republic of Fantasy”, a country that does not exist.
- Image generators:
- Inconsistent visuals: when creating images from textual descriptions, a system can generate a picture of a “bird-winged flying cat”, the result of the AI trying to merge the characteristics of different animals into a single image.
- Impossible combinations: a model can create an image of a landscape with elements that usually do not coexist, such as a snow-covered beach.
- Predictive modeling:
- Wrong predictions: in predictive models, such as those used in finance, a model can hallucinate exaggerated results, such as predicting a 500% increase in an IBEX 35 company’s share price without any realistic basis, potentially due to data biases or overfitting (a simple plausibility check for such outputs is sketched after this list).
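As a rough illustration of a control for the predictive-modeling case, the sketch below flags predictions that fall outside a plausible range before they reach a user. The function name `flag_implausible_predictions` and the 50% threshold are assumptions chosen for illustration; a real system would derive its bounds from historical volatility or domain rules rather than a fixed constant.

```python
# Minimal sketch of a post-hoc plausibility check for model predictions.
# The threshold is an illustrative assumption, not a recommended value.

def flag_implausible_predictions(predictions, max_relative_change=0.5):
    """Return indices of predictions whose relative change exceeds the bound.

    predictions: list of (current_price, predicted_price) tuples.
    max_relative_change: e.g. 0.5 flags any predicted move larger than 50%.
    """
    flagged = []
    for i, (current, predicted) in enumerate(predictions):
        relative_change = abs(predicted - current) / current
        if relative_change > max_relative_change:
            flagged.append(i)
    return flagged


if __name__ == "__main__":
    # A predicted jump from 100 to 600 (a 500% increase) is flagged; a 10% move is not.
    sample = [(100.0, 600.0), (100.0, 110.0)]
    print(flag_implausible_predictions(sample))  # -> [0]
```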
These examples illustrate how hallucinations can manifest in ways that make the results generated by GAI systems less useful, unreliable or outright misleading. It is important that developers of GAI systems implement controls and validations to minimize these issues and improve the reliability of their models.
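One minimal form of such a control for language models is to cross-check generated factual claims against a trusted reference before returning them. The sketch below is only a toy example under stated assumptions: `generate_answer()` is a hypothetical stand-in for a model call, and `TRUSTED_FACTS` is a tiny hand-built table; a production system would query a curated knowledge base and handle paraphrased questions and partial matches far more carefully.

```python
# Minimal sketch of an output-validation control for a language model.
# generate_answer() and TRUSTED_FACTS are hypothetical placeholders for illustration.

TRUSTED_FACTS = {
    "When did World War II start?": "1939",
}


def generate_answer(question: str) -> str:
    # Hypothetical stand-in for a call to a language model;
    # it deliberately returns the hallucinated date from the example above.
    return "1942"


def validated_answer(question: str) -> str:
    answer = generate_answer(question)
    expected = TRUSTED_FACTS.get(question)
    if expected is not None and expected not in answer:
        # The generated answer contradicts the trusted reference:
        # return the verified fact instead (or escalate for human review).
        return expected
    return answer


if __name__ == "__main__":
    print(validated_answer("When did World War II start?"))  # -> 1939
```

Even a simple check of this kind turns a silent hallucination into something that can be corrected or escalated, which is the essence of the controls and validations described above.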