Wait, What? AI Has Hallucinations?
- AnalyzeBrand.com
- Jan 1
- 2 min read
Updated: Jan 16

Wait, what? Every day we are bombarded with predictions that AI will replace almost any job that involves some degree of critical thinking, and now this? Yes, the new AI buzzword is hallucinations, and not the flashback kind from horror movies, but a digital version of it.
Apparently, these AI hallucinations can occur when generative AI models are trained on faulty data, lack computational power, or are fit so tightly to their training set that they fail to generalize, a problem called overfitting. Regardless of the cause, these hallucinations are invisible to the prompter, because the answer to the query is delivered with the hallmark confidence we have come to expect from ChatGPT responses. There is one catch: the answer is completely wrong. Worse yet, sometimes the answers are built on fabricated sources and data (Emsley, 2023).
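For the technically curious, here is a tiny Python sketch of what overfitting looks like. It is purely my own toy illustration, not code from any real AI model: a model with enough parameters to memorize every noisy training point answers a new question just as confidently as a simpler model, even though its number can mean very little.

```python
# Toy illustration of overfitting (my own sketch, not an actual generative AI model):
# a degree-4 polynomial has enough parameters to pass through all five noisy
# training points exactly, while a straight line only approximates them.
import numpy as np

rng = np.random.default_rng(42)

# Five noisy samples of a simple underlying trend, y = x
x_train = np.linspace(0.0, 1.0, 5)
y_train = x_train + rng.normal(scale=0.1, size=x_train.size)

overfit = np.polyfit(x_train, y_train, deg=4)   # memorizes the noise
simple = np.polyfit(x_train, y_train, deg=1)    # captures the broad trend

# The overfit model's training error is essentially zero (exact interpolation).
print("overfit training error:", np.abs(np.polyval(overfit, x_train) - y_train).max())
print("simple  training error:", np.abs(np.polyval(simple, x_train) - y_train).max())

# Outside the training range the two models can disagree sharply; the overfit
# model still answers "confidently", but its number can be far off the trend.
x_new = 1.5
print("overfit prediction at x=1.5:", np.polyval(overfit, x_new))
print("simple  prediction at x=1.5:", np.polyval(simple, x_new))
```

The near-zero training error is the point: memorizing the training data perfectly is exactly what makes the model unreliable on anything new.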
I briefly searched for some stats on the prevalence of hallucinations, and it appears that in the world-class AI models they still occur in less than 3% of queries. Yet that is a big number if you consider the ramifications, such as diagnosing an uploaded MRI scan, designing a traffic route for a driverless car, or analyzing case law (Dahl et al., 2024). In my opinion, these hallucinations are just growing pains for AI, and developers will design mitigation strategies that penalize inaccurate answers (in the computational sense) in risky domains like medicine, transport, or law.
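To make that "penalize" idea a bit more concrete, here is a hypothetical scoring rule, entirely my own sketch of how a developer might weight mistakes by domain risk; the weights, domains, and function name are made up for illustration.

```python
# Hypothetical scoring rule (illustrative only, not a real system):
# reward correct answers, and penalize confident wrong answers more heavily
# in high-stakes domains.
RISK_WEIGHTS = {"casual chat": 1.0, "law": 8.0, "transport": 9.0, "medicine": 10.0}

def score_answer(correct: bool, confidence: float, domain: str) -> float:
    """Credit correct answers; penalize confident mistakes, scaled by domain risk."""
    weight = RISK_WEIGHTS.get(domain, 1.0)
    if correct:
        return confidence              # a right, confident answer earns full credit
    return -weight * confidence        # confident mistakes in risky domains cost the most

# A confidently wrong medical answer scores far worse than a casual-chat slip.
print(score_answer(correct=False, confidence=0.95, domain="medicine"))     # -9.5
print(score_answer(correct=False, confidence=0.95, domain="casual chat"))  # -0.95
```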
To be clear, I'm not trying to poke fun at the human clinical condition of hallucination; I lost my mother to Alzheimer's and experienced her hallucinations with fear and anguish. Rather, the point I'm trying to make is about the anthropomorphic connection that technology buzzwords make to gain the attention of everyday consumers. For instance, I see a bevy of social media posts in which users debate the causes and effects of AI hallucinations and argue over whose generative AI model is the best, or in this sense the most lucid. Oh no, did I just create another buzzword, "AI lucidity"? Anyway, I expect more anthropomorphic buzzwords as AI creeps further into our lives, since model developers will need a way to relate these complex AI concepts to everyday folks.
References:
Emsley, R. (2023). ChatGPT: these are not hallucinations–they’re fabrications and falsifications. Schizophrenia, 9(1), 52.
Dahl, M., Magesh, V., Suzgun, M., & Ho, D. (2024). Large legal fictions: Profiling legal hallucinations in large language models. Journal of Legal Analysis, 16(1), 64–93. https://doi.org/10.1093/jla/laae003