Hallucination
When an AI model generates plausible-sounding but factually incorrect or fabricated information
What is a Hallucination?
In the context of AI, a hallucination occurs when a language model produces text that sounds confident and convincing but is actually wrong, made up, or unsupported by any real source. The term borrows from psychology, where a hallucination means perceiving something that isn't there.
Imagine asking a well-spoken friend for a book recommendation. They confidently name a title, an author, and even a publication year -- but when you search for it, the book doesn't exist. Your friend wasn't lying; they genuinely believed they were being helpful. LLMs behave similarly: they generate the most statistically likely next words, and sometimes that leads to fabricated facts delivered with full confidence.
Why Does It Happen?
LLMs do not "look things up." They predict text based on patterns learned during training. When a question falls outside what the model learned reliably, it fills the gap with plausible-sounding guesses rather than saying "I don't know."
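The "most statistically likely next word" idea can be made concrete with a toy bigram model, the simplest next-word predictor. This is a minimal sketch with a made-up corpus, not how a real LLM works internally, but it shows the key point: the model tracks frequency, not truth.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus -- a real LLM trains on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word. The model has
    no notion of whether the continuation is true -- only of how
    often it appeared in training data."""
    counts = follows.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # -> "cat", the most frequent follower of "the"
```

Scaled up to billions of parameters, this same "pick a likely continuation" behavior is what produces fluent but fabricated facts: a plausible word sequence need not be a true one.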
How Can We Reduce Hallucinations?
- RAG (Retrieval-Augmented Generation) -- grounding responses in retrieved documents.
- Fine-tuning on high-quality data -- improving factual accuracy in the target domain.
- Human feedback (RLHF) -- training the model to refuse or hedge when uncertain.
- Fact-checking pipelines -- adding a verification layer after generation.
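The first technique, RAG, can be sketched in a few lines. This is an illustrative toy, assuming a tiny in-memory document list and naive word-overlap scoring; a production system would use embedding similarity for retrieval and pass the prompt to a real LLM.

```python
# Hypothetical document store for illustration.
documents = [
    "Python was created by Guido van Rossum and first released in 1991.",
    "Rust emphasizes memory safety without garbage collection.",
]

def retrieve(question, docs):
    """Return the document sharing the most words with the question
    (a stand-in for real embedding-based similarity search)."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_grounded_prompt(question):
    """Prepend retrieved evidence so the model answers from the
    supplied context instead of guessing from memory alone."""
    context = retrieve(question, documents)
    return (f"Context: {context}\n"
            f"Question: {question}\n"
            f"Answer using only the context above.")

print(build_grounded_prompt("When was Python first released?"))
```

The key design choice is the final instruction: by telling the model to answer only from the retrieved context, we constrain generation to verifiable text rather than the model's parametric memory.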
Why Does It Matter?
Hallucinations are one of the biggest barriers to deploying LLMs in high-stakes domains like medicine, law, and finance, where accuracy is non-negotiable.