
Hallucination

When an AI model confidently states something that is factually wrong or entirely fabricated.

What it actually means

Hallucination happens because LLMs are trained to generate plausible-sounding text, not to verify facts. When a model doesn't "know" something, it doesn't say "I don't know" — it fills in the gap with whatever pattern fits best, even if that pattern is fiction. This can range from subtle errors (wrong dates, slightly off statistics) to completely fabricated citations, people, and events — stated with total confidence.
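The gap-filling mechanism can be illustrated with a deliberately tiny toy: a bigram model (not a real LLM, and far simpler than one) trained on a few true sentences will happily stitch its learned word-to-word patterns into fluent sentences that were never in the training data and are not true. The corpus and output below are illustrative inventions, not real model behavior.

```python
import random

# Toy corpus of true statements. A bigram model only learns which word
# plausibly follows which; it has no notion of which sentences are true.
corpus = [
    "marie curie discovered radium",
    "albert einstein discovered relativity",
    "albert einstein won the nobel prize",
]

# Build the bigram table: word -> list of words observed after it.
bigrams = {}
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams.setdefault(a, []).append(b)

def generate(start, max_len=8, seed=0):
    """Greedily sample successor words until no continuation exists."""
    random.seed(seed)
    out = [start]
    while out[-1] in bigrams and len(out) < max_len:
        out.append(random.choice(bigrams[out[-1]]))
    return " ".join(out)

print(generate("albert"))
# Depending on the seed, this can emit a true training sentence or a
# fluent recombination like "albert einstein discovered radium":
# grammatical, confident-sounding, and false.
```

The model never "decides" to fabricate; "discovered" was simply followed by both "radium" and "relativity" in training, so either continuation looks equally plausible. LLMs operate on vastly richer statistics, but the same gap-filling dynamic produces hallucinations.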

Real-world analogy

Imagine asking a colleague for a reference. Instead of saying "I don't have one," they invent a plausible-sounding author, journal, and title on the spot, delivered with complete conviction. That's hallucination. The model isn't lying intentionally; it simply has no mechanism for telling the difference between what's real and what's plausible.

Common misconception

Hallucination is not a bug that will simply be "fixed." It's a fundamental property of how these models work. Newer models hallucinate less, but no model is immune. Always verify AI-generated facts before publishing or acting on them.
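One practical verification heuristic is consistency checking: ask the model the same question several times and treat low agreement across the samples as a warning sign, since a hallucinating model often fabricates a different answer each time. The sketch below uses hypothetical stubbed answers in place of real API calls; the function name, threshold, and sample strings are all illustrative assumptions.

```python
from collections import Counter

def consistency_flag(answers, threshold=0.5):
    """Flag a set of sampled answers as suspect when no single answer
    reaches the agreement threshold. Low agreement is a heuristic
    signal of hallucination, not proof of one."""
    top, count = Counter(answers).most_common(1)[0]
    agreement = count / len(answers)
    return {"answer": top, "agreement": agreement, "suspect": agreement < threshold}

# Hypothetical samples for a question like "Who wrote paper X?" --
# a hallucinating model tends to invent a different author each run.
samples = ["J. Smith", "A. Kumar", "J. Smith", "M. Chen", "L. Ortiz"]
print(consistency_flag(samples))
# agreement is 0.4 (2 of 5 samples agree), so the result is flagged
```

This catches only one failure mode: a model that hallucinates the same wrong answer consistently will sail through, so consistency checks complement, rather than replace, checking claims against primary sources.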