AI Hallucination

AI · Intermediate


What is AI hallucination?

Hallucination refers to a situation where an AI model (in particular, a large language model) generates outputs that are irrelevant, nonsensical, or incorrect given the input provided. This often occurs when the AI system is unsure of the answer, relies too heavily on regurgitating its training data, or lacks a proper “understanding” of the subject matter.

Here's what makes AI hallucination particularly tricky: the AI doesn't know it's wrong. It's not lying or trying to deceive you—it's just confidently filling in gaps with patterns it learned during training, even when those patterns don't apply to your specific situation. The AI doesn't necessarily know the difference between true and hallucinated output; otherwise, it simply wouldn't do it.

What are examples of AI hallucination?

AI hallucinations range from harmless mistakes to potentially dangerous misinformation:

Fake Citations

AI models often generate academic citations that look completely legitimate—proper formatting, believable author names, real journal titles—but the papers don't actually exist. This was famously demonstrated when a lawyer used ChatGPT and ended up citing fake cases in court.

Non-Existent Companies

Ask about a company that sounds plausible but doesn't exist, and the AI might give you a detailed company profile, complete with founding date, headquarters location, and business model.

Incorrect Historical Events

Models might confidently describe historical events that never happened, or give wrong dates, locations, or participants for real events. One early example: ChatGPT claiming no African country starts with the letter "K" (forgetting Kenya).

Made-Up Technical Specifications

When asked about technical details of products, AI might provide precise-sounding specifications that are completely fabricated.


Why do AI models hallucinate?

AI models hallucinate because of how they're fundamentally designed. They're pattern-matching machines that predict what should come next based on what they've seen before, not truth-seeking systems that verify facts against reality.

How Language Models Work

Language models work by analyzing sequences of words and predicting what comes next. They learn patterns from massive amounts of training data—associations between words that normally appear together. But sometimes these patterns don't accurately reflect reality.

The core issue comes down to pattern completion over truth-seeking. When you ask an AI about "the capital of Mars," it knows that questions about capitals usually have answers, so it might confidently tell you "Olympia" because that sounds like a plausible capital city name, even though Mars doesn't have a capital.
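
To make this concrete, here's a toy sketch in Python. The prompts and probabilities below are invented purely for illustration; a real model learns distributions like these over an enormous vocabulary from its training data.

```python
# Toy illustration of next-token prediction (hypothetical, hard-coded
# probabilities -- real models learn these from massive training corpora).
import random

# "The capital of X is" almost always precedes a city-like token in training
# data, so the model predicts one whether or not X actually has a capital.
next_token_probs = {
    "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "unknown": 0.03},
    "The capital of Mars is":   {"Olympia": 0.40, "Ares": 0.35, "unknown": 0.25},
}

def predict_next(prompt: str) -> str:
    """Sample the next token from the learned probability distribution."""
    tokens, weights = zip(*next_token_probs[prompt].items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(predict_next("The capital of Mars is"))  # often "Olympia" -- a hallucination
```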


This happens for several interconnected reasons related to their training:

Sometimes, just like us, models learn patterns that don't accurately reflect reality. One of the reasons this happens is pretty simple: bad training data. If our model's training data had an unusually high number of Bigfoot sightings, it's more likely to say Bigfoot is definitely real when asked. This is what researchers call overrepresentation in the training data.

The opposite problem also occurs: underrepresentation in training data. When the model has seen too few examples of a topic, it doesn't have much information to go off of and might start making stuff up to fill in the gaps.

To generate new content, language models basically try to recreate patterns they saw during training. Whether the text they generate is true or false is kinda beside the point—all that matters is that their output matches the patterns in the training data. There's no perfect formula for doing this; that's just the nature of probabilities. Even if we had a perfect training dataset that somehow captured reality with zero bias or inaccuracy, models would still hallucinate.
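
A tiny simulation shows why. Even if the model assigns a very high probability to the true completion, sampling from that distribution still occasionally returns the fluent-but-false one. The sentences and probabilities here are made up for illustration.

```python
# Sketch: sampling from even a near-perfect distribution still yields
# occasional falsehoods. Probabilities below are hypothetical.
import random

completions = [
    "Kenya borders the Indian Ocean",    # true
    "Kenya borders the Atlantic Ocean",  # false, but perfectly fluent
]
probs = [0.95, 0.05]  # the model strongly (but not exclusively) favors the truth

samples = [random.choices(completions, weights=probs, k=1)[0] for _ in range(1000)]
false_rate = samples.count(completions[1]) / len(samples)
print(f"Hallucination rate over 1,000 samples: {false_rate:.1%}")  # roughly 5%
```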


How are researchers reducing hallucination?

Most researchers say it's impossible to stop models from hallucinating completely. But they've come up with a handful of techniques for reducing hallucination, with varying levels of success. The most effective ones so far are reinforcement learning from human feedback, retrieval-augmented generation, and chain-of-thought prompting.

Let’s dive a little deeper into these techniques:

Reinforcement Learning from Human Feedback (RLHF)

RLHF has become standard practice for improving AI-generated text quality. This method teaches models to produce factually accurate and logically sound responses by incorporating feedback from human users.

The process works by having users evaluate multiple responses to the same prompt and select the best answer. The model updates its parameters to reflect these preferences. Repeated over many prompts and topics, this trains the model to give answers that are logical and accurate, rather than simply predicting the next word in a sequence.

Practically, most models you’d use off the shelf have already had some degree of RLHF integrated into their training processes.
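
Here's a rough sketch of the comparison step at the core of RLHF, using a Bradley-Terry style preference loss. The reward scores are invented, and real RLHF also trains a neural reward model and then fine-tunes the language model against it; this only shows how "humans preferred answer A over answer B" becomes a numeric training signal.

```python
# Toy preference loss: smaller when the human-preferred answer scores higher.
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry style loss: -log(sigmoid(chosen - rejected))."""
    return -math.log(1 / (1 + math.exp(-(reward_chosen - reward_rejected))))

# A human preferred the factually accurate answer over the hallucinated one.
accurate_score = 2.1       # hypothetical reward for the accurate response
hallucinated_score = 0.3   # hypothetical reward for the made-up response

print(preference_loss(accurate_score, hallucinated_score))  # ~0.15: low loss, keep it up
print(preference_loss(hallucinated_score, accurate_score))  # ~1.95: high loss, adjust
```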

Retrieval-Augmented Generation (RAG)

RAG makes AI models more accurate by giving them access to specific, verified data sources. Rather than relying solely on training data, RAG injects relevant information directly into the model's context window when you prompt it. Ideally, RAG gives your model all the information it needs to answer your question in the prompt itself, which means the model doesn't have to resort to hallucination to fill any knowledge gaps.
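
A minimal sketch of the idea, assuming a handful of made-up documents and naive keyword matching; production systems typically use embeddings and a vector database instead.

```python
# Minimal RAG sketch: retrieve relevant text, then paste it into the prompt.
documents = [
    "Acme Corp was founded in 2011 and is headquartered in Denver.",
    "Acme Corp's primary product is industrial-grade rocket skates.",
    "Globex reported record revenue in fiscal 2023.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; real systems use embedding similarity."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Stuff the retrieved context into the prompt so the model doesn't guess."""
    context = "\n".join(retrieve(question, documents))
    return (f"Answer using ONLY the context below. If the answer isn't there, "
            f"say you don't know.\n\nContext:\n{context}\n\nQuestion: {question}")

print(build_prompt("Where is Acme Corp headquartered?"))
```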

Chain-of-Thought Prompting

Chain-of-thought (CoT) prompting helps models break down complex problems into intermediate steps—like showing your work in school. Instead of jumping straight to an answer, the model generates a reasoning chain first, allowing you to verify each step.

However, models can still hallucinate within their chain of thought, sometimes making errors in intermediate steps that cascade through the rest of the reasoning. CoT prompting reduces hallucination but isn't a complete solution.
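
A quick sketch of what this looks like in practice. The question is arbitrary, and the exact wording of the instruction is just one common pattern rather than a required formula.

```python
# Sketch: the same question, with and without a chain-of-thought instruction.
question = ("A store sells pens at $3 each. If I buy 4 pens and pay with a "
            "$20 bill, how much change do I get?")

direct_prompt = f"{question}\nAnswer:"

cot_prompt = (f"{question}\n"
              "Think through the problem step by step, showing each intermediate "
              "calculation, and only then state the final answer.")

# With the CoT prompt, the reasoning (4 x $3 = $12; $20 - $12 = $8) is visible,
# so a wrong intermediate step is much easier to catch.
print(cot_prompt)
```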

How can you prevent AI hallucination?

You can't eliminate hallucination entirely, but you can significantly reduce it. Here are some practical strategies:

Instruct the model

Sort of like us, models hate to admit when they don't know something. They'd much rather make up some bullshit response instead. Strange as it sounds, sometimes these models just need to hear that it's okay not to know the answer. No, seriously—this actually makes them less likely to hallucinate. I like to add this line to the end of my prompts: "If you don't know or are unsure of your answer, say 'I don't have enough information.'"
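
As a sketch, you could wrap that instruction in a small helper; the function name and the company in the example are made up.

```python
# Sketch: append an "it's okay to say you don't know" escape hatch to any prompt.
def with_uncertainty_escape(prompt: str) -> str:
    return (prompt + "\n\nIf you don't know or are unsure of your answer, "
            "say 'I don't have enough information.'")

# "TinyStartupCo" is fictional; without the added line, a model is more likely
# to invent a plausible-sounding figure than to admit uncertainty.
print(with_uncertainty_escape("What was TinyStartupCo's revenue in 2019?"))
```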

Ask for sources

Request that models back up claims with sources. This makes fact-checking easier, though be aware models can also hallucinate citations. Which is why you should also check the citations! Never trust AI output for critical decisions without verification. Cross-check facts, especially dates, statistics, and specific claims.
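
Here's a sketch of a source-requesting prompt; the wording and the topic are placeholders, this is one reasonable pattern rather than a guaranteed fix, and the citations that come back still need checking.

```python
# Sketch: a prompt template that asks for verifiable sources alongside claims.
source_prompt = (
    "Summarize the key research findings on the topic below.\n"
    "For every factual claim, cite a specific source (title, author, year, and "
    "URL if available) so I can verify it.\n"
    "Do not include any claim you cannot attribute to a source.\n\n"
    "Topic: the effect of remote work on productivity"
)
print(source_prompt)
```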

Use web search tools

Many AI models can now search the web for current information, helping fill gaps in their training data and giving you a list of visited websites to verify. Anything that forces the model to tell you specific sources for its information gets you closer to verifying responses.

Use specific, factual prompts

Instead of "Tell me about Company X," try "What is Company X's primary business model according to their latest annual report?" The specificity will help the model find more relevant, real information from its training set instead of making it up.

Include maximum context

Give the model as much relevant information as possible—upload documents, paste in reference materials, or provide detailed background. When models have access to specific sources rather than relying solely on their training data, they're much less likely to make things up.


Frequently Asked Questions About AI Hallucinations

Are some AI models better at avoiding hallucination?

Yeah, but they all do it to some degree. Newer models trained with techniques like Constitutional AI tend to hallucinate less, and some are better at saying "I don't know" when they should. But even the best models can confidently state complete nonsense, so you always need to verify important stuff yourself.

Will AI hallucination ever be completely solved?

Probably not entirely. Some degree of "educated guessing" is fundamental to how these models work—it's how they handle novel situations. But we're getting much better at teaching models when to be uncertain and grounding them in real-time data sources. The goal isn't perfection; it's making hallucinations rare enough that AI becomes reliably useful.

How do companies deal with hallucination in production?

Lots of safeguards: human review for important outputs, confidence scoring systems, fact-checking against verified databases, and very clear disclaimers about AI-generated content. The higher the stakes, the more verification layers you add. Nobody's just letting AI make critical business decisions without human oversight. At least we hope.

Why don't AI models just say "I don't know"?

Because AI models are trained to always provide complete responses. They learn patterns from examples where questions are followed by answers, so they default to answering even when they're uncertain. The training process doesn't naturally teach them when to express doubt or admit knowledge gaps.

This is changing as developers realize that saying "I'm not sure about that" is actually more helpful than confidently making stuff up. Modern models can be instructed to acknowledge uncertainty, but it requires explicit prompting—like adding "If you don't know or are unsure of your answer, say 'I don't have enough information'" to your requests.

Read the full post in the Knowledge Base: Why do models hallucinate?

AI models inherently make stuff up. What can we do about it?

Related terms

ChatGPT

Context Window

Inference

LLM

Loss Function

Machine Learning
