Host: Hello everyone! Welcome to today's interview. Recently, there's been quite a buzz about AI "hallucinations," especially with DeepSeek-R1, which seems to have a higher hallucination rate than its predecessor, DeepSeek-V3. Today, we're joined by Dr. Li, a senior AI researcher. Welcome, Dr. Li!
Dr. Li: Hello, host! Hello, everyone!
Host: Let's start with the million-dollar question: Why do large language models "hallucinate"? Can you break it down for us in plain English?
Dr. Li: You see, large language models are like super-powered conversation completers. Give them the first half of a sequence, say, a question, and they'll predict the second half (say, an answer) based on their massive knowledge network. They learn like our brains do – they can't remember everything word-for-word, so they compress and generalize, grabbing the gist and finding patterns.
Here's a fun contrast: Ask them "How tall is Yao Ming?" and they'll nail it, because that knowledge is so famous it's practically carved in stone in their memory (that is, in the model's parameter weights). But ask them "How tall is Old Wang from next door?" and they're stumped because they've never met Old Wang! Here's the kicker, though – they won't just say "I don't know." So what do they do? They "make up" a reasonable height based on what they know about the range of human heights. That's a hallucination for you!
Host: Wow, that's some impressive guesswork! But isn't this kind of making things up pretty problematic?
Dr. Li: Not necessarily! In a way, hallucination is imagination (for better or worse) – it's where creativity lies! Think about it: all those great literary works, artistic masterpieces – aren't they all flights of fancy, products of imagination? If everything had to match reality closely, art would just be photography, and where's the fun in that?
You know, Yuval Harari makes a fascinating point in "Sapiens" – humans became Earth's dominant species precisely because we could "tell stories," creating myths, religions, nations, and money – things that don't physically exist. These are all "hallucinations," but they're the driving force behind civilization!
Host: When you put it that way, hallucinations sound pretty important! But let's talk about DeepSeek-R1. Its hallucination issue seems quite serious.
Dr. Li: Indeed, it is! The academic consensus used to follow OpenAI's view that reinforced reasoning would significantly reduce hallucinations. I remember discussing this with a head honcho at an LLM unicorn who was particularly excited about reasoning's potential to curb hallucinations. But R1's performance threw us a curveball!
According to Vectara's tests, R1's hallucination rate is roughly 3.7 times that of its foundation model V3 – 14.3% compared to 3.9%. This definitely correlates with its prolonged "Chain of Thought" (CoT) enabled by reinforcement learning for reasoning. R1 is absolutely brilliant at reasoning, math and coding, as well as poetry and storytelling, but this currently comes with the "side effect" of increased hallucinations in tasks like translation and summarization.
More specifically, there are several reasons for R1's increased hallucinations.
First, the standard hallucination tests use summarization tasks, something base models are already pretty good at. In this case, reinforcement learning can backfire – it's like using a cannon to swat a fly!
Second, R1's reinforced reasoning chains weren't specifically optimized for straightforward tasks like summarization, translation, or news writing that demand strict factual accuracy. Instead, it tries to add various layers of thinking to every task. Looking at its transparent CoT printout, we see it tirelessly analyzing even simple instructions from multiple angles. This overcomplication of simple tasks can lead to deviations and hallucinations.
During R1's reinforcement learning for NLP-related tasks, it seems the model was rewarded more heavily for creativity, leading it to be more imaginative – and consequently more prone to straying from facts. For mathematical and coding tasks, R1's supervision came from gold standards (test answers or code test cases). But for humanities tasks, they used V3 or V3's reward model to judge quality, and the current system seems to clearly favor creativity.
Moreover, user feedback typically tends to focus on and encourage creativity. Most people aren't sensitive to hallucinations, especially when they're wrapped in the model's smooth, fluent language. For most frontline developers, this kind of user feedback naturally pushes them to enhance creativity rather than tackle the thorny problem of hallucinations.
Host: So, you're saying that R1's hallucination problem is rooted in its over-enthusiastic reasoning? What's the real relationship between reinforced reasoning ability and hallucinations?
Dr. Li: It's still a puzzle – there doesn't seem to be a simple correlation. Look at R1, a leading reasoning model, versus Claude 3.5 Sonnet, a top non-reasoning model. Surprisingly, Sonnet still has a higher hallucination rate than R1! But when we compare R1 to its own base model V3, we see clearly that adding reasoning significantly increased hallucinations.
It may well be about the model's "personality." R1, with its powerful reinforcement learning, loves "divergent thinking." Give it a simple prompt, and it'll spin out ideas like there's no tomorrow – its CoTs could run on like crazy! This suggests that while R1 was powering up its creativity, it inevitably amplified creativity's twin: hallucination.
As a model that excels in both STEM and humanities, R1 performs differently across tasks. In mathematics and coding, where more rigorous reasoning is required, there's little room for hallucination. But in language and creative tasks, especially in the summarization tests, hallucinations become more prominent. It's largely a side effect of R1's supercharged linguistic creativity.
Technically speaking, R1 automatically adds lengthy CoTs to simple user instructions, essentially complicating straightforward tasks. Its CoTs (like the internal monologue of an agent following instructions) change the conditional part of the autoregressive probability model before the answer is generated, which naturally affects the final output. Compare:
V3: query → answer
R1: query + CoT → answer
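In rough probabilistic terms (a back-of-the-envelope sketch in standard autoregressive notation, not a formula from the R1 paper itself): V3 samples from $P_{V3}(\text{answer} \mid \text{query})$, while R1 samples from $P_{R1}(\text{answer} \mid \text{query}, \text{CoT})$, where $\text{CoT} \sim P_{R1}(\cdot \mid \text{query})$. Every token the CoT adds to the context shifts the distribution the final answer is sampled from, and for faithfulness-critical tasks that shift is exactly where drift from the source can creep in.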
For tasks that V3 already handles well, like summarization or translation, any lengthy CoT guidance might lead to deviation or embellishment, creating fertile ground for hallucinations.
Host: So where do R1's hallucinations mainly occur?
Dr. Li: Think of R1's abilities as split between "arts" and "sciences." In "science" areas like math and coding, its logic is fairly strong and hallucinations are relatively rare. But in "arts" areas like language, hallucinations become more noticeable.
R1's most impressive achievement compared to the first LLM reasoning model O1 is successfully extending mathematical and coding reasoning capabilities into creative writing, especially in Chinese. The internet is full of R1's brilliant literary works. In terms of wordplay and literary prowess, it clearly surpasses 99% of humans – even graduate students in literature and classical Chinese professors sing its praises.
But watch what happens when you ask it to do a simple summary – it can't help but "get creative," often "inventing" details not present in the original text. It's like its "arts" abilities are too powerful, a case of "too much of a good thing."
Host: That's an interesting perspective. Do all language tasks require creativity?
Dr. Li: Language tasks actually fall into two categories: ones that need high creativity, like poetry and fiction writing, and ones that demand high factual accuracy, like news reporting, translation, or summarization. R1 excels at the former, which was likely the development team's focus, but this currently creates side effects in the latter.
It reminds me of the old Chinese saying about translation needing to be "faithful, expressive, and elegant" – achieving all three has always been challenging. We see many examples where elegance is prioritized over faithfulness, like the use of hyperbole in literary works. We also see the opposite, like Lu Xun's advocacy for so-called "rigid translation."
Interestingly, humans have always had double standards here, but we have a mental switch we can flip at will. When watching movies or reading novels, we flip towards creativity and don't fuss about factual accuracy. But switch to news channels, and we have zero tolerance for falsehoods.
Host: People tend to believe content that appears logically coherent and detailed, so the potential harm from AI hallucinations could be significant. What should we ordinary folks do about AI hallucinations?
Dr. Li: While many people are starting to notice and become wary of these hallucinations, most are still mesmerized by the models' creative brilliance. We need to raise public awareness of AI hallucinations. A few practical suggestions:
Stay Alert: Don't take everything the model says for granted, especially factual claims. Hallucinations most commonly occur with names, places, times, and other specific entities or numerical data.
Cross-Verify: For important details, check original sources online or consult experts to see if the claims align.
Guide the Model: When asking questions, add constraints like "please stay faithful to the original text" or "please verify facts." This can sometimes help reduce hallucinations (see the small sketch after this list).
Embrace Creativity: If you're looking for inspiration or creative ideas, model hallucinations can be a delightful surprise!
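As a toy illustration of the "guide the model" tip, here is a minimal sketch; it assumes nothing about any particular vendor's API, and the helper names are made up for illustration:

```python
# Toy sketch: prepend explicit faithfulness constraints to a summarization
# request before sending it to whatever chat endpoint you use.
# Hypothetical helper, not any specific vendor's API.

FAITHFUL_PREFIX = (
    "Please stay strictly faithful to the original text. "
    "Do not add names, numbers, or details that are not in it. "
    "If something is unclear, say you are unsure instead of guessing.\n\n"
)

def build_summary_prompt(source_text: str) -> str:
    """Wrap a source document in constraints that discourage embellishment."""
    return FAITHFUL_PREFIX + "Summarize the following text:\n\n" + source_text

if __name__ == "__main__":
    print(build_summary_prompt("<paste the article to be summarized here>"))
```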
Think of AI hallucinations as "possibilities in parallel universes." What the model makes up might not be true in our world, but could be true in another! It's like how novelists write fiction – it won't stand up to fact-checking, but it carries a kind of "artistic truth." Just as novels arise from life but transcend it, AI arises from data but transcends it. AI compresses data into a network of knowledge and common sense, not necessarily faithful to individual facts – that's what databases are for.
Host: This reminds me of what people often say: AI models aren't just "talking nonsense" – they're "talking nonsense seriously"!
Dr. Li: Haha, that's exactly it! AI hallucinations are its "educated guesses," based on the massive knowledge and patterns it has learned. The hallucinations are by no means completely random – they have internal constraints that make them seamless and convincing, but also more deceptive. Newcomers to AI need to be especially careful not to take everything at face value.
For regular users, it helps to understand the nature of hallucinations. For example, when asking about well-documented facts like "How long is the Yangtze River?" models won't make mistakes, because these facts are firmly encoded in their parameters. But ask about an obscure creek or a fictional river, and the model will activate its "reasonable completion" mechanism and make something up.
Host: Following your logic, human language itself is a breeding ground for hallucinations.
Dr. Li: You could say that. Language enabled humans to create things that do not exist in the physical world, such as myths, religions, states, corporations, currency, and abstract concepts like ideals and beliefs. Harari emphasizes in "Sapiens" that storytelling (typical hallucination, in a sense) was fundamental to civilization: language gave humans the ability to tell stories, and those hallucinations catalyzed civilization. Humans are the only entities capable of 'lying' (besides LLMs).
Host: What about the future? Is there a way to maintain creativity while reducing hallucinations?
Dr. Li: This is definitely one of the "ultimate challenges" in AI! People are working on various solutions, including:
More Refined Training: During training, treat different types of tasks differently, teaching the model when to be strict and when to be creative.
Task-Specific Fine-tuning/Reinforcement Learning can help balance this contradiction. Tasks like summarization, paraphrasing, translation, and reporting need special care because they require both some creativity (like style) and strict factual accuracy.
Specifically, R1's training pipeline has four stages: fine-tuning 1, reinforcement 1, fine-tuning 2, and reinforcement 2. Reinforcement 2 mainly focuses on human preference alignment. Currently, this process seems to favor creativity over faithfulness, which could be rebalanced later. Perhaps more importantly, in stage three (i.e. fine-tuning 2), we could strengthen constraints for different tasks – for example, increasing supervised data for summarization to encourage faithful, straightforward results.
Routing: In the future, there will be a "model dispatcher" that assigns different models based on task type. Simple tasks could go to V3 or use tools, while complex tasks requiring deeper thinking go to R1.
For instance, arithmetic tasks should just use simple code calculations, equivalent to using a calculator. That's not how it works now – yesterday I tested a nine-digit multiplication, and R1 spent over three minutes thinking, producing CoT that could stretch down the street, breaking down the reasoning step by step. While the answer was correct, using such computationally expensive CoT for arithmetic instead of a simple function call is unreasonable. A one-line calculation code would do the job – no need to waste so much computing resource and tokens on explicit reasoning. These are foreseeable routing improvements, especially in the age of AI agents which can use all kinds of tools or applications. R1's CoT does not need to handle everything – besides hallucinations, compute-burning CoT is also not environmentally friendly.
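To make that routing idea concrete, here's a minimal sketch. It's a purely hypothetical design, not an existing DeepSeek feature; `call_reasoning_model` just stands in for an expensive R1-style call, and the routing rule is deliberately crude:

```python
# Minimal sketch of a "model dispatcher": send pure arithmetic to a cheap,
# exact calculator path and reserve the reasoning model for tasks that
# genuinely need long chains of thought. Hypothetical design, for illustration.

import re

def multiply_tool(expression: str) -> str:
    """Exact multiplication via one line of code -- no CoT required."""
    a, b = (int(n) for n in re.findall(r"\d+", expression))
    return str(a * b)

def call_reasoning_model(task: str) -> str:
    """Placeholder for an expensive R1-style reasoning-model call."""
    return f"[long CoT + answer for: {task}]"

def dispatch(task: str) -> str:
    # Crude routing rule for illustration: "number x number" goes to the
    # calculator; everything else goes to the reasoning model.
    if re.fullmatch(r"\s*\d+\s*[x*×]\s*\d+\s*", task):
        return multiply_tool(task)
    return call_reasoning_model(task)

if __name__ == "__main__":
    print(dispatch("123456789 * 987654321"))            # exact, instant
    print(dispatch("Summarize this news article ..."))  # goes to the model
```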
Host: Thank you, Dr. Li, for this fascinating discussion! Today's interview has given us a much deeper understanding of AI hallucinations.
Dr. Li: My pleasure! It's been great chatting with you!