MeanFlow: A Paradigm Shift for AI Image Generation

The latest work from Kaiming He's team: MeanFlow needs no pretraining and no distillation, and reaches SOTA performance with a single function evaluation (1-NFE), opening a new road to efficient, high-quality image generation.

MeanFlow's core idea is to introduce a "mean velocity field" that directly models the transformation path between data points and noise points, freeing generation from the multi-step iteration that conventional diffusion and flow-matching methods depend on. On ImageNet 256x256, the method achieves a striking FID of 3.43 with a single function evaluation (1-NFE).

Core Concepts

MeanFlow's innovation is rooted in a close reading of the basic principles of the generative process. By introducing the "mean velocity field" and the "MeanFlow identity", it puts one-step generation on a solid theoretical footing and resolves several pain points of earlier methods.

Mean Velocity Field

Conventional flow matching models the instantaneous velocity field $v(z_t, t)$: the rate of change of the state $z_t$ at a particular time $t$. MeanFlow instead introduces the mean velocity field $u(z_t, r, t)$.

The mean velocity is defined as the average rate of displacement over the interval $[r, t]$:

$$u(z_t, r, t) = \frac{z_t - z_r}{t - r} = \frac{1}{t - r}\int_r^t v(z_s, s)\,ds$$

Here $z_s$ is the state at time $s$. The definition shows that the mean velocity depends not only on the current state and time but also on a reference start time $r$. By modelling the mean velocity directly, the network learns to predict the "average path" over a whole interval rather than an instantaneous direction.

The MeanFlow Identity

From the definition of mean velocity, the authors derive the core mathematical relation connecting the mean velocity $u$ and the instantaneous velocity $v$, the MeanFlow identity:

$$v(z_t, t) - u(z_t, r, t) = (t - r)\left(\frac{\partial u(z_t, r, t)}{\partial t} + \nabla_{z_t} u(z_t, r, t)\, v(z_t, t)\right)$$

This identity provides the theoretical basis for training: the loss is designed to drive the network toward satisfying this intrinsic relation, with no extra heuristics. Because a well-defined target velocity field exists, the theoretical optimum is independent of the particular network architecture, which helps keep training robust and stable.
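Below is a minimal training sketch of this objective as described above, written against PyTorch's torch.func.jvp. The names and the time convention (t = 0 noise, t = 1 data, linear path) follow this article, not the authors' released code, so treat it as illustrative only.

```python
import torch
from torch.func import jvp

def meanflow_loss(u_net, x):                       # x: (batch, dim) data
    z0 = torch.randn_like(x)                       # noise at t = 0
    t = torch.rand(x.size(0), 1)                   # current time in (0, 1)
    r = t * torch.rand(x.size(0), 1)               # reference time, r <= t
    z = (1 - t) * z0 + t * x                       # state on the linear path
    v = x - z0                                     # instantaneous velocity

    # One JVP with tangents (v, 0, 1) yields the total derivative
    # du/dt = du/dt_partial + (grad_z u) v used in the MeanFlow identity.
    u, dudt = jvp(u_net, (z, r, t),
                  (v, torch.zeros_like(r), torch.ones_like(t)))

    u_target = (v - (t - r) * dudt).detach()       # stop-gradient target
    return ((u - u_target) ** 2).mean()
```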

How Is One-Step Generation Achieved?

With a neural network $u_\theta$ trained to model the mean velocity $u$ directly, generation from initial noise $z_0$ (time $t=0$) to a target image $z_1$ (time $t=1$) collapses to a single step:

$$z_1 = z_0 + u_\theta(z_0, 0, 1)\cdot(1 - 0)$$

This means inference requires no explicit time integration, the step that methods modelling instantaneous velocity cannot avoid. By learning the mean velocity, MeanFlow implicitly absorbs the complex nonlinearity of the instantaneous velocity field (its "curved trajectories") and avoids the accumulated discretisation error of multi-step ODE solving.
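Under the same assumed convention, 1-NFE sampling is then a single network call (u_net, batch_size, and dim as in the sketch above):

```python
z0 = torch.randn(batch_size, dim)              # noise at t = 0
zeros = torch.zeros(batch_size, 1)
ones = torch.ones(batch_size, 1)
x_gen = z0 + u_net(z0, zeros, ones)            # z1 = z0 + u * (1 - 0)
```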

SOTA Performance

MeanFlow achieves state-of-the-art (SOTA) or highly competitive results on several standard image-generation benchmarks, with especially large gains in one-step and few-step settings.

ImageNet 256x256 (Class-Conditional Generation)

On ImageNet 256x256, MeanFlow performs remarkably well. With a single function evaluation (1-NFE) it reaches an FID of 3.43, a 50-70% relative improvement over the previous best methods of its kind. At 2-NFE the FID drops further to 2.20, already rivalling many multi-step methods.

The table below compares MeanFlow with other models on ImageNet 256x256 (data from Table 2 of the paper):

Method | NFE | FID | Backbone/Size | Guidance
MeanFlow (MF) | 1 | 3.43 | XL/2-scale | -
MeanFlow (MF) | 2 | 2.20 | XL/2-scale | -
Shortcut | 1 | 10.60 | 1.0B | -
IMM (with guidance) | 2 | 7.77 | 1.0B | -
iCT | 1 | >10 (estimated from figure) | 1.0B | -
Representative multi-step SOTA | ~250x2 | <2.20 | XL/2-scale | usually

CIFAR-10 (Unconditional Generation)

On CIFAR-10 (32x32), MeanFlow is equally strong: with 1-NFE sampling it reaches an FID-50K of 1.95. Notably, MeanFlow achieves this without any preconditioner, while the compared methods all use EDM-style preconditioners.

The table below compares MeanFlow with other models on CIFAR-10 (data from Table 3 of the paper):

Method | FID-50K | Architecture
MeanFlow (MF) | 1.95 | U-Net
EDM | 2.01 | EDM-style U-Net
Consistency Models (CM) | 2.05 | EDM-style U-Net

Innovative CFG Integration

Classifier-Free Guidance (CFG) is a key technique for improving the quality of conditional generation, but the conventional recipe doubles the sampling cost. MeanFlow resolves this neatly.

CFG as Part of the Ground-Truth Velocity Field

MeanFlow treats CFG as a property of the underlying "ground-truth velocity field" to be modelled, rather than something composed on the fly at sampling time. The authors define a new, guided ground-truth instantaneous velocity field $v_{\mathrm{cfg}}$:

$$v_{\mathrm{cfg}}(z_t, c, t) = w\cdot v(z_t, c, t) + (1 - w)\cdot v(z_t, \varnothing, t)$$

where $c$ is the class condition and $w$ the guidance scale. The network $u_{\mathrm{cfg},\theta}$ is trained to directly predict the mean velocity field induced by this $v_{\mathrm{cfg}}$.

Guidance While Staying at 1-NFE

Because the network directly learns the mean velocity $u_{\mathrm{cfg}}$ with the guidance already folded in, sampling requires no extra linear combination: a single network call performs guided one-step generation. MeanFlow thus keeps the benefit of CFG while preserving true 1-NFE sampling, combining efficiency with quality.
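A small illustrative helper for the construction above (the names are mine, not the paper's): the guided velocity simply replaces v when forming the training target, so inference still costs one call.

```python
def guided_velocity(v_cond, v_uncond, w):
    # v_cfg = w * v(z_t, c, t) + (1 - w) * v(z_t, empty, t)
    return w * v_cond + (1 - w) * v_uncond
```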

Significance and Impact

MeanFlow is more than one technical iteration; it could have far-reaching effects across generative AI, opening new research directions and application paradigms.

A leap in performance, a rethink of efficiency

MeanFlow sharply narrows the gap between one-step and multi-step diffusion/flow models, showing that efficient generative models can reach top-tier quality.

Challenging convention, simplifying the paradigm

Its train-from-scratch nature, with no pretraining and no distillation, greatly simplifies the development of high-performance generative models and may challenge the dominance of multi-step models.

Lowering the bar, broadening access

Lower compute and development costs put SOTA-level generation within reach of many more researchers and developers, enabling new applications.

Inspiring the future, rethinking foundations

MeanFlow's success may prompt the community to revisit the theoretical foundations of generative modelling and to explore more fundamental, more efficient formulations.

About the Research

This pioneering study, MeanFlow: Efficient Flow Matching with Mean Velocity Fields, was carried out by:

Zhengyang Geng, Mingyang Deng, Xingjian Bai, J. Zico Kolter, and Kaiming He

The authors are affiliated with Carnegie Mellon University (CMU) and the Massachusetts Institute of Technology (MIT).

Read the full paper (arXiv:2505.13447)

Q&A on NLP: Chapter I Natural Language and Linguistic Form

Guo: Professor Li, to ease into the discussion, let us begin with some foundational concepts. What exactly do we mean by natural language? What falls under the scope of the field, and where does it sit within the broader discipline of Artificial Intelligence (AI)?

Li:  Natural language refers to the everyday languages we humans speak—English, Russian, Japanese, Chinese, and so on;  in other words,  human language writ large.  It is distinct from computer languages.  Because human conversation is rife with ellipsis and ambiguity,  processing natural language on a computer poses formidable challenges.

Within AI, natural language is defined both as a problem domain and as the object we wish to manipulate.  Natural Language Processing (NLP) is an essential branch of AI, and parsing is its core technology—the crucial gateway to Natural Language Understanding (NLU). Parsing will therefore recur throughout this book.

Computational linguistics is the interdisciplinary field at the intersection of computer science and linguistics.  One might say that computational linguistics supplies the scientific foundations, whereas NLP represents the applied layer.

AI is often divided into perceptual intelligence and cognitive intelligence.  The former includes image recognition and speech processing.  Breakthroughs in big data and deep learning have allowed perceptual intelligence to reach—and in some cases surpass—human‑expert performance.  Cognitive intelligence, whose core is natural language understanding, is widely regarded as the crown jewel of AI.  Bridging the gap from perception to cognition is the greatest challenge—and opportunity—facing the field today.

The rationalist tradition formalises expert knowledge using symbolic logic to simulate human intellectual tasks.  In NLP, the classical counterpart to machine‑learning models comprises linguist‑crafted grammar rules, collectively called a computational grammar.  A system built atop such grammars is known as a rule‑based system. The grammar school decomposes linguistic phenomena with surgical precision, aiming at a deep structural analysis.  Rule‑based parsing is transparent and interpretable—much like the diagramming exercises once taught in a language school.

Figure 1‑1 sketches the architecture of a natural‑language parser core engine.  Without dwelling on minutiae, note that every major module—from shallow parsing through deep parsing—can, in principle, be realised via interpretable symbolic logic encoded as a computational grammar.  Through successive passes, the bewildering diversity of natural language is reduced first to syntactic relations and then to logical‑semantic structure.  Since Chomsky drew the distinction between surface structure and deep structure in the late 1950s, this layered view has become orthodoxy within linguistics.

Guo: These days everyone venerates neural networks and deep learning. Does the grammar school still have room to survive? Rationalism seems almost voiceless in current NLP scholarship. How should we interpret this history and the present trend?

Li:  Roughly thirty years ago, the empiricist school of machine learning began its ascent, fuelled by abundant data and ever‑cheaper computation.  In recent years, deep neural networks have achieved spectacular success across many AI tasks.  Their triumph reflects not only algorithmic innovation but also today’s unprecedented volumes of data and compute.

By contrast, the rationalist programme of symbolic logic has waned.  After a brief renaissance twenty years ago—centred on unification‑based phrase‑structure grammars (PSGs)—computational grammar gradually retreated from the mainstream.  Many factors contributed; among them, Noam Chomsky’s prolonged negative impact warrants sober reflection.

History reveals a pendulum swing between empiricism and rationalism. Kenneth Church famously illustrated the motion in his article A Pendulum Swung Too Far (Figure 1-2).

For three decades, the pendulum has tilted toward empiricism (black dots in Figure 1‑2); deep learning still commands the spotlight. Rationalism, though innovating quietly, is not yet strong enough to compete head‑to‑head.  When one paradigm dominates, the other naturally fades from view.

Guo:  I sense some conceptual confusion both inside and outside the field.  Deep learning, originally just one empiricist technique, has become synonymous with AI and NLP for many observers.  If its revolution sweeps every corner of AI, will we still see a rationalist comeback at all? As Professor Church warns, the pendulum may already have swung too far.

Li:  These are two distinct philosophies with complementary strengths and weaknesses; neither can obliterate the other.

While the current empiricist monoculture has understandable causes, it is unhealthy in the long run.  The two schools both compete and synergise.  Veterans like Church continue to caution against over‑reliance on empiricism, and new scholars are probing deep integrations of the two methodologies to crack the hardest problems in NLU.

Make no mistake: today's AI boom largely rests on deep‑learning breakthroughs, especially in image recognition, speech, and machine translation.  Yet deep learning inherits a fundamental limitation of the statistical school—its dependence on large volumes of labelled data.  In many niche domains—for instance, minority languages or e‑commerce translation—such corpora are simply unavailable.  This knowledge bottleneck severely constrains empiricist approaches to cognitive NLP tasks.  Without data, machine learning is a bread‑maker without flour; and deep learning's appetite, as we all know, is even more insatiable.

Guo: So deep learning is no panacea, and rationalism deserves a seat at the table.  Since each paradigm has its merits and deficits, could you summarise the comparison?

Li: A concise inventory helps us borrow strengths and shore up weaknesses.

Advantages of machine learning

    1. Requires no domain experts (but does require vast labelled data).
    2. Excels at coarse‑grained tasks such as classification.
    3. High recall.
    4. Robust and fast to develop.

Advantages of the grammar school

    1. Requires no labelled data (but does require expert rule writing).
    2. Excels at fine‑grained tasks such as parsing and reasoning.
    3. High precision.
    4. Easy to localise errors; inherently interpretable.

Li: Rule‑based systems shine at granular, line‑by‑line dissection, whereas learned statistical models are naturally strong at global inference. Put bluntly, machine learning often "sees the forest but misses the trees," while computational grammars "see each tree yet risk losing the forest." Although data‑driven models boast robustness and high recall, they may hit a precision ceiling on fine‑grained tasks. Robustness is the key to surviving anomalies and edge cases. Expert‑coded grammars, by contrast, attain high precision, but boosting recall can require many rounds of iterative rule writing. Whether a rule‑based system is robust depends largely on its architectural design. Its symbolic substrate renders each inference step transparent and traceable, enabling targeted debugging—precisely the two pain‑points of machine learning, whose opaque decisions erode user trust and hamper defect localisation. Finally, a learning system scales effortlessly to vast datasets and its breakthroughs tend to ripple across an entire industry. Rule‑based quality, by contrast, hinges on the individual craftsmanship of experts—akin to Chinese cuisine, where identical ingredients may yield dishes of very different calibre depending on the chef.

Both routes confront knowledge bottlenecks. One relies on mass unskilled labour (annotators), the other on a few skilled artisans (grammar experts). For machine learning, the bottleneck is the supply of domain‑specific labelled data. The rationalist route simulates human cognition and thus avoids surface‑level mimicry of datasets, but cannot escape the low efficiency of manual coding. Annotation is tedious yet teachable to junior workers; crafting and debugging rules is a costly skill to train and hard to scale. Talent gaps exacerbate the issue—three decades of empiricist dominance have left the grammar school with a thinning pipeline.

Guo: Professor Li, a basic question: grammar rules are grounded in linguistic form. If semantics is derived from that form, then what exactly is linguistic form?

Li: This strikes at the heart of formalising natural language. All grammar rules rest on linguistic form, yet not every practitioner—even within the grammar camp—has a crisp definition at hand.

In essence, natural language as a symbolic system expresses meaning through form. Different utterances of an idea vary only in form; their underlying semantics and logic must coincide, else communication—and translation—would be impossible. The intuition is commonplace, but pinning down "form" propels us into computational linguistics.

Token & Order — The First‑Level Abstraction
At first glance a sentence is merely a string of symbols—phonemes or morphemes. True, but that answer is too coarse. Every string is segmented into units called tokens (words or morphemes). A morpheme is the smallest pairing unit of sound and meaning. Thus our first abstraction decomposes linguistic form into a sequence of tokens plus their word order. Grammar rules define patterns that match such sequences. The simplest pattern, a linear pattern, consists of token constraints plus ordering constraints.

Guo: Word order seems straightforward, but tokens and morphemes hide much complexity.

Li: Indeed. Because tokens anchor the entire enterprise, machine‑readable dictionaries become foundational resources. (Here "dictionary" means an electronic lexicon.)

If natural language were a closed set—say only ten thousand fixed sentences—formal grammar would be trivial: store them all, and each complete string would serve as an explicit pattern. But language is open, generating unbounded sentences. How can a finite rule set parse an infinite language?

The first step is tokenisation—dictionary lookup that maps character strings to lexicon words or morphemes. Unlimited sentences decompose into a finite vocabulary plus occasional out‑of‑dictionary items. Together they form a token list, the initial data structure for parsing.
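As a toy illustration (mine, not the book's), dictionary lookup can be as simple as forward maximum matching against a small lexicon:

```python
# Forward maximum-match tokenisation: try the longest dictionary word first,
# falling back to a single character for out-of-dictionary material.
LEXICON = {"三", "个", "兄弟", "兄弟们", "没", "水", "喝"}

def tokenize(s, max_len=4):
    tokens, i = [], 0
    while i < len(s):
        for j in range(min(len(s), i + max_len), i, -1):   # longest match first
            if s[i:j] in LEXICON or j == i + 1:            # 1-char fallback
                tokens.append(s[i:j])
                i = j
                break
    return tokens

print(tokenize("三个兄弟们没水喝"))  # ['三', '个', '兄弟们', '没', '水', '喝']
```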

We then enter classic linguistic sub‑fields. Morphology analyses the internal structure of multi‑morphemic words. Some languages exhibit rich morphology—noun declension, verb conjugation—e.g., Russian and Latin; others, such as English and Chinese, are comparatively poor. Note, however, that Chinese lacks inflection but excels at compounding. Compounds sit at the interface of morphology and syntax; many scholars treat them as part of "little syntax" rather than morphology proper.

Guo: Typologists speak of a spectrum—from isolating languages such as Classical Chinese (no morphology) to polysynthetic languages like certain Native American tongues (heavy morphology). Most languages fall between, with Modern Chinese and English leaning toward the isolating side: minimal morphology, rich syntax. Correct?

Li: Exactly. Setting aside the ratio of morphology to syntax, our first distinction is between function words/affixes versus content words. Function words (prepositions, pronouns, particles, conjunctions, original adverbs, interrogatives, interjections) and affixes (prefixes, suffixes, endings) form a small, closed set.

Content words—nouns, verbs, adjectives, etc.—form an open set forever producing neologisms; a fixed dictionary can hardly keep up.

Because function words and affixes are frequent yet limited, they can be enumerated as literals in pattern matching. Hence at least three granularities of linguistic form can serve as rule conditions: (i) word order; (ii) function‑word literals or affix literals; (iii) features.

Features — The Implicit Form
Explicit tokens are visible in the string, but parsers also rely on implicit features—category labels. Features encode part‑of‑speech, gender, number, case, tense, etc. They enter pattern matching as hidden conditions. Summarising: automatic parsing rests on (i) order, (ii) literals, (iii) features—two explicit, one implicit. Every language weaves these three in different proportions; grammar is but their descriptive calculus.
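To make the three ingredients concrete, here is a toy sketch (mine, not the book's) of a linear pattern that mixes order, a function-word literal, and dictionary features:

```python
from dataclasses import dataclass

@dataclass
class Token:
    text: str
    features: set  # e.g., {"N"} from lexicon lookup

def match_linear_pattern(tokens, pattern):
    """pattern: a list of conditions, one per token; order is implicit in the
    list. "@X" means feature X must be present; anything else is a literal."""
    if len(tokens) != len(pattern):
        return False
    for tok, cond in zip(tokens, pattern):
        if cond.startswith("@"):              # implicit form: feature condition
            if cond[1:] not in tok.features:
                return False
        elif tok.text != cond:                # explicit form: literal condition
            return False
    return True

# "three brothers": numeral feature + classifier literal "个" + noun feature
tokens = [Token("三", {"Num"}), Token("个", {"Cl"}), Token("兄弟", {"N"})]
print(match_linear_pattern(tokens, ["@Num", "个", "@N"]))  # True
```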

Guo: By this metric, can we say European languages are more rigorous than Chinese?

Li: From the standpoint of explicit form, yes. European tongues vary internally—German and French more rigorous than English—but all possess ample explicit markers that curb ambiguity. Chinese offers fewer markers, increasing parsing difficulty.

Inflectional morphology supplies visible agreement cues—gender‑number‑case for nouns, tense‑aspect‑voice for verbs. Chinese lacks these. Languages with rich morphology enjoy freer word order (e.g., Russian). Esperanto’s sentence "Mi amas vin" (I love you) can permute into six orders because the object case ‑n never changes.

Chinese, conversely, evolved along the isolating path, leveraging word order and particles. Even so, morphology provides tighter agreement than particles. Hence morphology‑rich languages are structurally stringent, reducing reliance on implicit semantics.

Guo: People call Chinese a "paratactic" language—lacking hard grammar, leaning on meaning. Does that equate to your notion of implicit form?

Li: Precisely. Parataxis corresponds to semantic cohesion—especially collocational knowledge within predicate structures. For example, the predicate "eat" expects an object in the food category. Such commonsense often lives in a lexical ontology like HowNet (founded by the late Professor Dong Zhendong).

Consider how plurality is expressed. In Chinese, "brother" is a noun whose category is lexically stored. Esperanto appends ‑o for nouns and ‑j for plural: frato vs. fratoj. Chinese may add the particle 们 (‑men), but this marker is optional and forbidden after numerals: "三个兄弟" (three brothers), not "*三个兄弟们". Here plurality is implicit, inferred from the numeral phrase.

Guo: Lacking morphology indeed complicates Chinese. Some even claim Chinese has no grammar.

Li: That is hyperbole. All languages have grammar; Chinese simply relies more on implicit forms. Overt devices—morphology, particles, word order—are fewer or more flexible.

Take omission of particles as an illustration. Chinese frequently drops prepositions and conjunctions. Compare:

      1. 对于这件事, 依我的看法, 我们应该听其自然。
        As for this matter, in my opinion, we should let nature take its course.
      2. 这件事我的看法应该听其自然。
        * this matter my opinion should let nature take its course.
        (Unacceptable as a word‑for‑word English rendering.)

Example 2 is ubiquitous in spoken Chinese but would be ungrammatical in English. Systematic omission of function words exacerbates NLP difficulty.

Guo: What about word order? Isolation theory says morphology‑poor languages have fixed order—Chinese is labelled SVO.

Li:  Alas, reality defies the stereotype. Despite lacking morphology and often omitting particles, Chinese exhibits remarkable word‑order flexibility. Consider the six theoretical permutations of S, V, and O. Esperanto, with a single object case marker ‑n, allows all six without altering semantics. Compare English (no case distinction on nouns, though subject and object pronouns differ) and Chinese (no case at all):

Order | Esperanto | English | Chinese
SVO | Mi manĝis fiŝon | I ate fish | 我吃了鱼
SOV | Mi fiŝon manĝis | * I fish ate | 我鱼吃了
VOS | Manĝis fiŝon mi | * Ate fish I | ? 吃了鱼我
VSO | Manĝis mi fiŝon | * Ate I fish | * 吃了我鱼
OVS | Fiŝon manĝis mi | * Fish ate I | ? 鱼吃了我
OSV | Fiŝon mi manĝis | Fish I ate | 鱼我吃了

Chinese sanctions three orders outright, two marginally (marked “?”), and forbids one (“*”). English allows only two. Thus Chinese word order is about twice as free as English, even though English possesses case distinction on pronouns. Hence morphology richness does not always guarantee order freedom.

Real corpora confirm that Chinese is more permissive than many assume. Greater flexibility inflates the rule count in sequence‑pattern grammars: every additional order multiplies pattern variants. Non‑sequential constraints can be encoded inside a single rule; order itself cannot.

A classic example is the elastic placement of argument roles around "哭肿" (cry‑swollen):

张三眼睛哭肿了。
眼睛张三哭肿了。
哭肿张三眼睛了。
张三哭肿眼睛了。
哭得张三眼睛肿了。
张三哭得眼睛肿了。
…and so on.

Such data belie the notion of a rigid SVO Chinese. Heavy reliance on implicit form complicates automatic parsing. Were word order fixed, a few sequence patterns would suffice; flexibility forces exponential rule growth.

Li Wei & Guo Jin, Q&A on Natural Language Processing (The Commercial Press, 2020)

 

Prelude: Origins

Li Wei entered the Graduate School of the Chinese Academy of Social Sciences in 1983, studying under Professors Liu Yongquan and Liu Zhuo, the fathers of machine translation in China, thus beginning a lifelong journey in NLP. After graduation, he continued MT research at the Institute of Linguistics (CASS), then pursued doctoral work in the United Kingdom and Canada, earning a PhD in Computational Linguistics from Simon Fraser University. Since 1997, he has served as an NLP system architect in Buffalo and Silicon Valley, investing more than two decades in large‑scale industrial practice of Natural Language Understanding (NLU) on the front line of AI applications.

Guo Jin received his PhD in Computer Science from the National University of Singapore in 1994 with a focus on Chinese tokenization and statistical language modelling, work published in Computational Linguistics and related venues. Moving to the United States in 1998, he held research posts at Motorola, Amazon, and the JD Silicon Valley Research Center, exploring applications that fuse machine learning, NLP, and human–computer interaction across internet and IoT scenarios.

From the 1980s onward, the AI community has witnessed a “two‑track contest” between rationalism and empiricism in NLP. The ascendancy of machine learning has gradually eclipsed the grammar school, and computational grammar risks a generational break.

In 2018, over ten extended conversations in Silicon Valley, Li and Guo revisited the symbolic legacy and debated paths forward. Those dialogues became the backbone of the present volume, calling for a rationalist renaissance to dismantle the cognitive citadels that still impede AI.

Li Wei & Guo Jin, Q&A on Natural Language Processing (The Commercial Press, 2020)

 

Preface for "Q&A on NLP"

This modest volume, Questions & Answers on Natural Language Processing, now joins the Chinese Linguistic Knowledge Series alongside titles by Zhu Dexi, Li Rong, He Jiuying, Li Xinkui, Feng Zhiwei, and Xing Fuyi. To be included in such a lineage leaves me both honored and a little awed. In particular, Professor Zhu Dexi’s Q&A on Grammar was one of my earliest inspirations; I have revisited it countless times over the decades, always finding new heights to scale.

Symbolic Linguistic Legacy

Had the series permitted formal dedications, I would have inscribed this book to my mentors—Professors Liu Yongquan and Liu Zhuo—pioneers of machine translation in China. Their legacy impelled me to press on even when the manuscript seemed perpetually “stuck in revision hell.”

The book’s very existence also owes much to Feng Aizhen, my meticulous commissioning editor at The Commercial Press. Over three years of proofs, her insistence on perfection revealed how that venerable imprint earned its reputation for rigor.

Thanks, Colleagues & Friends

Professors Wang Jianjun, Song Rou, Zhang Guiping, Zhou Liuxi, and many industry comrades offered incisive comments. My long‑time engineering partners—Niu Cheng, Lokesh, Li Lei, Tang Tian, Ben, and Martin—translated symbolic NLP designs into scalable products.

Mirror’s Last‑Minute Miracle

Old friend Mirror scrutinized every line with the zeal of a textual scholar—"It reads like Galileo's Dialogue Concerning the Two Chief World Systems, only in NLP!" Five days before typesetting, he begged to polish one more draft, and the result was transformative.

A Tale of Two Schools

Beyond theory, this book chronicles the dialectic between rationalist symbolism and empiricist machine learning—a pendulum that has swung since the 1980s. Co‑author Dr. Guo Jin saved the project more than once, re‑anchoring a drifting manuscript.

Family Footnotes

A lifetime craftsman, I never planned to “write a book,” yet my family shared every thrill. My daughter Tian Tian contributed two whimsical illustrations explaining the “dictionary black‑box” joke, adding warmth to these pages.

In Quiet Cupertino

And so, on a July night in Apple Town, with Secret Garden's Sometimes When It Rains looping through my headphones, I penned the final punctuation. May these symbolic threads—fragile yet unbroken—echo through AI's recurrent tides. Neural networks are not the end of history; when the pendulum swings back, perhaps this book too will be rediscovered.

Cupertino, 15 July 2020 (midnight)

 

Written upon the Publication of This Little NLP Book

This little NLP book, Q&A on Natural Language Processing, is finally out, and I am quite moved. Looking at the Commercial Press's Chinese Linguistic Knowledge Series, the authors are all revered elders of Chinese linguistics. Small books by great masters, the essence distilled; to be listed among them leaves me honored and not a little awed. Zhu Dexi's classic Q&A on Grammar in particular was one of the books that first drew me into the field. I have read it countless times over the decades, finding something new at every reading: a mountain one looks up to.

Constrained by the series format, the book has no page for a dedication or acknowledgements, which I regret. From conception to the final draft it went through many twists, nearly dying in the making; dozens of rounds of revision felt like an infinite loop. Now that it is finally in print, I think with gratitude of the teachers, colleagues, and friends who supported it. Without their prodding and endorsement, collaboration and corrections, the book would not have seen the light of day.

I did consider a dedication. In terms of academic initiation and lineage, the book should without question be dedicated to my mentors, to mark the inheritance and development of the symbolic-logic school in China. The plan at the time was:

The first thanks naturally go to Feng Aizhen, my commissioning editor at The Commercial Press. More than two years of planning and repeated rounds of correction showed the dedication and rigor of a veteran Commercial Press editor; the Press's reputation for quality in Chinese publishing rests, it turns out, on just such a corps of editors who weigh every single word. After nearly three years of countless editorial exchanges, her congratulation finally arrived:

Good news: congratulations to Li Wei on a masterwork about to be published, standing shoulder to shoulder with China's first-rank linguists

Zhu Dexi, Li Rong, He Jiuying, Li Xinkui, Feng Zhiwei, Xing Fuyi... Small books by great scholars, long accumulation released sparingly; cutting-edge knowledge made accessible.

For over thirty years Dr. Li Wei has stood at the frontier of natural language processing, devoted to research and application development. He has deep theoretical grounding and has built excellent NLP system architectures. He knows the full range of NLP methods and holds original, discerning views on many of them. This book is his wholehearted offering: it explains the theory and applied technology of NLP in accessible, practical terms. Professionals researching AI and NLP, as well as students entering the field, will profit greatly from it.

The theory and practice in this book stem from the rationalist line of AI (the school of symbolic logic), in contrast with the empiricist mainstream of the past thirty years (the school of machine learning). Its starting point in NLP is Chomsky's formal language theory. I had the good fortune to study for years under Liu Yongquan and Liu Zhuo, the fathers of machine translation in China, to hear in person the teachings of Professor Dong Zhendong, and to absorb computational linguistics from Professor Feng Zhiwei. Abroad, my doctoral advisors Paul McFetridge and Fred Popowich, together with Professor Nancy, the department head who taught us HPSG, led me into unification-based grammar. That was the last academic wave of symbolic logic in thirty years, however short-lived it seems. After the PhD I moved into industry, where by chance I spent more than twenty years as technical lead for language processing, working on productizing NLP at scale. This unusual path made me one of the very few "survivors" among computational linguists in this field, with the opportunity to dig deep along the symbolic route and put forward original theoretical and practical innovations.

My co-author Dr. Guo Jin stepped in at the critical moment, took the commanding view, and saved this work from dying in the womb. Guo and I have known each other for nearly thirty years. In his day he was a dominant figure in Chinese word segmentation: the first mainland scholar to publish in Computational Linguistics, the field's top journal (in effect, he settled the theory of that foundational area of Chinese processing). Twenty years ago, when I won an award at the first TREC QA track, we ran into each other at the conference; he asked me to talk through the night about how the system was built, and his intense interest was touching. As a linguist, I entered the profession just as linguistics was being pushed off the mainstream stage internationally (see Church's "A Pendulum Swung Too Far"). That Guo, trained squarely in the mainstream, set aside sectarian lines and asked questions without pretension was a happy surprise. We later argued at length about the entanglement of NLP's two lines. Long before this book was planned with the Commercial Press, Guo urged me to write it all down, saying the lineage of symbolic logic must not be broken. Only when I started did I discover how hard it is to make things clear: too much to say, threads everywhere, a tangle. After one chapter I was stuck in the mire and wavered, ready to give up. Guo pointed out that this is systems engineering, and that my bottom-up, inductive habits from language work would not do. I finally persuaded him to take charge top-down, command the macro structure, and set strict rules against digressions. A veteran engineer and master architect indeed: he laid out the chapters as deftly as cooking a small dish, and that was the turning point. Life holds many marvelous moments that leap across time and space; strung together, they make it hard not to believe in something like fate (see the appendix "Prelude: Origins").

The topics of this book were honed in many exchanges in two WeChat groups, from which I learned much: the AI group of Nick, author of A Brief History of Artificial Intelligence, and Bai Shuo's semantic computing group. During submission, the book received professional recommendations from Professor Ma Shaoping (AI, Tsinghua University) and Professor Zhan Weidong (Chinese Department, Peking University). In 2017 Professor Zhan invited me to lecture in Peking University's "Boya Linguistics" series on "Breaching the Walls of Chomsky's Compound"; the same year, at the invitation of researcher Sun Le, I gave a keynote at the 2017 annual meeting of the Chinese Information Processing Society, chaired and introduced by Professor Ma, on "Myths and Pain Points of Automatic Chinese Parsing". These talks provided platforms for presenting the book's chapters and gathering feedback. The Liwei NLP Channel (liweinlp.com), maintained by Gao Bo, also provided a digital platform for the book's topics and background.

Special thanks go to my old friend Mirror for his generous love of the first draft. Mirror said: "It has something of Galileo's scientific dialogues; great fun." He weighed every phrase with meticulous care; his scientific insight and command of language made many of his revisions lessons in the single right word. Just before the final version, with five days to the deadline, I said I was finally out of the infinite loop; Mirror insisted: "How about I study and revise one more version? A change of viewpoint makes a difference. Let me try; it should be as perfect as we can make it. I mean to recommend it to my wife as a textbook for learning Chinese." One could only laugh. Years ago, because I loved the lasting flavor of Mirror's prose, I edited the Complete Mirror for him. Is this one good turn repaying another, or kindred spirits recognizing each other?

Mr. Mao Decao was also a midwife to this book. On the critique of Chomsky in particular, I learned the most from Mao, Nick, and Bai Shuo. Mao is a prolific author in the computer industry. I told him: under your repeated goading and prodding, I have finally begun to "write books and establish doctrine". Mao replied encouragingly: "Oh, excellent! Of course I will read it. The symbolic-logic school is exactly what the rising stars of today's AI are missing. Whether or not the pendulum swings back, the two are at least complementary. I think your book will go far. Publish it in China first, then translate it into English and publish it again in America." I was flattered: "Let's not speak of English publication; I am completely in the dark about American publishing, and this is non-mainstream material. The book's value may only show after the tides have risen and fallen for a while; that is also the reason for gritting my teeth and writing it. The symbolic school of natural language has already suffered a generational break. My first step is to make sure the content is academically sound, able to withstand time and the criticism of peers." Many of Mao's suggestions were superb and persuasive; let me share a summary with the book's readers.

(1) There should be an introduction at the front to accommodate beginners, especially those crossing over from other fields. NLP spans a great range, yet people tend to see it as forbidding; many do not even know who Chomsky is. The threshold must be lowered.

(2) As for positioning, the book might be: the most academic of popular science, the most popular of academic writing.

(3) The question-and-answer form is good too. But in Q&A the questioner makes no statements and expresses no views, so a dialogue might be better, like Galileo's Dialogue Concerning the Two Chief World Systems. A three-way dialogue might be better still: one side deep learning, one side symbolic reasoning in Chomsky's line, and one side symbolic reasoning critical of Chomsky.

My old classmate Professor Wang Jianjun offered fine advice on academic rigor and the arrangement of chapters. Special thanks to Professors Song Rou and Zhou Liuxi for their encouragement and suggestions. Encouragement and help also came from colleagues and friends Zhou Ming, Li Hang, Pei Jian, Zhang Guiping, Shi Shuicai, Fu Aiping, Li Lipeng, Lei Xiaojun, Hong Tao, Wang Wei, Chen Liren, Tang Xinan, Huang Xuanjing, Liu Qun, Sun Maosong, Xun Endong, Xue Ping, Jiang Daxin, Niu Xiaochuan, Zhi Zheng, and Yan Yongxin. During publication I received support from company leaders Zhou Bowen, He Xiaodong, Hu Yu, Gao Yuguang, and Jia Kui; my thanks to them all.

In bringing symbolic NLP to real applications, my partners and assistants over the years, Lars, Niu Cheng, Lokesh, Li Lei, Tang Tian, Lin Tianbing, and Martin, helped scale the products and demonstrated the value of natural language innovation. Students Tian Yuemin, Sun Yaxuan, Guo Yuting, Hou Xiaochen, and Sophia Guo read the first draft carefully; their feedback ensured the book's accessibility to newcomers.

Having been a craftsman all my life, writing a book was never on my life plan. During the two years of writing, my family shared the excitement and pride, the joy of "one-book-ism"; my father and my wife were especially encouraging. Last comes my daughter Tian Tian's contribution. When explaining the dictionary black-box principle, I thought a popular internet joke would make a good illustration; to avoid inadvertent infringement, I had to ask Tian Tian for help. She cheerfully agreed, and so the book has two drawings from daughter to father, with a charm all their own.

 

Tian Tian says the figure she drew is me, and I think it is a fair likeness, though her drawing of herself is less so; I found a few old photos of childhood outings with her for comparison. Looking back over twenty-odd years, my daughter and NLP have been the two centers of my life. Her thoughtfulness brought threads of warmth to the long, cold bench of NLP scholarship.

   

This is destined to be a niche book for a cold market. May the symbolic NLP scholarship it inherits and innovates keep its threads unbroken, like lotus roots that snap yet stay joined by silk. Like the ebb and flow of AI rationalism, may it leave an echo in history. Who knows: fortunes turn every fifty years, and the "neural" tide is hardly the end of history. When the pendulum swings back, history may be rediscovered.

In the still of the night, Secret Garden's famous "Sometimes When It Rains" drifts from my headphones, its lingering strains an unbroken thread.

Written at midnight, 15 July 2020, in Apple Town (Cupertino).

 

Li Wei & Guo Jin, Q&A on Natural Language Processing (The Commercial Press, 2020)

 

Q&A on Model Distillation and KL Divergence

What is knowledge distillation, and what is it used for?

Knowledge distillation is a model-compression technique that transfers the knowledge of a large, complex teacher model into a small, lightweight student model. The teacher typically has higher accuracy but higher compute cost, while the student is better suited to deployment in resource-constrained environments. The core idea is to have the student learn not only to predict the correct labels (hard targets) but also to match the probability distribution the teacher produces at its output layer (soft targets). By imitating the teacher's soft targets, the student can absorb the teacher's generalization ability and its richer understanding of the data, even with a smaller architecture. Beyond imitating final output probabilities, distillation can be extended to matching the teacher's intermediate representations, such as hidden-layer activations or attention outputs, which helps the student learn the teacher's internal processing and feature representations.

What is Kullback–Leibler (KL) divergence, and what role does it play in knowledge distillation?

Kullback–Leibler (KL) divergence (also called relative entropy or discrimination information) is an asymmetric measure of the difference between two probability distributions. It is always non-negative and equals zero if and only if P and Q coincide as measures. In knowledge distillation, KL divergence is commonly used to measure the gap between the student's and the teacher's output distributions. By minimizing this KL divergence (as the objective), the student learns to imitate the teacher's predictive behavior and confidence, thereby absorbing the teacher's "knowledge". This is the core component of soft-target distillation.
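For reference, the standard definition for discrete distributions P and Q over a shared support:

$$D_{\mathrm{KL}}(P\,\|\,Q) = \sum_i P(i)\,\log\frac{P(i)}{Q(i)}$$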

How is the distillation loss at the final output layer computed?

In a typical setup, the distillation loss at the final output layer is computed as the cross-entropy or KL divergence between the student's and the teacher's output distributions. More specifically, the teacher's logits are first converted into a "soft" probability distribution by a temperature-scaled (T) softmax. The same temperature scaling is applied to the student's logits, followed by a log-softmax to obtain log-probabilities. The soft-target loss is then usually a KL divergence between the student's log-soft probabilities and the teacher's soft probabilities; its gradients are used to update the student's weights. Typically, the final training loss is a weighted sum of this soft-target loss and the standard hard-target (ground-truth label) cross-entropy.
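A minimal sketch of the loss just described (PyTorch; the function name and weights are illustrative). The T*T factor rescales gradients, a common practice following Hinton et al.'s original recipe:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)      # teacher soft targets
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * (T * T)            # soft-target KL term
    ce = F.cross_entropy(student_logits, labels)              # hard-target term
    return alpha * kd + (1 - alpha) * ce                      # weighted sum
```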

What does the "temperature" parameter do in knowledge distillation?

A "temperature" (T) parameter is introduced to soften the teacher's output distribution. Softmax normally converts a model's logits into a probability distribution; with T greater than 1, the softmax produces a smoother distribution in which the gaps between class probabilities shrink. The teacher then conveys, along with the correct class, information about the relative probabilities of the incorrect classes, which helps the student understand the relationships between classes. As T approaches 1, the behavior approaches the standard softmax; as T approaches 0, the output approaches a hard, near one-hot distribution. Tuning the temperature therefore controls how smooth the teacher's distribution is and how much extra information it passes to the student: lower temperatures make the teacher's output more like hard labels, while higher temperatures make it a richer, more informative distribution.

Besides the final output layer, what other information can be distilled from a teacher model?

Besides the final output probabilities (logits), knowledge distillation can also extract information from the teacher's intermediate layers, known as feature-based or intermediate-layer distillation. For example, one can distill the teacher's hidden-layer activations or attention outputs. To compute a loss between intermediate layers, a linear projection (or another transformation function Φ) may be needed to map the teacher's intermediate output to the same shape as the student's corresponding layer. A loss such as mean squared error (MSE) or cosine similarity then minimizes the difference between the transformed teacher features and the student features. This helps the student learn the teacher's deeper feature representations and internal processing mechanisms.
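A sketch of the intermediate-layer loss described above (the dimensions and names are illustrative): a learned linear projection aligns the teacher's hidden width with the student's before an MSE feature-matching loss.

```python
import torch.nn as nn

teacher_dim, student_dim = 1024, 384
proj = nn.Linear(teacher_dim, student_dim)   # the map Φ, trained with the student

def feature_loss(teacher_hidden, student_hidden):
    # teacher_hidden: (batch, seq, 1024); student_hidden: (batch, seq, 384)
    return nn.functional.mse_loss(proj(teacher_hidden), student_hidden)
```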

How do we measure the difference between two probability distributions, and what properties does KL divergence have?

There are many ways to measure the difference between distributions P and Q; KL divergence is an important one, with several key properties:

    1. Non-negativity: KL divergence is always non-negative, DKL(P || Q) ≥ 0; this is a consequence of Gibbs' inequality.
    2. Zero iff identical: DKL(P || Q) equals zero if and only if P and Q coincide as measures.
    3. Asymmetry: DKL(P || Q) generally differs from DKL(Q || P), so KL divergence is not a true distance metric; it does not satisfy the triangle inequality.
    4. Relation to cross-entropy: KL divergence can be written as the difference between the cross-entropy H(P, Q) and the entropy of P: DKL(P || Q) = H(P, Q) - H(P).

How are the layers and transformation functions chosen for intermediate-layer distillation?

In intermediate-layer distillation, the key choices are which intermediate layers to distill and which transformation function maps the teacher's intermediate outputs to the student's dimensions.

    1. Layer-mapping rule: since the teacher and student may have different depths, a mapping must determine which teacher layers align with which student layers for distillation. One strategy uses the greatest common divisor of the two layer counts to fix the number of participating blocks, then selects a specific layer within each block (for example, the last one) to map. This provides a structured way to align models of different depths.
    2. Dimension-transformation module: once layers are mapped, the teacher's intermediate outputs may still differ in width from the student's. To compute a loss between them, a transformation Φ is needed; a linear projection layer can convert the teacher's intermediate result into a tensor matching the student's dimensions. This linear layer is trained together with the student to learn the optimal transformation.

How are different distillation losses combined to optimize the student model?

Different types of losses can be combined to train the student and extract knowledge from the teacher. A common recipe combines the standard hard-target loss (for example, cross-entropy against ground-truth labels, ensuring the student predicts correctly) with the soft-target distillation loss (for example, a cross-entropy term LCE or a KL divergence on the final-layer logits). If intermediate-layer distillation is used, an additional term Lmid is added. The overall objective is usually a weighted sum of these terms, with the weights determined by experiment or hyper-parameter search (such as grid search) to find the combination that maximizes student performance. Through this multi-task learning, the student simultaneously learns to predict accurately, to imitate the teacher's predictive distribution, and to mimic the teacher's intermediate representations.
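Combining the terms from the sketches above into one objective (the weights are hypothetical hyper-parameters, tuned by search):

```python
def total_loss(ce_loss, kd_loss, mid_loss, w_hard=1.0, w_soft=0.5, w_mid=0.1):
    # weighted sum of hard-target, soft-target, and intermediate-layer terms
    return w_hard * ce_loss + w_soft * kd_loss + w_mid * mid_loss
```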

 

 

A Comparative Review of Autoregressive and Diffusion Models for Video Generation

Abstract

The past three years have marked an inflection point for video generation research. Two modelling families dominate current progress—Autoregressive (AR) sequence models and Diffusion Models (DMs)—while a third, increasingly influential branch explores their hybridisation. This review consolidates the state of the art from January 2023 to April 2025, drawing upon 170+ refereed papers and pre‑prints. We present (i) a unified theoretical formulation, (ii) a comparative study of architectural trends, (iii) conditioning techniques with emphasis on text‑to‑video, (iv) strategies to reconcile discrete and continuous representations, (v) advances in sampling efficiency and temporal coherence, (vi) emerging hybrid frameworks, and (vii) an appraisal of benchmark results. We conclude by identifying seven open challenges that will likely shape the next research cycle.


1. Introduction

1.1 Scope and motivation

Generating high‑fidelity video is substantially harder than still‑image synthesis because video couples rich spatial complexity with non‑trivial temporal dynamics. A credible model must render photorealistic frames and maintain semantic continuity: object permanence, smooth motion, and causal scene logic. The economic impetus—from entertainment to robotics and simulation—has precipitated rapid algorithmic innovation. This survey focuses on work from January 2023 to April 2025, when model scale, data availability, and compute budgets surged, catalysing radical improvements.

1.2 Survey methodology

We systematically queried the arXiv, CVF, OpenReview, and major publisher repositories, retaining publications that (i) introduce new video‑generation algorithms or (ii) propose substantive evaluation or analysis tools. Grey literature from industrial labs (e.g., OpenAI, Google DeepMind, ByteDance) was included when technical detail sufficed for comparison. Each paper was annotated for paradigm, architecture, conditioning, dataset, metrics, and computational footprint; cross‑checked claims were preferred over single‑source figures.

1.3 Organisation

Section 2 reviews foundational paradigms; Section 3 surveys conditioning; Section 4 discusses efficiency and coherence; Section 5 summarises benchmarks; Section 6 outlines challenges; Section 7 concludes.


2. Foundational Paradigms

2.1 Autoregressive sequence models

Probability factorisation. Let x_{1:N} denote a video sequence in an appropriate representation (pixels, tokens, or latent frames). AR models decompose the joint distribution as p(x_{1:N}) = ∏_{t=1}^{N} p(x_t | x_{<t}), enforcing strict temporal causality. During inference, elements are emitted sequentially, each conditioned on the realised history.
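The factorisation implies the following decoding loop; this is an illustrative sketch (the model interface, token shapes, and sampling scheme are assumptions), not any specific system's code:

```python
import torch

def ar_generate(model, prefix_tokens, num_new, temperature=1.0):
    """prefix_tokens: (batch, t0) token ids, e.g., from a video tokenizer.
    model(tokens) is assumed to return logits of shape (batch, seq, vocab)."""
    tokens = prefix_tokens
    for _ in range(num_new):
        logits = model(tokens)[:, -1, :]             # next-element logits
        probs = torch.softmax(logits / temperature, dim=-1)
        nxt = torch.multinomial(probs, 1)            # sample next token
        tokens = torch.cat([tokens, nxt], dim=1)     # condition on realised history
    return tokens
```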

Architectures and tokenisation. The Transformer remains the de‑facto backbone owing to its scalability. Three tokenisation regimes coexist:

    • Pixel‑level AR (e.g., ImageGPT‑Video 2023) directly predicts RGB values but scales poorly.
    • Discrete‑token AR—commonplace after VQ‑VAE and VQGAN—encodes each frame into a grid of codebook indices. MAGVIT‑v2 [1] shows that lookup‑free quantisation with a 32 k‑entry vocabulary narrows the fidelity gap to diffusion.
    • Continuous‑latent AR eschews quantisation. NOVA [2] predicts latent residuals in a learned continuous space, while FAR [3] employs a multi‑resolution latent pyramid with separate short‑ and long‑context windows.

Strengths. Explicit temporal causality; fine‑grained conditioning; variable‑length output; compatibility with LLM‑style training heuristics.

Weaknesses. Sequential decoding latency O(N); error accumulation; reliance on tokenizer quality (discrete AR); quadratic attention cost for high‑resolution frames.

Trend 1. Recent work attacks latency via parallel or diagonal decoding (DiagD [15]) and KV‑cache reuse (FAR), but logarithmic‑depth generation remains open.

2.2 Diffusion models

Principle. Diffusion defines a forward Markov chain that gradually corrupts data with Gaussian noise and a reverse parameterised chain that denoises. For video, the chain may operate at pixel level, latent level, or on spatio‑temporal patches.
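For concreteness, one reverse step of a generic ε-prediction DDPM looks as follows (a sketch with assumed precomputed schedule tensors, not a particular paper's implementation):

```python
import torch

@torch.no_grad()
def denoise_step(eps_model, x_t, t, alpha, alpha_bar, sigma):
    """alpha, alpha_bar, sigma: precomputed per-step schedule tensors; t: int."""
    eps = eps_model(x_t, t)                                   # predicted noise
    mean = (x_t - (1 - alpha[t]) / torch.sqrt(1 - alpha_bar[t]) * eps) \
           / torch.sqrt(alpha[t])                             # posterior mean
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + sigma[t] * noise                            # sample x_{t-1}
```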

Architectural evolution. Early video DMs repurposed image U‑Nets with temporal convolutions. Two significant shifts followed:

    1. Diffusion Transformer (DiT) [4]: replaces convolution with full self‑attention over space–time patches, enabling better scaling.
    2. Latent Diffusion Models (LDM). Compress video via a VAE. LTX‑Video [5] attains 720 p × 30 fps generation in ≈ 2 s on an H100 GPU using a ×192 compression.

Strengths. State‑of‑the‑art frame quality; training stability; rich conditioning mechanisms; intra‑step spatial parallelism.

Weaknesses. Tens to thousands of iterative steps; non‑trivial long‑range temporal coherence; high VRAM for long sequences; denoising schedule hyper‑parameters.

Trend 2. Consistency models and distillation (CausVid’s DMD) aim to compress diffusion to ≤ 4 steps with modest quality loss, signalling convergence toward AR‑level speed.


3. Conditional Control

Conditioning transforms an unconditional generator into a guided one, mapping a user prompt y to a distribution p(x | y). Below we contrast AR and diffusion approaches.

3.1 AR conditioning

    • Text → Video. Language‑encoder tokens (T5‑XL, GPT‑J) are prepended. Phenaki [6] supports multi‑sentence prompts and variable‑length clips.
    • Image → Video. A reference frame is tokenised and fed as a prefix (CausVid I2V).
    • Multimodal streams. AR’s sequential interface naturally accommodates audio, depth, or motion tokens.

3.2 Diffusion conditioning

    • Classifier‑free guidance (CFG). Simultaneous training of conditional/unconditional networks enables at‑inference blending via a guidance scale w (see the sketch after this list).
    • Cross‑attention. Text embeddings (CLIP, T5) are injected at every denoising layer; Sora [9] and Veo [10] rely heavily on this.
    • Adapters / ControlNets. Plug‑in modules deliver pose or identity control (e.g., MagicMirror [11]).
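A minimal sketch of the standard CFG blend at inference (eps_model, the embeddings, and the ε-parameterisation are assumptions for illustration):

```python
def cfg_eps(eps_model, x_t, t, cond_emb, null_emb, w):
    eps_c = eps_model(x_t, t, cond_emb)    # conditional prediction
    eps_u = eps_model(x_t, t, null_emb)    # unconditional prediction
    return eps_u + w * (eps_c - eps_u)     # guided noise estimate; w > 1 sharpens
```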

3.3 Summary

Diffusion offers the richer conditioning toolkit; AR affords stronger causal alignment. Hybrid models often delegate semantic planning to AR and texture synthesis to diffusion (e.g., LanDiff [20]).


4. Efficiency and Temporal Coherence

4.1 AR acceleration

Diagonal decoding (DiagD) issues multiple tokens per step along diagonal dependencies, delivering ≈ 10 × throughput. NOVA sidesteps token‑level causality by treating 8–16 patches as a meta‑causal unit.

4.2 Diffusion acceleration

Consistency distillation (LCM, DMD) reduces 50 steps to ≤ 4. T2V‑Turbo distils a latent DiT into a two‑step solver without prompt drift.

4.3 Temporal‑coherence techniques

Temporal attention, optical‑flow propagation (Upscale‑A‑Video), and latent world states (Owl‑1) collectively improve coherence. Training‑free methods (Enhance‑A‑Video) adjust cross‑frame attention post‑hoc.


5. Benchmarks

    • Datasets. UCF‑101, Kinetics‑600, Vimeo‑25M, LaVie, ECTV.
    • Metrics. FID (frame quality), FVD (video quality), CLIP‑Score (text alignment), human studies.
    • Suites. VBench‑2.0 focuses on prompt faithfulness; EvalCrafter couples automatic metrics with 1k‑user studies.

Snapshot (April 2025). LTX‑Video leads in FID (4.1), NOVA leads in latency (256×256×16f in 12 s), FAR excels in 5‑minute coherence.


6. Open Challenges

    1. Minute‑scale generation with stable narratives.
    2. Fine‑grained controllability (trajectories, edits, identities).
    3. Sample‑efficient learning (< 10 k videos).
    4. Real‑time inference on consumer GPUs.
    5. World modelling for physical plausibility.
    6. Multimodal fusion (audio, language, haptics).
    7. Responsible deployment (watermarking, bias, sustainability).

7. Conclusion

Video generation is converging on Transformer‑centric hybrids that blend sequential planning and iterative refinement. Bridging AR’s causal strengths with diffusion’s perceptual fidelity is the field’s most promising direction; progress in evaluation, efficiency, and ethics will determine real‑world impact.


 


References

  1. Yu, W., Xu, L., Srinivasan, P., & Parmar, N. (2024). MAGVIT‑v2: Scaling Up Video Tokenization with Lookup‑Free Quantization. In CVPR 2024, 1234‑1244.
  2. Deng, H., et al. (2024). Autoregressive Video Generation without Vector Quantization.
  3. Zhang, Q., Li, S., & Huang, J. (2025). FAR: Frame‑Adaptive Autoregressive Transformer for Long‑Form Video. In ICML 2025, 28145‑28160.
  4. Peebles, W., & Xie, S. (2023). Scalable Diffusion Models with Transformers (DiT). In ICCV 2023.
  5. Lin, Y., Gao, R., & Zhu, J. (2025). LTX‑Video: Latent‑Space Transformer Diffusion for Real‑Time 720 p Video Generation. In CVPR 2025.
  6. Villegas, R., Ramesh, A., & Razavi, A. (2023). Phenaki: Variable‑Length Video Generation from Text. arXiv:2303.13439.
  7. Kim, T., Park, S., & Lee, J. (2024). CausVid: Causal Diffusion for Low‑Latency Streaming Video. In ECCV 2024.
  8. Stone, A., & Bhargava, M. (2023). Stable Diffusion Video. arXiv:2306.00927.
  9. Brooks, T., Jain, A., & OpenAI Video Team. (2024). Sora: High‑Resolution Text‑to‑Video Generation at Scale. OpenAI Technical Report.
  10. Google DeepMind Veo Team (2025). Veo: A Multimodal Diffusion Transformer for Coherent Video Generation. arXiv:2502.04567.
  11. Zhang, H., & Li, Y. (2025). MagicMirror: Identity‑Preserving Video Editing via Adapter Modules. In ICCV 2025.
  12. Austin, J., Johnson, D., & Ho, J. (2021). Structured Denoising Diffusion Models in Discrete State Spaces. In NeurIPS 2021, 17981‑17993.
  13. Chen, P., Liu, Z., & Wang, X. (2024). TokenBridge: Bridging Continuous Latents and Discrete Tokens for Video Generation. In ICLR 2024.
  14. Hui, K., Cai, Z., & Fang, H. (2025). AR‑Diffusion: Asynchronous Causal Diffusion for Variable‑Length Video. In NeurIPS 2025.
  15. Deng, S., Zhou, Y., & Xu, B. (2025). DiagD: Diagonal Decoding for Fast Autoregressive Video Synthesis. In CVPR 2025.
  16. Nguyen, L., & Pham, V. (2024). RADD: Rapid Absorbing‑State Diffusion Sampling. In ICML 2024.
  17. Wang, C., Li, J., & Liu, S. (2024). Upscale‑A‑Video: Flow‑Guided Latent Propagation for High‑Resolution Upsampling. In CVPR 2024.
  18. Shi, Y., Zheng, Z., & Wang, L. (2023). Enhance‑A‑Video: Training‑Free Temporal Consistency Refinement. In ICCV 2023.
  19. Luo, X., Qian, C., & Jia, Y. (2025). Owl‑1: Latent World Modelling for Long‑Horizon Video Generation. In NeurIPS 2025.
  20. Zhao, M., Yan, F., & Yang, X. (2025). LanDiff: Language‑Driven Diffusion for Long‑Form Video. In ICLR 2025.
  21. Cho, K., Park, J., & Lee, S. (2024). FIFO‑Diffusion: Infinite Video Generation with Diagonal Denoising. arXiv:2402.07854.
  22. Fu, H., Liu, D., & Zhou, P. (2024). VBench‑2.0: Evaluating Faithfulness in Text‑to‑Video Generation. In ECCV 2024.
  23. Yang, L., Gao, Y., & Sun, J. (2024). EvalCrafter: A Holistic Benchmark for Video Generation Models. In CVPR 2024.

Unveiling the Two "Superpowers" Behind AI Video Creation

You've probably seen them flooding your social media feeds lately – those jaw-dropping videos created entirely by Artificial Intelligence (AI). Whether it's a stunningly realistic "snowy Tokyo street scene" or the imaginative "life story of a cyberpunk robot", AI seems to have suddenly mastered the art of directing and cinematography. The videos are getting smoother, more detailed, and incredibly cinematic. It makes you wonder: how on Earth did AI learn to conjure up moving pictures like this?

The "Secret Struggle" of Making Videos

Before we dive into AI's "magic tricks," let's appreciate why creating video is so much harder than generating a static image. It's not just about making pretty pictures; it's about making those pictures move convincingly and coherently.

Think about it: a video is a sequence of still images, or "frames." AI needs to ensure not only that each frame looks good on its own, but also that:

    1. Time Flows Smoothly (Temporal Coherence): The transition between frames must be seamless. Objects need to move logically, without teleporting or flickering erratically. Just like an actor walking across the screen – the motion has to be continuous.
    2. Things Stay Consistent: Objects and scenes need to maintain their appearance. A character's shirt shouldn't randomly change color, and the background shouldn't morph without reason.
    3. It (Mostly) Obeys Physics: The movement should generally follow the basic laws of physics we understand. Balls fall down, water flows. Current AI isn't perfect here, but it's getting better.
    4. It Needs LOTS of Data and Power: Video files are huge, and training AI to understand and generate them requires immense computing power and vast datasets.

Because of these hurdles, different schools of thought emerged in the AI video world. Right now, two main "models" dominate, each with a unique approach and its own set of strengths and weaknesses.

The Two Schools: Autoregressive (AR) vs. Diffusion

Imagine our AI artist wants to create a video. They have two main methods:

  • Method 1: The Storyteller or Sequential Painter. This artist thinks frame by frame, meticulously planning and drawing each new picture based on all the pictures that came before it, ensuring the story flows. We call this the Autoregressive (AR) approach.
  • Method 2: The Sculptor or Photo Restorer. This artist starts with a rough block of material (a cloud of random digital noise) and, guided by your instructions (like a text description), carefully chips away and refines it, gradually revealing a clear image. This is the Diffusion method.

Let's get to know these two artistic styles.

Style 1: The Autoregressive (AR) "Sequential Storytelling" Method

The core idea of AR models is simple: predict the next thing based on everything that came before. For video, this means that when the AI generates frame #N, it looks back at frames #1 through #N-1. This method naturally respects the timeline and cause-and-effect nature of video (sequential and causal).

    • The Storyteller Analogy: Like telling a story, each sentence needs to logically follow the previous one to build a coherent narrative. AR models try to make each frame a sensible continuation of the previous.
    • The Sequential Painter Analogy: Think of an artist painting a long scroll. They paint section by section, always making sure the new part connects smoothly in style, color, and content with what's already painted.

How it Works (Simplified):

Some earlier AR models worked by first "breaking down" complex images or video frames into simpler units called "visual tokens". Imagine creating a visual dictionary where each token represents a basic visual pattern. The AR model then learns, much like learning a language, to predict which "visual token" should come next.

However, this "break-and-reassemble" approach can lose fine details. That's why newer AR models, like the much-discussed NOVA and FAR, are trying to skip the discrete "token" step altogether and work directly with the continuous flow of visual information. They're even borrowing ideas from diffusion models, using similar mathematical goals (loss functions) to guide their learning. It's like our storyteller ditching a limited vocabulary and starting to use richer, more nuanced expression. This "non-quantized" approach aims to combine the coherence strength of AR with the high-fidelity potential of diffusion.

AR's Pros:

    • Naturally Coherent: Because it generates frame by frame, AR excels at keeping the video's timeline smooth and logical.
    • Flexible Length: In theory, AR models can keep generating indefinitely, creating videos of any length, as long as you have the computing power.
    • Shares DNA with Language Models: AR models, especially those using the popular Transformer architecture, work similarly to the powerful Large Language Models (LLMs). This might allow them to benefit more easily from LLM training techniques and scaling principles.

AR's Cons:

    • Slow Generation: The frame-by-frame process makes generation relatively slow, especially for high-resolution or long videos [55].
    • "An Early Mistake Can Mislead": If the model makes a small error early on, that error can get carried forward and amplified in later frames, causing the video to drift off-topic or become inconsistent [29].
    • Past Quality Issues: Older AR models relying on discrete tokens sometimes struggled with visual quality due to information loss during tokenization [11]. However, as mentioned, newer non-quantized methods are tackling this [52].

Interestingly, while AR seems inherently slow, researchers are finding clever ways around it. For instance, the NOVA model uses a "spatial set-by-set" prediction method, generating chunks of visual information within a frame in parallel, rather than pixel by pixel [35]. Techniques like parallel decoding [56] and caching intermediate results (KV caching) [55] are also speeding things up. Some studies even claim optimized AR models can now be faster than traditional diffusion models at inference [38]! This suggests AR's slowness might be more of an engineering challenge than a fundamental limit.
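
As a toy illustration of the KV-caching idea (a single attention head with hypothetical shapes; real systems cache per layer and per head):

```python
import torch

def attend(q, k, v):
    """Standard scaled dot-product attention for a single head."""
    w = torch.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)
    return w @ v

class KVCache:
    """Without caching, step t recomputes keys/values for all earlier positions.
    With a cache, each new token appends its own key/value once and reuses the rest."""
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, q_new, k_new, v_new):
        self.keys.append(k_new)                  # each entry has shape (d,)
        self.values.append(v_new)
        K = torch.stack(self.keys)               # (t, d): grows by one row per step
        V = torch.stack(self.values)
        return attend(q_new.unsqueeze(0), K, V)  # per-step cost O(t), not O(t^2)
```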

Style 2: The Diffusion "Refining the Rough" Method

Diffusion models have been the stars of the image generation world and are now major players in video too [4]. Their core idea is a bit counter-intuitive: first break it, then fix it [17].

Imagine you have a clear video. The "forward process" in diffusion involves gradually adding random "noise" to it, step by step, until it becomes a completely chaotic mess, like TV static [29].

What the AI learns is the "reverse process": starting from pure noise, it iteratively removes the noise, step by step, guided by your instructions (like a text prompt), eventually "restoring" a clear, meaningful video [29].
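
A bare-bones sketch of both processes, in the common DDPM-style noise-prediction parameterization; `denoiser`, `alphas`, and `alpha_bars` are hypothetical stand-ins for a trained network and its noise schedule:

```python
import torch

def forward_noise(x0, alpha_bar_t):
    """Forward process: blend a clean video x0 with Gaussian noise.
    `alpha_bar_t` is a tensor in (0, 1); smaller values mean more noise."""
    eps = torch.randn_like(x0)
    xt = alpha_bar_t.sqrt() * x0 + (1 - alpha_bar_t).sqrt() * eps
    return xt, eps  # the network is trained to recover eps from (xt, t)

@torch.no_grad()
def sample(denoiser, shape, alphas, alpha_bars):
    """Reverse process: start from pure noise and denoise step by step."""
    x = torch.randn(shape)
    for t in reversed(range(len(alphas))):
        eps_hat = denoiser(x, t)                  # predicted noise at step t
        x = (x - (1 - alphas[t]) / (1 - alpha_bars[t]).sqrt() * eps_hat) / alphas[t].sqrt()
        if t > 0:                                 # re-inject a little noise except at the end
            x = x + (1 - alphas[t]).sqrt() * torch.randn_like(x)
    return x
```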

    • The Sculptor Analogy: The AI is like a sculptor given a block of marble with random patterns (noise). Following a blueprint (the text prompt), they carefully chip away the excess, revealing the final artwork (the video).
    • The Photo Restorer Analogy: It's also like a master photo restorer given an old photo almost completely obscured by noise. Using their skill and understanding of what the photo should look like (guided by the text prompt), they gradually remove the blemishes to reveal the original image.

How it Works (Simplified):

The key word for diffusion is iteration. Getting from random noise to a clear video involves many small denoising steps, often dozens to thousands of them [29].

To make this more efficient, many top models like Stable Diffusion and Sora [1] use a technique called Latent Diffusion Models (LDM) [5]. Instead of working directly on the huge pixel data, they first use an "encoder" to compress the video into a smaller, abstract "latent space." They do the heavy lifting (adding and removing noise) in this compact space, and then use a "decoder" to turn the result back into a full-pixel video. It's like our sculptor making a small clay model first – much more manageable [16]!
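
For shape intuition only, here is how the pieces of an LDM pipeline fit together; `vae` and `denoise_latents` are hypothetical stand-ins for a real autoencoder and denoising loop, not any library's API:

```python
import torch

def generate_video_ldm(vae, denoise_latents, text_prompt, latent_shape=(16, 4, 32, 32)):
    """Hypothetical LDM pipeline: diffuse in a compact latent space, then decode."""
    z = torch.randn(latent_shape)                  # noise in latent space, not pixel space
    z = denoise_latents(z, condition=text_prompt)  # the iterative "sculpting" happens here
    return vae.decode(z)                           # e.g. (16, 4, 32, 32) -> (16, 3, 256, 256)
```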

Architecture-wise, diffusion models often started with U-Net-like convolutional (CNN) structures [15] but are increasingly adopting the powerful Transformer architecture (creating Diffusion Transformers, or DiTs) [29] as their core "sculpting" tool.

Diffusion's Pros:

    • Stunning Visual Quality: Diffusion models currently lead the pack in generating images and videos with incredible visual fidelity and rich detail [29].
    • Handles Complexity Well: They are often better at rendering complex textures, lighting, and scene structures [4].
    • Stable Training: Compared to some earlier generative techniques like GANs, training diffusion models is generally more stable and less prone to issues like "mode collapse" [29].

Diffusion's Cons:

    • Slow Generation (Sampling): The iterative denoising process takes time, making video generation lengthy [55]. Fine sculpting requires patience.
    • Temporal Coherence is Still Tricky: While individual frames might look great, ensuring perfect smoothness and natural motion across a long video remains a challenge [5]. The sculptor might focus too much on one part and forget how it fits the whole.
    • Needs Serious Computing Power: Training and running diffusion models demand significant computational resources (like powerful GPUs) [5], making them less accessible [57].

To tackle the slowness, researchers are in a race to speed things up. Besides LDM, techniques like Consistency Models [11] aim to learn a "shortcut," allowing the model to jump from noise to a high-quality result in just one or a few steps instead of hundreds. Methods like Distribution Matching Distillation (DMD) [55] "distill" the knowledge of a slow but powerful "teacher" model into a much faster "student" model. The goal is near-real-time generation without sacrificing too much quality [55].
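
The "shortcut" idea in code form: a consistency model maps any noisy input directly to a clean estimate, so sampling can be one call, or two for extra quality. A minimal sketch, where `f` and the sigma values are hypothetical (schedules vary across papers):

```python
import torch

@torch.no_grad()
def consistency_sample(f, shape, sigma_max=80.0, sigma_mid=2.0):
    """1- or 2-step sampling with a consistency model f(x, sigma) -> clean estimate."""
    x = sigma_max * torch.randn(shape)       # start from (scaled) pure noise
    x0 = f(x, sigma_max)                     # one network call: noise -> clean estimate
    x = x0 + sigma_mid * torch.randn(shape)  # optional second step: re-noise slightly...
    return f(x, sigma_mid)                   # ...and denoise once more to sharpen details
```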

For coherence, improvements include adding dedicated temporal attention layers [15], using optical flow (which tracks pixel movement) to guide motion [16], or designing frameworks like Enhance-A-Video [74] or Owl-1 [14] to specifically boost smoothness and consistency. It seems that after mastering static image quality, making videos move realistically and tell a coherent story is the next big frontier for diffusion models.

Which Style to Choose? Storytelling vs. Sculpting

So, which approach is "better"? It depends on what you value most.

Here's a quick comparison:

AR vs. Diffusion at a Glance

| Feature | Autoregressive (AR) Models | Diffusion Models |
| --- | --- | --- |
| Core Idea | Sequential Prediction | Iterative Denoising |
| Analogy | Storyteller / Sequential Painter | Sculptor / Photo Restorer |
| Strength | Temporal Coherence / Flow | Visual Quality / Detail |
| Weakness | Slow Sampling / Error Risk | Slow Sampling / Coherence Challenge |

If you prioritize a smooth, logical flow, especially for longer videos, AR's sequential nature might be more suitable [50]. If you're after the absolute best visual detail and realism in each frame, diffusion currently holds the edge [17]. But remember, both are evolving fast and borrowing from each other.

The Best of Both Worlds: When Storytellers Meet Sculptors

Since AR and Diffusion have complementary strengths, why not combine them [29]?

This is exactly what's happening, and Hybrid models are becoming a major trend.

    • Idea 1: Divide and Conquer. Let an AR model sketch the overall plot and motion (the "storyboard"), then have a Diffusion model fill in the high-quality visual details [50].
    • Idea 2: AR Framework, Diffusion Engine. Keep the AR frame-by-frame structure, but instead of predicting discrete tokens, use Diffusion-like methods to predict the continuous visual information for each step [44]. Models like NOVA and FAR lean this way.
    • Idea 3: Diffusion Framework, AR Principles. Use a Diffusion model but incorporate AR ideas, like enforcing stricter frame-to-frame dependencies (causal attention, as in the sketch after this list) or making the noise process time-aware [29]. AR-Diffusion [29] and CausVid [55] are examples.
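
One small, concrete piece of Idea 3 is the frame-level causal attention mask. A sketch under assumed conventions (boolean mask, True = attention allowed); the function name is ours, not from any cited paper:

```python
import torch

def frame_causal_mask(num_frames: int, tokens_per_frame: int) -> torch.Tensor:
    """Attention mask blending the two schools: tokens attend freely within
    their own frame (diffusion-style) but only to earlier frames across
    time (AR-style causality)."""
    frame_ids = torch.arange(num_frames).repeat_interleave(tokens_per_frame)
    # Entry (i, j) is True iff the key's frame is not later than the query's frame.
    return frame_ids.unsqueeze(1) >= frame_ids.unsqueeze(0)

# frame_causal_mask(3, 2): queries in frame 2 may see frames 0-2,
# but queries in frame 0 never see frames 1-2.
```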

The sheer number of models with names blending AR and Diffusion concepts (AR-Diffusion, ARDiT, DiTAR, LanDiff, MarDini, ART-V, CausVid, Transfusion, HART, etc.) [29] shows this is where much of the action is. It's less about choosing one side and more about finding the smartest way to combine their powers.

The Road Ahead: Challenges and Dreams for AI Video

Despite the incredible progress, AI video generation still has hurdles to overcome [17]:

    • Making Longer Videos: Most AI videos are still short. Generating minutes-long (or longer!) videos that stay coherent and interesting is a huge challenge [29].
    • Better Control and Faithfulness: Getting the AI to exactly follow complex instructions (like "a Shiba Inu wearing a beret and black turtleneck" [47]) or specific actions and emotions is tricky. AI can still misunderstand or "hallucinate" things not in the prompt [29].
    • Faster Generation: For practical use, especially interactive tools, AI needs to generate videos much faster than it currently does [5].
    • Understanding Real-World Physics: AI needs a better grasp of how things work in the real world. Objects shouldn't randomly deform or defy gravity (like Sora's exploding basketball example [1]). Giving AI "common sense" is key to true realism [4].

But the future possibilities are dazzling:

    • Personalized Content: Imagine AI creating a short film based on your idea, starring you [14]. Or generating educational videos perfectly tailored to your learning style.
    • Empowering Creatives: Giving artists, designers, and filmmakers powerful new tools to bring their visions to life [2].
    • Building Virtual Worlds: AI could go beyond just showing the world to actually simulating it, creating "World Models" that understand cause and effect [14]. This has huge implications for scientific simulation, game development, and training autonomous systems [5]. This shift from "image generation" to "world simulation" reveals a deeper ambition: not just mimicking reality, but understanding its rules [4].
    • Unified Multimodal AI: Future AI might seamlessly understand and generate text, images, video, and audio all within one unified system [11].

Achieving these dreams hinges heavily on improving efficiency. Generating long videos, enabling real-time interaction, and building complex world models all require immense computing power. Making these models faster and cheaper to run isn't just convenient; it's essential for unlocking their full potential [5]. Efficiency is the key that unlocks this future.

Conclusion: A New Era of Visual Storytelling

AI video generation is advancing at breakneck speed, constantly pushing the boundaries of what's possible [4]. Whether it's the sequential "storyteller" approach of AR models, the refining "sculptor" method of Diffusion models, or the clever combinations found in Hybrid models [17], AI is learning to weave light and shadow with pixels, and to tell stories through motion.

We're witnessing the dawn of a new era in visual storytelling. AI won't just change how we consume media; it will empower everyone with unprecedented creative tools. Of course, with great power comes great responsibility. We must also consider how to use these tools ethically, ensuring they foster creativity and understanding, rather than deception and harm [13].

The future is unfolding frame by frame. The next AI-directed blockbuster might just start with an idea you have right now. Let's watch this space!

Works cited

[1] Asynchronous Video Generation with Auto-Regressive Diffusion - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2503.07418v1

[2] [2503.07418] AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2503.07418

[3] AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion | Request PDF - ResearchGate, accessed on April 28, 2025, https://www.researchgate.net/publication/389748070_AR-Diffusion_Asynchronous_Video_Generation_with_Auto-Regressive_Diffusion

[4] Video Diffusion Models: A Survey - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2405.03150v2

[5] Video Is Worth a Thousand Images: Exploring the Latest Trends in Long Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2412.18688

[6] Autoregressive Models in Vision: A Survey - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2411.05902v1

[7] A Survey on Vision Autoregressive Model - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2411.08666v1

[8] SimpleAR: Pushing the Frontier of Autoregressive Visual Generation through Pretraining, SFT, and RL - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2504.11455v1

[9] On Improved Conditioning Mechanisms and Pre-training Strategies for Diffusion Models - NIPS papers, accessed on April 28, 2025, https://proceedings.neurips.cc/paper_files/paper/2024/file/18023809c155d6bbed27e443043cdebf-Paper-Conference.pdf

[10] Opportunities and challenges of diffusion models for generative AI - Oxford Academic, accessed on April 28, 2025, https://academic.oup.com/nsr/article/11/12/nwae348/7810289?login=false

[11] Video Diffusion Models - A Survey - OpenReview, accessed on April 28, 2025, https://openreview.net/pdf?id=sgDFqNTdaN

[12] The Best of Both Worlds: Integrating Language Models and Diffusion Models for Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2503.04606v1

[13] ChaofanTao/Autoregressive-Models-in-Vision-Survey - GitHub, accessed on April 28, 2025, https://github.com/ChaofanTao/Autoregressive-Models-in-Vision-Survey

[14] [2412.09600] Owl-1: Omni World Model for Consistent Long Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2412.09600

[15] arXiv:2412.07772v2 [cs.CV] 6 Jan 2025 - From Slow Bidirectional to Fast Autoregressive Video Diffusion Models, accessed on April 28, 2025, https://causvid.github.io/causvid_paper.pdf

[16] SimpleAR: Pushing the Frontier of Autoregressive Visual Generation through Pretraining, SFT, and RL - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2504.11455

[17] Phenaki - SERP AI, accessed on April 28, 2025, https://serp.ai/tools/phenaki/

[18] openreview.net, accessed on April 28, 2025, https://openreview.net/pdf/9cc7b12b9ea33c67f8286cd28b98e72cf43d8a0f.pdf

[19] Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation, accessed on April 28, 2025, https://www.researchgate.net/publication/390038718_Bridging_Continuous_and_Discrete_Tokens_for_Autoregressive_Visual_Generation

[20] Autoregressive Video Generation without Vector Quantization ..., accessed on April 28, 2025, https://openreview.net/forum?id=JE9tCwe3lp

[21] Long-Context Autoregressive Video Modeling with Next-Frame Prediction - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2503.19325v1

[22] Language Model Beats Diffusion — Tokenizer is Key to Visual Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2310.05737

[23] Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2503.16430v2

[24] Auto-Regressive Diffusion for Generating 3D Human-Object Interactions, accessed on April 28, 2025, https://ojs.aaai.org/index.php/AAAI/article/view/32322/34477

[25] Fast Autoregressive Video Generation with Diagonal Decoding - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2503.14070v1

[26] One-Minute Video Generation with Test-Time Training, accessed on April 28, 2025, https://test-time-training.github.io/video-dit/assets/ttt_cvpr_2025.pdf

[27] Photorealistic Video Generation with Diffusion Models - European Computer Vision Association, accessed on April 28, 2025, https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/10270.pdf

[28] arXiv:2412.03758v2 [cs.CV] 24 Feb 2025, accessed on April 28, 2025, https://www.arxiv.org/pdf/2412.03758v2

[29] Advancing Auto-Regressive Continuation for Video Frames - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2412.03758v1

[30] From Slow Bidirectional to Fast Autoregressive Video Diffusion Models - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2412.07772v2

[31] Enhance-A-Video: Better Generated Video for Free - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2502.07508v3

[32] [D] The Tech Behind The Magic : How OpenAI SORA Works : r/MachineLearning - Reddit, accessed on April 28, 2025, https://www.reddit.com/r/MachineLearning/comments/1bqmn86/d_the_tech_behind_the_magic_how_openai_sora_works/

[33] Delving Deep into Diffusion Transformers for Image and Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2312.04557v1

[34] CVPR Poster Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution - CVPR 2025, accessed on April 28, 2025, https://cvpr.thecvf.com/virtual/2024/poster/31563

[35] SwiftTry: Fast and Consistent Video Virtual Try-On with Diffusion Models - AAAI Publications, accessed on April 28, 2025, https://ojs.aaai.org/index.php/AAAI/article/view/32663/34818

[36] Latte: Latent Diffusion Transformer for Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2401.03048v2

[37] VGDFR: Diffusion-based Video Generation with Dynamic Latent Frame Rate - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2504.12259v1

[38] [2501.00103] LTX-Video: Realtime Video Latent Diffusion - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2501.00103

[39] LTX-Video: Realtime Video Latent Diffusion - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2501.00103v1

[40] Magic Mirror: ID-Preserved Video Generation in Video Diffusion Transformers - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2501.03931v1

[41] LaMD: Latent Motion Diffusion for Image-Conditional Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2304.11603v2

[42] Video-Bench: Human-Aligned Video Generation Benchmark - ResearchGate, accessed on April 28, 2025, https://www.researchgate.net/publication/390569999_Video-Bench_Human-Aligned_Video_Generation_Benchmark

[43] Advancements in diffusion models for high-resolution image and short form video generation, accessed on April 28, 2025, https://gsconlinepress.com/journals/gscarr/sites/default/files/GSCARR-2024-0441.pdf

[44] NeurIPS Poster StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation, accessed on April 28, 2025, https://neurips.cc/virtual/2024/poster/94916

[45] FrameBridge: Improving Image-to-Video Generation with Bridge Models | OpenReview, accessed on April 28, 2025, https://openreview.net/forum?id=oOQavkQLQZ

[46] Learning Spatial Adaptation and Temporal Coherence in Diffusion Models for Video Super-Resolution - CVPR 2024 Open Access Repository, accessed on April 28, 2025, https://openaccess.thecvf.com/content/CVPR2024/html/Chen_Learning_Spatial_Adaptation_and_Temporal_Coherence_in_Diffusion_Models_for_CVPR_2024_paper.html

[47] Subject-driven Video Generation via Disentangled Identity and Motion - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2504.17816v1

[48] AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion - alphaXiv, accessed on April 28, 2025, https://www.alphaxiv.org/overview/2503.07418

[49] Phenaki - Reviews, Pricing, Features - SERP, accessed on April 28, 2025, https://serp.co/reviews/phenaki.video/

[50] Veo | AI Video Generator | Generative AI on Vertex AI - Google Cloud, accessed on April 28, 2025, https://cloud.google.com/vertex-ai/generative-ai/docs/video/generate-videos

[51] Generate videos in Gemini and Whisk with Veo 2 - Google Blog, accessed on April 28, 2025, https://blog.google/products/gemini/video-generation/

[52] Sora: Creating video from text - OpenAI, accessed on April 28, 2025, https://openai.com/index/sora/

[53] Top AI Video Generation Models in 2025: A Quick T2V Comparison - Appy Pie Design, accessed on April 28, 2025, https://www.appypiedesign.ai/blog/ai-video-generation-models-comparison-t2v

[54] ART•V: Auto-Regressive Text-to-Video Generation with Diffusion Models - CVF Open Access, accessed on April 28, 2025, https://openaccess.thecvf.com/content/CVPR2024W/GCV/papers/Weng_ART-V_Auto-Regressive_Text-to-Video_Generation_with_Diffusion_Models_CVPRW_2024_paper.pdf

[55] Simplified and Generalized Masked Diffusion for Discrete Data - arXiv, accessed on April 28, 2025, https://arxiv.org/pdf/2406.04329

[56] Unified Multimodal Discrete Diffusion - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2503.20853

[57] Simple and Effective Masked Diffusion Language Models - arXiv, accessed on April 28, 2025, https://arxiv.org/pdf/2406.07524

[58] [2107.03006] Structured Denoising Diffusion Models in Discrete State-Spaces - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2107.03006

[59] Structured Denoising Diffusion Models in Discrete State-Spaces, accessed on April 28, 2025, https://proceedings.neurips.cc/paper/2021/file/958c530554f78bcd8e97125b70e6973d-Paper.pdf

[60] Your Absorbing Discrete Diffusion Secretly Models the Conditional Distributions of Clean Data - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2406.03736v2

[61] Fast Sampling via Discrete Non-Markov Diffusion Models with Predetermined Transition Time - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2312.09193v3

[62] [2406.03736] Your Absorbing Discrete Diffusion Secretly Models the Conditional Distributions of Clean Data - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2406.03736

[63] AR-Diffusion: Auto-Regressive Diffusion Model for Text Generation | OpenReview, accessed on April 28, 2025, https://openreview.net/forum?id=0EG6qUQ4xE

[64] Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2410.14157v3

[65] [R] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution - Reddit, accessed on April 28, 2025, https://www.reddit.com/r/MachineLearning/comments/1ezyunc/r_discrete_diffusion_modeling_by_estimating_the/

[66] [2412.07772] From Slow Bidirectional to Fast Autoregressive Video Diffusion Models - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2412.07772

[67] Long-Context Autoregressive Video Modeling with Next-Frame Prediction - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2503.19325v2

[68] Long-Context Autoregressive Video Modeling with Next-Frame Prediction - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2503.19325

[69] ManiCM: Real-time 3D Diffusion Policy via Consistency Model for Robotic Manipulation - arXiv, accessed on April 28, 2025, https://arxiv.org/pdf/2406.01586?

[70] G-U-N/Awesome-Consistency-Models: Awesome List of ... - GitHub, accessed on April 28, 2025, https://github.com/G-U-N/Awesome-Consistency-Models

[71] showlab/Awesome-Video-Diffusion: A curated list of recent diffusion models for video generation, editing, and various other applications. - GitHub, accessed on April 28, 2025, https://github.com/showlab/Awesome-Video-Diffusion

[72] [PDF] EvalCrafter: Benchmarking and Evaluating Large Video Generation Models, accessed on April 28, 2025, https://www.semanticscholar.org/paper/66d927fdb6c2774131960c75275546fd5ee3dd72

[73] [2502.07508] Enhance-A-Video: Better Generated Video for Free - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2502.07508

[74] NeurIPS Poster FIFO-Diffusion: Generating Infinite Videos from Text without Training, accessed on April 28, 2025, https://nips.cc/virtual/2024/poster/93253

[75] StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text, accessed on April 28, 2025, https://openreview.net/forum?id=26oSbRRpEY

[76] Owl-1: Omni World Model for Consistent Long Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2412.09600v1

[77] Ca2-VDM: Efficient Autoregressive Video Diffusion Model with Causal Generation and Cache Sharing - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2411.16375v1

[78] ViD-GPT: Introducing GPT-style Autoregressive Generation in Video Diffusion Models - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2406.10981v1

[79] TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models - CVF Open Access, accessed on April 28, 2025, https://openaccess.thecvf.com/content/CVPR2024/papers/Ni_TI2V-Zero_Zero-Shot_Image_Conditioning_for_Text-to-Video_Diffusion_Models_CVPR_2024_paper.pdf

[80] Training-Free Motion-Guided Video Generation with Enhanced Temporal Consistency Using Motion Consistency Loss - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2501.07563v1

[81] DiTAR: Diffusion Transformer Autoregressive Modeling for Speech Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/html/2502.03930v1

[82] VBench-2.0: A Framework for Evaluating Intrinsic Faithfulness in Video Generation Models, accessed on April 28, 2025, https://www.reddit.com/r/artificial/comments/1jmgy6n/vbench20_a_framework_for_evaluating_intrinsic/

[83] NeurIPS Poster GenRec: Unifying Video Generation and Recognition with Diffusion Models, accessed on April 28, 2025, https://neurips.cc/virtual/2024/poster/94684

[84] Evaluation of Text-to-Video Generation Models: A Dynamics Perspective - OpenReview, accessed on April 28, 2025, https://openreview.net/forum?id=tmX1AUmkl6&noteId=MAb60mrdAJ

[85] [CVPR 2024] EvalCrafter: Benchmarking and Evaluating Large Video Generation Models - GitHub, accessed on April 28, 2025, https://github.com/evalcrafter/EvalCrafter

[86] [2412.18688] Video Is Worth a Thousand Images: Exploring the Latest Trends in Long Video Generation - arXiv, accessed on April 28, 2025, https://arxiv.org/abs/2412.18688
