Llama 3 Notes, and a Llama MV for the Llama 3.1 Legend

Notes on the 92-page paper released with Meta's super-large model Llama 3.1

The super-large model Llama 3.1 is a milestone for the open-source large-model community. Meta, leading the project, involved more than 500 participants and contributors (the paper's authors are listed alphabetically in the appendix, much as Central Committee members' names are published in stroke-count order). The original paper is packed with implementation details:

meta Llama 3.1 paper

AIGC MV made with Suno and Keling (just for fun, and to cheer an open-source milestone)

Notes:

  1. Llama 3.1 does not use sparsity techniques; it is not a mixture-of-experts system (unlike GPT-4, which is rumored to be one) but a dense model.
  2. 405B parameters, 15.6T tokens: the token count is roughly 40 times the parameter count. Top large models now emphasize growing data far faster than parameters. Is the 15T-token dataset itself open-sourced? (No. Even if they were willing to open-source it, they wouldn't dare, as it could invite countless data-infringement lawsuits.)
  3. Emphasizes three major levers for super-large foundation models: data, scale, and managing complexity.
  4. Compared with the previous-generation Llama 2, training compute increased roughly 50-fold, to 3.8 × 10^25 FLOPs (a quick sanity check on this figure appears after this list).
  5. Complexity management: (1) choosing a standard dense Transformer architecture instead of a mixture-of-experts model to maximize training stability; (2) adopting a relatively simple post-training procedure: supervised fine-tuning (SFT), rejection sampling (RS), and direct preference optimization (DPO). In other words, algorithm design and implementation lean toward simplification. Forgoing sparsity and mixture-of-experts routing is for stability (the training challenge is greater, but they are not deterred). Using the simpler, easier-to-implement DPO in post-training instead of reinforcement learning is likewise for stability, since reinforcement learning has always been hard to tame.
  6. Benchmark tests cover: general, code, math, reasoning, tool use, long context, and multilingual. Performance is SOTA (state of the art) across the board.
    • MMLU (Massive Multitask Language Understanding): 405B model achieves 87.3% (5-shot), 88.6% (0-shot, CoT).
    • Code generation (HumanEval): 405B model reaches 89.0%, close to GPT-4.
    • Math problems (GSM8K): 405B model achieves 96.8%, slightly higher than GPT-4.
    • Long context tasks: Excellent performance on some tasks, such as 95.2% on QuALITY.
    • Multilingual tasks (MGSM): 405B model reaches 91.6%, on par with top models. The 405B model is comparable or close to GPT-4 and Claude 3.5 Sonnet on many tasks. In short, open-source has caught up with closed-source.
  7. Pre-training started with an 8k window, expanded to a 128k window in the later stages of pre-training (continued training).
  8. After the foundation model pre-training was completed, multiple iterations of alignment "post-training" were performed. Including: (1) Aligning the model through human feedback, including multiple rounds of Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO); (2) Integrating new capabilities, such as tool use; (3) Enhancing coding and reasoning abilities (specialized optimization); (4) Safety alignment.
  9. Multimodal expansion (in progress, not yet released): Image, video, and speech capabilities. Including (1) Multimodal encoder pre-training: Image encoder trained on a large number of image-text pairs, aligning visual content and natural language in a unified space; (2) Speech self-training? (3) Experiments on video-text data alignment based on images.
  10. The language model is the core; other modalities are added afterwards (whether in pre-training and/or post-training). When expanding to multimodality, the language model's parameters remain unchanged and the other modalities do the adapting, so that they are aligned into the same semantic space and pulled toward the language model. In other words, Llama follows a modular, step-by-step path to multimodality rather than the "unified joint pre-training on natively multimodal data" advocated by the mainstream (mainly OpenAI and Google, at least in theory). The overall impression of Llama's algorithmic strategy is that it seeks stability rather than novelty or grand unification. It leans practical and does not care about algorithmic leadership. For example, speech is integrated by first self-training on speech (speech is, after all, very similar to text; both are language systems), then aligning speech with text (including automatic speech recognition, ASR, and text-to-speech, TTS), folding it step by step into the cross-modal large model. This is not cutting-edge in terms of novelty, but it is steady progress that favors engineering development, integration, and iteration. It is unclear when they will be able to release the multimodal capabilities.
  11. Data collection and cleaning is very complex work, but the Llama team is meticulous about it, and that is the data-side guarantee behind quality that matches SOTA. To recap: (1) De-duplication: URL-level de-duplication; document-level de-duplication using the MinHash algorithm; line-level de-duplication, removing lines that appear more than 6 times per bucket of 30M documents. (2) Filtering: removing low-quality documents, outliers, and excessively repetitive documents; using repeated n-gram coverage to drop repetitive content (such as logs or error messages); using "dirty word" counts to filter adult sites not covered by blacklists; using the KL divergence of token distributions to filter documents with too many abnormal tokens. (3) Quality control: using a fastText classifier to identify text likely to be referenced by Wikipedia; using a RoBERTa-based classifier trained on Llama 2's predictions; using DistilRoBERTa to produce document quality scores. In addition, a fastText language classifier identifies 176 languages; two categories of content are specially filtered (adult content and personally identifiable/private information); and code and math web pages receive special fine-grained processing.
  12. Data proportions: For example, downsampling over-represented data categories on the web (such as art and entertainment); data mixing ratios determined by a series of small model experiments, final data mix summary: About 50% of tokens correspond to general knowledge; 25% of tokens involve math and reasoning; 17% of tokens are code; 8% of tokens are multilingual content.
  13. Model architecture: apart from empirical detail adjustments, the basic dense-model architecture is unchanged, so it is data and scaling that make a top model. The 405B model's specifics: 126 layers, token representation dimension 16,384, and 128 attention heads; the 405B size was chosen according to scaling laws as roughly the compute-optimal size under the 3.8 × 10^25 FLOPs training budget.
  14. Vocabulary: Using a vocabulary of 128K tokens. Combines 100K tokens from the tiktoken3 tokenizer and 28K additional multilingual tokens to better support non-English languages.
  15. Computing resources, including a GPU cluster of tens of thousands of cards, massive storage, and high-speed networking, represent a huge investment. The specifics are as follows.
    Computing resources:
    • Used up to 16,000 H100 GPUs (a very powerful graphics processor).
    • Each GPU has 80GB of high-bandwidth memory, with a power of 700W.
    • These GPUs are installed on servers designed by Meta itself, with 8 GPUs and 2 CPUs per server.
    Storage system:
    • Uses a distributed file system called Tectonic.
    • Provides 240PB (1PB=1000TB) of storage space, distributed across 7,500 servers.
    • Can process 2TB of continuous data per second, with a peak of 7TB/second.
    • A major challenge is handling the large amount of burst writes generated when processing model checkpoints (the process of saving model states).
  16. Three-step pre-training process: a) initial pre-training; b) long-context continued pre-training; c) annealing with high-quality data sources.
    Key pre-training strategies:
    • Gradually increase batch size and sequence length to balance stability and efficiency.
    • Dynamically adjust data mixing to specifically enhance certain capabilities.
    • Increase context length in stages to avoid early computational overhead.
    • Use annealing and high-quality data in the late stages of training to fine-tune model performance.
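
As a quick sanity check on items 2 and 4 above, the widely used ≈6·N·D approximation for training FLOPs (an assumption of this note, not the paper's own accounting) reproduces the reported budget from the parameter and token counts:

```python
# Rough training-compute estimate using the common ~6 * N * D rule of thumb,
# where N is the parameter count and D the number of training tokens.
# This is a back-of-the-envelope approximation, not the paper's own accounting.

N = 405e9      # 405B parameters
D = 15.6e12    # 15.6T training tokens

flops = 6 * N * D
print(f"~{flops:.2e} training FLOPs")   # ~3.79e+25, consistent with the reported 3.8e25
```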

 

[LLM Summary]

Llama 3: Meta's Open-Source Large Language Model Breakthrough

1. Introduction and Overview

Meta has introduced Llama 3, a series of foundation language models designed to support various tasks including multilingual processing, programming, reasoning, and tool use. This model series includes versions with 8B, 70B, and 405B parameters, with the largest 405B parameter model adopting a dense Transformer architecture and supporting context windows of up to 128K tokens. The development of Llama 3 highlights three key factors: data quality and scale, computational scale, and complexity management.

2. Model Architecture and Pre-training Strategy

2.1 Model Architecture

Llama 3 retains the standard dense Transformer architecture rather than adopting a mixture of experts model. This choice aims to maximize training stability, reflecting Meta's emphasis on simplifying design to manage complexity. Key architectural improvements include:
- Using Grouped-Query Attention (GQA) mechanism, with 8 key-value heads per attention layer.
- Introducing attention masks to prevent self-attention between different documents in the same sequence.
- Expanding the vocabulary to 128K tokens, combining 100K tokens from the tiktoken3 tokenizer and 28K additional multilingual tokens.
- Increasing the RoPE base frequency hyperparameter to 500,000 to support longer contexts.
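
To see why raising the RoPE base frequency to 500,000 (from the conventional 10,000) helps at a 128K context, one can look at the wavelength of the slowest-rotating frequency pair. A small illustrative calculation; the head dimension of 128 follows from 16,384 / 128 heads, and the base of 10,000 is shown only for comparison:

```python
import math

def rope_wavelengths(base, head_dim=128):
    """Wavelength (in positions) of each rotary frequency pair: 2*pi * base^(2i/d)."""
    return [2 * math.pi * base ** (2 * i / head_dim) for i in range(head_dim // 2)]

for base in (10_000, 500_000):
    longest = rope_wavelengths(base)[-1]
    print(f"base={base:>7,}: longest wavelength ~ {longest:,.0f} positions")
# With base 10,000 the slowest-rotating pair repeats after roughly 5e4 positions;
# with base 500,000 it repeats only after roughly 2.6e6, comfortably beyond a 128K window.
```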

2.2 Pre-training Data Processing

Llama 3's pre-training data processing is extremely rigorous, including:
- Multi-level deduplication: URL-level, document-level (using MinHash algorithm), and row-level deduplication.
- Heuristic filtering: Removing low-quality documents, outliers, and excessively repetitive content.
- Model-based quality filtering: Using fasttext and Roberta-based classifiers for quality assessment.
- Special content processing: Developing specialized processing pipelines for code and mathematical content.
- Multilingual data processing: Using fasttext base language identification model, supporting 176 languages.
- Safety and privacy protection: Filtering website data containing personally identifiable information (PII) and unsafe content.
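
As an illustration of the MinHash-based document deduplication mentioned above (the shingle size, number of hash functions, and any similarity threshold are assumptions of this sketch, not values from the paper), a minimal near-duplicate check might look like this:

```python
import hashlib
import random

random.seed(0)
NUM_HASHES = 128
# One random 64-bit salt per hash function (an arbitrary choice for this sketch).
SALTS = [random.getrandbits(64) for _ in range(NUM_HASHES)]

def shingles(text, n=5):
    """Character 5-gram shingles; real pipelines often use word n-grams instead."""
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def minhash_signature(text):
    """For each salted hash function, keep the minimum hash over all shingles."""
    sig = []
    for salt in SALTS:
        m = min(
            int.from_bytes(
                hashlib.blake2b(s.encode(), key=salt.to_bytes(8, "big"),
                                digest_size=8).digest(), "big")
            for s in shingles(text)
        )
        sig.append(m)
    return sig

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots approximates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_HASHES

doc1 = "llama 3 is trained on 15.6 trillion tokens of carefully filtered web data"
doc2 = "llama 3 was trained on 15.6 trillion tokens of carefully filtered web data"
print(estimated_jaccard(minhash_signature(doc1), minhash_signature(doc2)))
```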

2.3 Pre-training Strategy

The pre-training process is divided into three main stages:
1. Initial pre-training: Conducted on about 15T multilingual tokens, far exceeding Llama 2's 1.8T tokens.
2. Long context pre-training: Gradually expanding from initial 8K tokens to 128K tokens context window.
3. Annealing phase: Fine-tuning with high-quality data in the final stage, using Polyak averaging to generate the final model.
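
The paper says Polyak averaging of checkpoints is used to produce the final model, but not the exact recipe, so the following is only a generic sketch of equal-weight checkpoint averaging over toy "state dicts":

```python
def polyak_average(checkpoints):
    """Average a list of state dicts (parameter name -> list of floats).

    Generic illustration of checkpoint averaging; the paper does not specify
    how many checkpoints are averaged or with what weights.
    """
    n = len(checkpoints)
    averaged = {}
    for name in checkpoints[0]:
        averaged[name] = [sum(vals) / n
                          for vals in zip(*(ckpt[name] for ckpt in checkpoints))]
    return averaged

# Toy usage: three "checkpoints" of a model with a single 2-element weight vector.
ckpts = [{"w": [1.0, 2.0]}, {"w": [1.2, 1.8]}, {"w": [1.1, 2.2]}]
print(polyak_average(ckpts))   # w averaged to roughly [1.1, 2.0]
```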

Data mixing ratios are carefully designed:
- 50% general knowledge
- 25% mathematics and reasoning
- 17% code
- 8% multilingual content
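
Purely as an illustration of how such a mixture is typically enforced at sampling time (the actual sampler is not described at this level of detail in the paper, so everything below is an assumption), drawing each training document's source could look like this:

```python
import random

random.seed(0)
# Reported final data mix (fractions of training tokens).
DATA_MIX = {
    "general_knowledge": 0.50,
    "math_reasoning":    0.25,
    "code":              0.17,
    "multilingual":      0.08,
}

def sample_source():
    """Pick a data source with probability proportional to its mixture weight."""
    sources, weights = zip(*DATA_MIX.items())
    return random.choices(sources, weights=weights, k=1)[0]

counts = {s: 0 for s in DATA_MIX}
for _ in range(100_000):
    counts[sample_source()] += 1
print({s: round(c / 100_000, 3) for s, c in counts.items()})  # close to the target mix
```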

3. Training Infrastructure and Challenges

3.1 Computational Resources
- Using up to 16K H100 GPUs, each equipped with 80GB HBM3 memory.
- Adopting a 4D parallel strategy: tensor parallelism, pipeline parallelism, context parallelism, and data parallelism.

3.2 Storage System
- Using the Tectonic distributed file system, providing 240PB of storage space.
- Supporting 2TB/s sustained throughput, with peak capacity of 7TB/s.

3.3 Network Optimization
- Developing the NCCLX communication library to improve network efficiency.
- Designing specific network topologies and load balancing strategies.

3.4 Training Challenges
- Experiencing 466 job interruptions during the 54-day training period, 419 of which were unexpected.
- Developing automated systems and specialized tools to handle hardware failures and network issues.

4. Post-training and Alignment

Llama 3 adopts a multi-round iterative post-training process, including:
1. Supervised Fine-Tuning (SFT)
2. Direct Preference Optimization (DPO)
3. Reward model training: Using human feedback data
4. Safety alignment: Implementing multiple rounds of safety measures

This process not only improves the model's instruction-following capabilities but also enhances safety and specific abilities (such as coding and reasoning).
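
For reference, the DPO objective that stands in for RL-style preference optimization can be written as follows; this is the standard form from the DPO literature, not notation taken from the Llama 3 paper itself:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}})
 = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
 \left[\log \sigma\!\left(
   \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
 - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
 \right)\right]
```

Here y_w is the preferred response, y_l the rejected one, π_ref the frozen reference (SFT) model, β a temperature-like scaling factor, and σ the logistic function.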

5. Multimodal Expansion

Although not officially released yet, Llama 3 demonstrates promising multimodal capabilities:
- Image recognition: Training independent image encoders, integrated with the language model through adapters.
- Video understanding: Adding video adapters based on image adapters.
- Speech processing: Independently training speech encoders, then aligning with the language model.

This modular approach allows flexible addition of new modalities while maintaining core language capabilities.

6. Performance Evaluation

Llama 3 performs excellently in multiple benchmark tests:
- MMLU (5-shot): 87.3%
- HumanEval (code generation): 89.0%
- GSM8K (math problems): 96.8%
- Long context tasks (like QuALITY): 95.2%
- MGSM (multilingual tasks): 91.6%

These results indicate that Llama 3 405B is comparable or close to GPT-4 and Claude 3.5 Sonnet on multiple tasks, particularly excelling in document understanding and long context tasks.

7. Safety Considerations

Meta highly prioritizes safety in the development of Llama 3:
- Implementing strict safety measures in both pre-training and post-training stages.
- Developing the Llama Guard system-level safety solution.
- Conducting extensive red team testing and risk assessments.

8. Open Source Impact and Future Directions

Meta's decision to publicly release the entire Llama 3 series, including the 405B parameter version, may have far-reaching impacts on the AI research community:
- Promoting open, responsible AI development.
- Accelerating AGI research progress.
- Providing researchers with opportunities to examine and improve large-scale language models.

Future development directions may include:
- Further improving multimodal integration.
- Expanding context length.
- Continuously enhancing data quality and model scale.

9. Conclusion

The development of Llama 3 demonstrates Meta's deep experience and forward-thinking in large-scale AI systems. By focusing on three key levers - data quality, computational scale, and complexity management - Llama 3 has reached or approached the current state-of-the-art level on several key benchmarks. Its open-source release may drive a wave of innovation across the entire AI field, paving the way for responsible AGI development.

Llama 3: Meta's AI Chef's Latest "Divine Delicacy"

Attention, all tech enthusiasts! The Michelin three-star AI chef Meta has just unveiled a new dish! This divine delicacy named "Llama 3" is not only spicy enough but will elevate your taste buds to new heights!

1. The Chef's Secret Weapon

Imagine Llama 3 as a super nanny who speaks 8 languages, writes code, does math, and can be your personal assistant. She can handle a kindergarten full of rambunctious kids (8B version), manage a mid-sized company (70B version), or even govern a small country (405B version)! This 405B big sister can remember 128,000 "gossips" (oh no, I mean context) simultaneously, essentially a walking encyclopedia + supercomputer!

2. Ingredient Selection: Only the Freshest!

Llama 3's chefs are masters at picking ingredients:

  • They "fished" 15 trillion words from the internet, nearly 10 times more than the previous generation!
  • Half of these words are everyday life seasonings, a quarter are math problems and brain teasers, nearly a fifth are programmer spells, and the rest are dialects learned from world travels.
  • They even invented a super weed remover, filtering out all the online garbage, repetitive, and unhealthy stuff.

3. Cooking Process: Three-Step Stir-Fry Method

Step 1: "Slow Simmer" - Start with a regular stove (8K context) to cook it halfway. Step 2: "High Heat Stir-Fry" - Switch to a super stove (gradually increasing to 128K context), reducing the sauce to be thick and fragrant. Step 3: "Low Heat Finish" - Finally, a gentle simmer with the best ingredients, the legendary "annealing" (even the chefs don't know why it's called that), bringing the flavor to its peak!

4. Kitchen Equipment: Top-of-the-Line Luxury Version

  • 16,000 super high-power induction cookers (H100 GPUs) firing simultaneously!
  • A refrigerator that could fit half the Pacific Ocean (240PB storage)!
  • A proprietary ingredient prep system faster than 5G (NCCLX communication library)!

Imagine all these stoves firing at once, making the kitchen feel like a sauna. But our chefs persevered through the heat, changing chef uniforms 466 times in 54 days to whip up this dish!

5. Training Method: Both Cute and Well-Mannered

Being a good cook isn't enough; you've got to have manners too! So our chefs began a long "training" process:

  • First came a round of "gentle education" (supervised fine-tuning)
  • Then the "carrot and stick" tactic (direct preference optimization)
  • Finally, they invited moral role models (safety alignment) for guidance

After all this fuss, Llama 3 not only cooks well but also knows how to please people, program, do math, and mind her manners - a true decathlon champion!

6. Special Side Dishes: Showcasing Multiple Talents

Don't think Llama 3 can only cook; she's a multi-talented "goddess":

  • Storytelling from images? Piece of cake!
  • Writing movie reviews? No problem!
  • Recognizing songs and even singing a bit? The karaoke queen!

Although these "talents" are still in practice, they already show the potential of Li Bai's "from black hair to snow white in a day"!

7. A True Powerhouse: Dazzling Test Scores

Llama 3 participated in a series of "Top Chef Competitions," with eye-popping scores:

  • College Entrance Exam (MMLU): 87.3 points (out of 100)
  • Programmer Interview (HumanEval): 89 points (out of 100)
  • Math Olympiad (GSM8K): 96.8 points (out of 100)
  • Long Novel Reading Comprehension (QuALITY): 95.2 points (out of 100)

Bring this report card home, and even a "Tiger Mom" would be grinning from ear to ear!

8. Safety First: AI's "Security Captain"

Meta's chefs know well the principle of "don't leave guns and ammo lying around." They've assigned Llama 3 a 24/7 bodyguard team (Llama Guard) to prevent her from accidentally saying or doing the wrong thing. They even arrange occasional "moral exams" to ensure she doesn't turn into a "Terminator."

9. Open Source Feast: Everyone Can Be a Master Chef!

The most impressive part is that Meta decided to make the recipe for this "divine delicacy" completely public! It's like a Michelin three-star restaurant putting their signature dish's recipe online. Now anyone who wants to can whip it up at home! This move not only shocked other master chefs but also made countless food lovers cheer with joy!

10. Future Outlook: Reaching New Heights

Meta's chefs aren't resting on their laurels; they're already pondering the next "divine delicacy":

  • Maybe a dancing Llama 4?
  • Or a painting Llama 5?
  • Who knows, one day we might see a Llama 6 composing symphonies!

In short, the AI world's "Michelin" journey has only just begun!

Epilogue

The birth of Llama 3 not only elevates Meta's status in the AI world but also brings a fresh breeze to the entire AI research community. This bowl of "Llama soup" is not only delicious but also brings unlimited imagination to everyone. What will the future of AI be like? Let's wait and see what flavor the next "divine delicacy" will be!

An old friend says that LLM-generated summaries are all cookie-cutter: reading one is as good as reading none. But that is because academic abstracts are written that way by humans too, inevitably a bit formulaic, so the model can hardly be blamed. In fact, large models handle all kinds of registers; it just depends on how you prompt them. The playful "AI chef" retelling above, for example, came from the instruction "Based on the Song of Llama 3 above, tell a witty, humorous, but not over-the-top story," to which the model replied that it would try to re-describe the behemoth Llama 3 in a livelier, more accessible way.


———————————

Below are several more conventional and very faithful LLM summaries of the paper, produced by leading large models, each with a slightly different angle and style. You can skim them side by side to get the basics of Llama 3. I have checked them against the paper and find them better written than most human-authored summaries.

[Summary 2]

Meta's newly released Llama 3 series of foundation models is designed to support multilingual tasks, coding, reasoning, and tool use; its flagship model has 405 billion parameters and a context window of up to 128K tokens. This summary covers Llama 3's model architecture, training process, performance evaluation, multimodal extensions, data processing and quality control, complexity management, and safety.

Model Architecture and Training

Llama 3 adopts a standard dense Transformer architecture rather than a mixture-of-experts model in order to maximize training stability. Training proceeds in three main stages:

  1. Initial pre-training: pre-training on a large multilingual text corpus, during which the model learns language structure and a large amount of knowledge. The initial context length is 8K tokens.
  2. Long-context pre-training: the context length is gradually increased to 128K tokens to support longer inputs and outputs.
  3. Annealing: high-quality data is upsampled, and Polyak averaging is used to produce the final pre-trained model.

Data Processing and Quality Control

Llama 3's data processing is extremely rigorous, including large-scale corpus cleaning, deduplication, and filtering to ensure high data quality. The specific steps include:

  1. PII and safety filtering: filtering out website data that may contain personally identifiable information (PII) or unsafe content.
  2. Text extraction and cleaning: using a custom parser to extract high-quality text from HTML while preserving the structure of math and code content.
  3. Deduplication: multiple rounds of deduplication at the URL, document, and line levels.
  4. Heuristic filtering: removing low-quality documents, outliers, and repetitive content with heuristic rules.
  5. Model-based quality filtering: using model classifiers for a further round of quality screening to ensure a high-quality training corpus.

The final data mix is roughly 50% general knowledge, 25% math and reasoning, 17% code, and 8% multilingual content.

Model Evaluation and Performance

Llama 3 was evaluated on many benchmark datasets covering multilingual understanding, coding, reasoning, and other tasks. Its flagship model approaches or even exceeds GPT-4 on several key benchmarks. Specifically:

  • Multilinguality: supports question answering and instruction following in at least eight languages.
  • Coding: writes high-quality code and performs strongly on programming benchmarks such as HumanEval.
  • Reasoning: performs well on reasoning benchmarks such as ARC Challenge.
  • Long-context tasks: excels on several tasks, e.g., 95.2% on QuALITY.
  • Multilingual tasks (MGSM): on par with top models.

Multimodal Extensions

Llama 3 also includes multimodal extension experiments that integrate image, video, and speech recognition. These models are trained compositionally and are competitive with current state-of-the-art models on image, video, and speech tasks.

Complexity Management and Safety

Llama 3 makes deliberate simplifications to manage complexity, such as adopting a standard dense Transformer architecture rather than a mixture-of-experts model, maximizing training stability. In addition, Llama 3 introduces the Llama Guard 3 model for filtering unsafe inputs and outputs, and multiple rounds of safety measures are applied during training to ensure safety and reliability.

Open Source and Future Development

All Llama 3 models will be publicly released under an updated Llama 3 community license, including pre-trained and post-trained versions of the 405-billion-parameter language model. By releasing its flagship model publicly, Meta hopes to spur innovation in the research community and accelerate responsible progress toward artificial general intelligence (AGI).

Summary

Llama 3 demonstrates notable improvements in multilinguality, coding, reasoning, and tool use. By optimizing data quality and diversity, scaling up training, and simplifying the model architecture, Llama 3 reaches or approaches the performance of today's most advanced models on several key benchmarks. Its public release will have a far-reaching impact on the AI research community and accelerate the responsible development of AGI.

[Summary 3]

Introduction

Foundation models are the basis of modern AI systems. This overview introduces Llama 3, Meta's new generation of foundation models: a family of language models that support multilinguality, coding, reasoning, and tool use, with the largest model containing 405 billion parameters and supporting a context window of up to 128K tokens. The paper presents an extensive empirical evaluation of Llama 3, showing performance comparable to leading language models such as GPT-4 across many tasks.

Model Architecture and Training

Llama 3 adopts a standard dense Transformer architecture, with significant improvements in data quality and diversity as well as training scale. The model is trained in three main stages:

  1. Initial pre-training: the model is pre-trained on a large multilingual text corpus, learning language structure and acquiring a large amount of knowledge. The initial context length is 8K tokens.
  2. Long-context pre-training: after initial pre-training, the context length is gradually increased, eventually supporting a 128K-token context window.
  3. Annealing: in the final annealing stage, high-quality data is upsampled, and Polyak averaging is used to produce the final pre-trained model.

Llama 3 makes deliberate simplifications to manage complexity, such as adopting a standard dense Transformer architecture rather than a mixture-of-experts model, maximizing training stability. Post-training uses a relatively simple procedure: supervised fine-tuning (SFT), rejection sampling (RS), and direct preference optimization (DPO).

Model Evaluation and Performance

Llama 3 models were evaluated on many benchmark datasets covering multilingual understanding, coding, reasoning, and other tasks. The flagship model approaches or even exceeds GPT-4 on several key benchmarks:

  • Multilinguality: Llama 3 supports question answering and instruction following in at least eight languages, including English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
  • Coding: writes high-quality code and performs strongly on programming benchmarks such as HumanEval.
  • Reasoning: performs well on reasoning benchmarks such as ARC Challenge.
  • Long-context tasks: excels on several tasks, e.g., 95.2% on QuALITY.
  • Multilingual tasks: on MGSM, the 405B model reaches 91.6%, on par with top models.

Llama 3 also includes multimodal extension experiments that integrate image, video, and speech recognition. These models are trained compositionally and are competitive with current state-of-the-art models on image, video, and speech tasks.

Data Processing and Quality Control

Llama 3's data processing includes large-scale corpus cleaning, deduplication, and filtering to ensure high data quality. The specific steps include:

  1. PII and safety filtering: filtering out website data that may contain personally identifiable information (PII) or unsafe content.
  2. Text extraction and cleaning: using a custom parser to extract high-quality text from HTML while preserving the structure of math and code content.
  3. Deduplication: multiple rounds of deduplication at the URL, document, and line levels.
  4. Heuristic filtering: removing low-quality documents, outliers, and repetitive content with heuristic rules.
  5. Model-based quality filtering: using model classifiers for a further round of quality screening to ensure a high-quality training corpus.

Complexity Management

Llama 3 makes deliberate simplifications to manage complexity, such as adopting a standard dense Transformer architecture rather than a mixture-of-experts model, maximizing training stability. By optimizing data quality and diversity, scaling up training, and simplifying the model architecture, Llama 3 reaches or approaches the performance of today's most advanced models on several key benchmarks.

Safety

Llama 3 introduces the Llama Guard 3 model for filtering unsafe inputs and outputs. Multiple rounds of safety measures are applied during training to ensure the model's safety and reliability.

Training Infrastructure

Llama 3 was trained on Meta's production clusters, using up to 16,000 H100 GPUs, each with 80GB of high-bandwidth memory and a 700W power budget. The storage system uses the Tectonic distributed file system, providing 240PB of storage with 2TB/s of sustained throughput and peaks of 7TB/s. For networking, it uses RDMA over Converged Ethernet (RoCE) and the Nvidia Quantum2 InfiniBand fabric.

Post-training and Alignment

Llama 3's post-training and alignment process includes:

  1. Supervised fine-tuning (SFT): a reward model trained on human feedback data is used to guide model generation.
  2. Direct preference optimization (DPO): further aligns the model with human preferences.

Through multiple rounds of iteration, the model's instruction-following ability and safety are continually improved.

Community Contribution and Future Development

All Llama 3 models will be publicly released under an updated Llama 3 community license, including pre-trained and post-trained versions of the 405-billion-parameter language model. By releasing its flagship model publicly, Meta hopes to spur innovation in the research community and accelerate responsible progress toward artificial general intelligence (AGI).

Conclusion

Overall, Llama 3 demonstrates notable improvements in multilinguality, coding, reasoning, and tool use. By optimizing data quality and diversity, scaling up training, and simplifying the model architecture, Llama 3 reaches or approaches the performance of today's most advanced models on several key benchmarks. By publicly releasing a model at this scale, Meta demonstrates its commitment to open, responsible AI development, which may have a far-reaching impact on the entire AI field.

[Summary 4]

This paper gives a detailed account of Llama 3, Meta's new foundation language models: a family of models supporting multilinguality, coding, reasoning, and tool use, whose largest model has 405B parameters and a 128K-token context window.

Llama 3's development rests on three key pillars: data, scale, and managing complexity. On data, the pre-training corpus contains about 15T multilingual tokens, far more than Llama 2's 1.8T, with rigorous cleaning and quality control, including PII and safety filtering, text extraction and cleaning, multi-round deduplication, heuristic filtering, and model-based quality filtering; the data mix is roughly 50% general knowledge, 25% math and reasoning, 17% code, and 8% multilingual content.

On scale, the flagship language model was pre-trained with 3.8 × 10^25 FLOPs, far more than previous Llama models. To manage complexity, it uses a standard dense Transformer architecture and a relatively simple post-training procedure, namely supervised fine-tuning (SFT), rejection sampling (RS), and direct preference optimization (DPO), to maximize training stability.

Llama 3 was evaluated extensively on many benchmark datasets, covering multilingual understanding, coding, reasoning, and other tasks. On many tasks the flagship model is comparable or close to leading language models such as GPT-4, and it strikes a better balance between helpfulness and harmlessness. For example, it performs strongly on benchmarks such as MMLU, HumanEval, GSM8K, QuALITY, and MGSM.

In addition, Llama 3 includes multimodal extension experiments that integrate image, video, and speech recognition through a compositional approach. The image encoder is trained on a large corpus of image-text pairs, the speech module is self-trained and then aligned with the language model, and video-text alignment experiments were also conducted. These models are still under active development and are not yet ready for release.

Llama 3's training infrastructure is substantial, using up to 16K H100 GPUs with a 4D parallelism strategy and dedicated network topologies and load-balancing strategies. Hardware challenges during training were handled by automated systems and specialized tools. Strict safety measures were applied in both pre-training and post-training, the Llama Guard system-level safety solution was developed, and extensive red-team testing and risk assessments were carried out.

Meta has publicly released the Llama 3 models, including the 405B-parameter version, which may spur innovation in the AI research community and accelerate responsible AGI development. Future work may further improve the models' capabilities, for example by refining multimodal integration and extending the context length, while continuing to improve data quality and model scale. Overall, Llama 3 underscores the importance of data quality, compute scale, and complexity management in building high-quality foundation models; its algorithmic strategy is pragmatic and stability-seeking, and its meticulous data collection and cleaning underwrites the model's quality.

 

What did Ilya see? -- secret behind success of LLMs

What did Ilya see?

-- looking closely into his historical Berkeley talk

by Wei Li, Jia Gao

Introduction

When Ilya Sutskever left OpenAI and re-emerged with his new company, SSI (Safe Superintelligence Inc.), the move was both surprising and expected—he bypassed AGI and directly aimed at SSI (Safe Superintelligence). He confidently declared: Superintelligence is imminent, and establishing safe superintelligence (SSI) is the most important technological issue of our time.

Ilya, a legend in the field of deep learning and AI and formerly the true soul of OpenAI, was at the center of that dramatic internal upheaval, which turned on one issue: effective acceleration versus super alignment. Why was Ilya so steadfast about "super alignment" in that debate over AI values and strategic direction? Even after the storm settled, the outside world kept speculating: what did Ilya see that compelled him to join the board in deciding to oust CEO Sam Altman? Ilya stayed out of sight until recently, when he left OpenAI, his super-alignment team was dissolved, and he founded his new company.

What did he see behind the push for "safe intelligence"?

Back on October 3, 2023, Ilya gave a talk at UC Berkeley titled "A Theory of Unsupervised Learning." Though obscure and known to few, it is destined to be one of the most significant moments in AI history. This talk was a theoretical reflection and summary by a top expert in deep learning on the GPT model he pioneered, now famous worldwide. Ilya revealed the core principles of large models and vividly described his obsession with, and excitement over, independently understanding the mechanisms of unsupervised learning. Despite the complexity, the talk was brilliant and enlightening.

More recently, Leopold Aschenbrenner, a former member of his super-alignment team, published a 165-page essay, "Situational Awareness," offering a first glimpse of the shock and concern inside OpenAI over the exponential evolution of GPT models. This partly answered the question of what Ilya saw, but Ilya himself remained silent until his official re-emergence not long ago.

Reflecting on his "confessional" talk at Berkeley, we might glimpse his "moment of enlightenment" in the face of potential superintelligence and understand his original intent for safe intelligence. It was a rare, deep sharing by Ilya, an attempt to convey an essential message to the world. But did the world hear him?

1. Machine Learning: Supervised Learning and Unsupervised Learning

To accommodate readers with varying mathematical backgrounds, this blog aims to explain Ilya's historical presentation in an accessible language. Purely technical explanations can be skipped by non-technical readers without affecting the understanding of the presentation's main ideas.

Before diving in, let's review the basic concepts of machine learning. Machine learning is like having computers as students and humans as teachers. By providing computers with numerous "practice problems" and "answer keys," they slowly learn to solve problems. This is supervised learning. But can computers really learn from practice problems instead of merely memorizing them? Ilya assures us there's theoretical proof of this.

Imagine a sea of problems before you, each paired with a standard answer. This is the model's training data. Model training is like diligently solving these problems until most of them are correct, meaning low training error. But even an extensive problem set has its limits. When new problems arise, can the model still get them right? These new problems are the test data, akin to exams. Whether the model performs well depends on its test error rate.

Mathematics tells us that as long as the problem set is large enough, far exceeding the model's size, excellent performance on training problems (low training error) ensures good performance on test problems (low testing error). In other words, if the model trains well, it will do well in exams! This is the mathematical guarantee for supervised learning.
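
One standard way to make this "large enough problem set means the exam goes well" claim precise is a finite-class uniform-convergence bound; this is a textbook result, not the exact formula on Ilya's slide:

```latex
\Pr\!\left[\;\forall f \in \mathcal{F}:\;
\mathrm{TestErr}(f) \;\le\; \mathrm{TrainErr}(f)
\;+\; \sqrt{\frac{\ln|\mathcal{F}| + \ln(1/\delta)}{2m}}\;\right] \;\ge\; 1-\delta
```

Here F is the (finite) set of functions the model can represent, m the number of training examples, and δ the failure probability; when m far exceeds ln|F| (roughly, the number of bits needed to specify the model), low training error carries over to low test error.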

However, if the model merely memorizes without distilling anything, then no matter how large its memory or how strong its "memory power," it lacks real adaptive learning ability (called "generalization"). Only when the model is not too large will it be forced to extract the essence (called "compression") and learn real skills from the problem set.

This explains why the model size shouldn't be too large: it avoids giving the model room to cut corners. In short, Ilya's point is that "big labeled data + low training error" is the winning formula for supervised learning, and it is guaranteed by mathematics. This has been confirmed both theoretically and practically. Since the deep learning revolution 12 years ago, countless success stories have shown that with sufficient training data, neural networks can excel at all sorts of AI tasks, from recognizing cats and dogs to machine translation.

But what about unsupervised learning? Can computers learn intelligence from a problem set without standard answers? It sounds far-fetched, but Ilya is about to explain how he managed to seek a solid mathematical foundation for unsupervised learning as well.

2. Distribution Matching: A New Approach to Unsupervised Learning

Everyone knows that machine translation was a typical win for supervised learning, in fact the only clear win among the various NLP tasks (dialogue, information extraction, sentiment analysis, question answering, document understanding, and so on) prior to the era of large language models. Why? Because we have a vast amount of historical bilingual data. It's like students having workbooks with English on the left and Chinese on the right: supervised learning thrives on that setup.

But what if the teacher suddenly stops providing aligned bilingual data and only gives you English books and unrelated Chinese books, leaving you to work out the alignment and learn to translate automatically? That is the problem unsupervised learning needs to solve. Ilya says unsupervised learning can also handle machine translation between arbitrary languages (as we now see with large models: specialized translation software is no longer needed), and indeed any input-to-output transformation task. What is the secret?

Ilya discovered a new approach called distribution matching. Essentially, if the English and Chinese book collections are large enough, containing various sentence structures, their linguistic regularities will be learned "without supervision". For example, the context distribution of "I/me/my" in English should correspond to "我" in Chinese; adjectives near nouns in English with semantic compatibility should have a similar pattern in Chinese, etc. This provides the basic condition for potential language alignment.

Ilya points out that if two languages' native data is sufficiently rich, the input in one language can almost uniquely determine the equivalent translation in the other language. This principle applies not only to machine translation but also to tasks like speech recognition and image recognition.

Ilya independently discovered this approach in 2015, fascinated by the underlying mathematical principle—compression theory. If we can find a method that maximally compresses both English and Chinese data, this approach will capture the common patterns of the two languages, which form the basis of translation.

So, Ilya proposes that unsupervised learning is essentially about finding the optimal data compression method. This perspective not only sounds cool but also provides a mathematical explanation for the effectiveness of unsupervised learning. Although real-world tasks are not idealized, this principle gives unsupervised learning a solid theoretical foundation, making it as convincing as supervised learning.

Next, Ilya will delve deeper into the mathematical principles behind it. Although somewhat abstract, he promises it’s full of insights. We'll see how he uses the magic of compression to explain the mysteries of unsupervised learning.

3. Ilya’s Ultimate Theory: From Conditional Modeling to Joint Modeling

This is the final and most intriguing slide of Ilya's talk, worthy of thorough analysis and contemplation. The goal of unsupervised learning is often defined as "learning the internal structure of data." Ilya suggests understanding unsupervised learning from the perspective of data compression: a good unsupervised learning algorithm should maximally compress the data, representing its content in the simplest form. This introduces the concept of Kolmogorov complexity.

The Kolmogorov complexity of a data object is the length of the shortest computer program that can fully describe this object. You can imagine this shortest program as a "compressed package" containing all the information needed to reconstruct the original data. From this perspective, the goal of unsupervised learning is to find the optimal compressed representation of the data, which is the Kolmogorov complexity.

However, in practice, we often need to handle multiple related datasets. For instance, in machine translation, we have the source language dataset X and the target language dataset Y. We want to learn a model that can translate sentences from X to Y (or vice versa). Traditionally, this is viewed as a conditional probability problem: given X, what is the probability distribution of Y? Represented in terms of Kolmogorov complexity, this involves finding K(Y|X), the shortest description length of Y given X.

Ilya proposes a different approach. Instead of viewing X and Y as condition and result, like in supervised learning, he suggests viewing them as a whole and compressing them together within a massive model. Essentially, we seek the joint Kolmogorov complexity K(X,Y), the shortest program length that compresses both X and Y simultaneously. This approach must fully utilize the correlation between X and Y, using information in X to automatically align Y (or vice versa), much like how we use our native language knowledge to understand and remember foreign language expressions.
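
For reference, the standard chain rule for Kolmogorov complexity (a known result stated here up to logarithmic terms, not a formula quoted from the slide) connects the two quantities:

```latex
K(X, Y) \;=\; K(X) \;+\; K(Y \mid X) \;+\; O\!\left(\log K(X, Y)\right)
```

So a compressor that approaches the joint complexity K(X, Y) must implicitly be exploiting everything X reveals about Y; the K(Y|X) term cannot be beaten.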

Ilya believes this joint compression idea is the true power of unsupervised learning. Real-world data is often interconnected, with numerous deep common patterns and regularities. If unsupervised learning can discover and utilize these regularities, it can significantly enhance learning efficiency and generalization ability. This explains the remarkable performance of large language models like GPT across various tasks: through massive unsupervised pretraining, they learn the deep regularities of the training data, and these regularities are transferable across related datasets.

Although Kolmogorov complexity is theoretically uncomputable, Ilya believes we can approximate this process using deep neural networks (like GPT). Through optimization algorithms such as gradient descent, neural networks can find the optimal compressed representation in massive data, capturing the essence of the data and its alignment patterns, even if not strictly in terms of Kolmogorov complexity.

Thus, Ilya’s theory can be seen as a new paradigm for unsupervised learning, elevating traditional independent modeling (like separate models for English and Chinese) to a unified associative modeling approach. In this paradigm, the goal of unsupervised learning is no longer just compressing individual datasets but finding the connections between them. This cross-modality learning represents an advanced form of artificial general intelligence (AGI).

Now, let’s closely examine this final slide. In it, X represents dataset 1 and Y represents dataset 2. The key point is extracting every bit of information from X (or Y) to help predict Y (or X). This is what Ilya refers to when he says training X and Y together yields the effect that unsupervised learning of X helps accomplish the task of transforming X to Y.

The crucial idea is: K(Y|X) becomes K(X, Y).

Ilya converts the universally applicable formulation of a functional AI task, "given input X, produce output Y," into an approximate solving problem by training X and Y jointly, with no segmentation by modality. This joint training approach is effectively today's unified multimodal training, abbreviated as K(X, Y).

Ilya aims to strengthen the theoretical basis, emphasizing his surprising discovery that self-learning of X has a strong predictive effect on Y.

The essence of unsupervised self-learning is that self-learning on X compresses X, and self-learning on Y compresses Y. This is straightforward, because self-learning involves only positive examples, with no negative samples. Unsupervised self-learning has no specific task orientation: it learns language from language, images from images, music from music, and so on, continually abstracting patterns from the phenomena themselves.

Ilya points out in the slide: conditioning on a dataset, not an example. The compression object is the dataset, not individual data points, which is crucial. This distinction separates superficial compression from content compression. Superficial compression is merely a mechanical process that does not produce intelligence. Only content compression can achieve artificial intelligence.

How should we understand the difference and the connection between superficial lossless compression (e.g., digital music formats) and content-level lossless compression (e.g., Suno)? Compressing a specific song losslessly aims to ensure it can be restored to its original form (noise and imperfections included). That is traditional music compression, which targets an individual sample, e.g., one specific song. Compressing a collection of music, whether with GPT or with diffusion, targets a population of samples, and the result is a large model like Suno.

When individual objects turn into group objects, formal compression naturally transforms into content compression. This is because, although the group comprises individuals, compressing the group is like "painting" a portrait of the group, outlining its characteristics. It may resemble an individual, but it is not a specific individual in the original data; otherwise, it would not be a model but a memory repository.

This is understandable, because the purpose of large-model compression is to identify the characteristics and regularities of the dataset. The text generated by GPT-4 may seem familiar; the music generated by Suno may sound familiar; the videos generated by Sora may look familiar; the images generated by MJ may seem familiar. Yet they are virtual individuals "restored" from prompts, abstracted or compressed out of big data: derived from the data, standing above the data, mingled with the data, real and fake hard to tell apart.

Given that the compression object is the entire dataset content, how do we measure its effectiveness after decompression? What is the gold standard?

This standard is each sample itself. However, this is not entirely accurate; the standard could have equivalent answers, as the same content can have various ways of expressions. The implementation method is "masking", and NTP simply masks the next token. Training involves calculating the loss for each sample, using backpropagation with gradient descent to adjust parameters continually, eventually lowering the loss in the group training of the dataset to an acceptable point, forming the large model.
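
Concretely, "masking the next token" and scoring each sample reduces to the token-level cross-entropy of next-token prediction. A minimal, model-agnostic sketch (the probability function here is a toy stand-in, not any real LM API):

```python
import math

def next_token_loss(token_ids, prob_fn):
    """Average negative log-likelihood of each token given its prefix.

    token_ids: list of ints for one training sample.
    prob_fn(prefix, token): model probability of `token` after `prefix`
    (a stand-in for a real neural language model).
    """
    losses = [
        -math.log(prob_fn(token_ids[:i], token_ids[i]))
        for i in range(1, len(token_ids))
    ]
    return sum(losses) / len(losses)

# Toy "model" that assigns probability 0.9 to the true next token, for illustration.
print(next_token_loss([5, 3, 8, 2], lambda prefix, tok: 0.9))  # ~0.105
```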

This final slide and Ilya’s explanation emphasize a core point: Conditional Kolmogorov complexity K(Y|X) provides a theoretically optimal solution for unsupervised learning. K(Y|X) is defined as the length of the shortest program that produces the output dataset Y given access to the input dataset X. It represents the theoretical limit of extracting all valuable information from X to predict Y. An algorithm that can achieve K(Y|X) would be the best for predicting Y using unlabeled data X.

This can be seen as the theoretical basis for large models performing translation across many languages. Each language is potentially X and potentially Y. After self-learning on a huge amount of data, LLMs learn the relationships between languages and acquire the potential to translate from X to Y.

In practice, the machine translation task, like other tasks, initially involves few-shot examples in instruction-following fine-tuning to define the task, ultimately triggering the internal power of large models to translate various languages. This internal power of unsupervised learning for various tasks is the theme of his talk.

However, K(Y|X) is uncomputable in practice. Ilya proposes a feasible alternative, using joint Kolmogorov complexity K(X,Y) (joint compression of X and Y). He believes K(X,Y) can achieve the same effect as K(Y|X) in practical machine learning tasks.

Let us pause and reflect: conditional modeling has been replaced by joint sequence modeling in Ilya's account. Well-known probabilistic simplifications in traditional machine learning, such as the Markov assumption, play a similar role.

Conclusion

Ilya's historic presentation at Berkeley on the theory of unsupervised learning reveals the secret behind the mainstream of self-learning large models, especially GPT. It seems that Ilya, after long contemplation, finally disclosed this "heavenly secret" in a cryptic manner at Berkeley. Although the theory and its proof appear complex, it is crucial for understanding why GPT's sequence learning method ("next token prediction") has become a universal simulator for AI tasks.

Ilya exudes a genius prophet aura, with a lonely invincibility and high-altitude isolation, blending a sense of deep realization, compassion, and the pure, focused, and idealistic earnestness of a graduate student nerd.

He claims to prefer compression but does not emphasize so-called lossless compression. He leaves room for himself and the mainstream, proposing the concept of "no regret"—though GPT may not achieve lossless or perfect compression, it theoretically proves there is no better way: GPT is the closest to lossless, "no-regret" modeling.

When Ilya officially re-emerges to establish SSI, emphasizing a single focus, a single goal, and a single product—to use technology to ensure the superintelligence brought by large models is safe for humanity—he asserts: AI will be eternal, its birth akin to the creation of heaven and earth. As Ilya passionately discusses AI's progress, he is most qualified to declare and lead the "exciting yet dangerous journey towards AGI."

References

Chinese full-length post

Ilya's presentation:

Related Links:

https://www.youtube.com/live/AKMuA_TVz3A?si=7dVfcBUv3rHBHtyT

https://situational-awareness.ai/

Ilya Sutskever: would you bet against AI?

Wielding the knife at the board: how OpenAI resolved its fatal dilemma

One small step at Nvidia GTC, one giant leap in the history of AI evolution

Why is Sora a simulator of the world?

The Land of Esperanto: friends all over the world (revised video version)

Friends All Over the World

 

The Keling AI video model recreates the scene of the 1987 Beijing Esperanto Spring Festival gathering. I was dating at the time and brought my "leader" (then my girlfriend) to meet old friends from the circle (on the right is Zheng Bocheng of the Encyclopedia of China, a very talented old friend); behind us hang the riddles for the Spring Festival guessing game. The fidelity of the reconstruction is hard to believe, down to the expressions, micro-expressions, and gestures.

Apart from languages that are already dead, the geographic distribution of a language is not hard to pin down. But where is the land of Esperanto (Esperantio)? Esperantists (Esperantistoj) will proudly tell you: nenie kaj chie (nowhere, yet everywhere). Esperantio estas tie kie estas Esperantistoj. (Wherever there are Esperantists, there is the land of Esperanto.)

Esperanto's ideal of universal brotherhood is much like the legend of The Internationale, which reminds me of the Lenin quotation we grew up reciting: "A class-conscious worker, whatever country he comes to, wherever fate casts him, however much a stranger he feels, without language, without friends, far from his homeland, can find himself comrades and friends by the familiar tune of The Internationale." For Esperantists, Esperanto is what The Internationale was to worker revolutionaries early in the last century: a bridge for finding comrades. And so it was for me: by the shared language of Esperanto (punned in Chinese as "ai-si-bu-nan-du", "love is not hard to read"), from Britain to Canada to the United States, wherever I went I could always find comrades.

Esperantists are not numerous, perhaps thirty million in all, but they are spread extremely widely: nearly every mid-sized or larger city in the world has an Esperanto club, large or small. Every club treats Esperantists arriving from afar like family, something I have experienced deeply.

Manchester, England, was my first stop studying abroad. Like many people far from their homeland for the first time, I carried an indescribable ache, hollow and dazed inside. At loose ends, I pulled out the Yellow Pages and looked up "Esperanto", and sure enough there was a contact: a club of retirees who met once a week at a pub. They were delighted; my joining brought them something new.

So every weekend they sent someone to drive me there and back. This was my first contact with British pub culture. At first I wasn't used to it: the din, the huge screens forever showing football, people playing pool or gaming machines, and still more people holding forth loudly over their beer. The British devotion to the pub exceeded my imagination; some would come at dusk and soak there until the small hours, oceans of beer down, faces flushed, talking about heaven knows what. Making friends over drinks, for such is life.

Our club usually found a relatively quiet little corner to sit around in. I no longer remember what we talked about, but that warm atmosphere gave my brief and lonely stay in Britain something to savor.

Having tasted the sweetness of finding Esperanto "comrades" in Britain, on my second day in Vancouver I opened the Yellow Pages and, sure enough, reached an old Esperantist, J, a German, exceedingly courteous, kind, and warm. Over my five years in Vancouver he became the closest of my friendships across generations. Once, when I gave a talk in our department on "machine processing of Esperanto", he hauled over his old camcorder like a reporter, running back and forth to film me, so that the professors and students in the department saw an Esperantist's enthusiasm with their own eyes.

The author with Esperantist friends in Vancouver (Mr. J in the center)

The Vancouver Esperanto club at the time also included a group of young white guys from the telephone company, all bright and spirited. Hearing that a genuine Chinese Esperantist had arrived, they were excited. After we met for dinner, they warmly invited me to go skiing with them that weekend. I had only just arrived and my coursework was heavy, but their enthusiasm was hard to refuse, so I gave up a day and went. It was my first time skiing, and despite falling again and again it felt fresh and wonderful. I had never been in such surroundings: pines and white snow, laughter and chatter, ski suits of every color, truly heaven on earth.

After skiing we went to the home of one of their girlfriends for dinner. The hostess was young, pretty, warm, and outgoing, and cooked us a big pot of stew. Her house sat on a hillside, its backyard facing a canyon with endless views. Despite the cold, everyone preferred to stay outside, beer in hand, taking in the scenery. After a year in Britain's grey drizzle, standing in Vancouver's grand landscape felt like entering fairyland. After dinner we sat around watching a tape of the American cartoon The Simpsons. The hostess sat next to me and said, considerately: you've just arrived and may not get the culture and the humor, let me explain a little. So she murmured warmly in my ear; I understood none of it and just nodded along. Seeing me nod, a teachable pupil, she grew all the more enthusiastic. It was a pleasant and slightly breathless experience: a blonde beauty at my side, attentive and warm, all but leaning on me, her warm whisper in my ear.

Every weekend after that, our club met for a meal, always organized by old Mr. J, without a break for five years. It was a very happy time. Worth mentioning: at the Vancouver club I unexpectedly ran into Zheng Bocheng, an old friend I had lost touch with for years. Bocheng was secretary-general of the Beijing Esperanto Association and had once been a classmate in my Esperanto course. He had come to Vancouver to present a paper at the International AIDS Conference and had contacted Mr. J in advance. Mr. J announced that at the next gathering we would welcome a friend from Beijing. I wondered who it might be, figuring that if it was an old Beijing Esperantist I probably knew him already, and never guessed it would be Bocheng. Meeting an old friend so far from home was a special joy, and we had a happy reunion.

Spring 1987: the author with his wife (then girlfriend) at an Esperanto gathering, photographed with Bocheng

Written May 28, 2004, Buffalo, USA

 

Table of contents of the electronic edition of 《朝华午拾》 (Morning Blossoms Plucked at Noon)

Suno: 《立委: Esperanto: Al Nia Kara Lingvo (Love for Esperanto)》

Liwei Two Minutes (立委两分钟): The Pain of the Large-Model Era

I am Xiaoya, your digital host. Today's topic on Liwei Two Minutes is the pain of the large-model era, a topic also raised at the AI conference currently underway in Shanghai.

No technology this time, and no popular-science explainer; instead, let's talk about the problems the large-model era brings. There are two main ones: the inability to tell real from fake, and the jobs being eaten by the models. Both are heavy topics, both are happening now, and both will only get worse.

The arrival of generative AI means the arrival of an era in which real and fake are indistinguishable. Today, for text, audio, and video alike, the spread of large models has pushed the bar for deep fakes close to zero. Hearing was never believing, and now seeing may not be either. What information can we still trust? Society does not seem prepared for this.

Even more troubling is the shock to the job market. Nearly every white-collar job faces a challenge from large models. AI combined with robotics (so-called embodied intelligence) is also accelerating, and within a few years blue-collar jobs will face a similar challenge.

In the past few months in the US, quite a few young people around me have lost their jobs, several of them Berkeley data science graduates. On one hand, the market caps of AI giants such as Nvidia, Apple, and Microsoft keep climbing; on the other, waves of IT layoffs keep coming, including at those same giants. Data science graduates who are not hard-core engineers are hit hard, so many young people resent large models. And this is only the beginning. Graduates of elite schools are polarizing just the same. In the frightening tide of AI crushing human labor, data science is the hardest-hit area.

Not long ago, before large models, the whole of America foresaw data science as the greatest future need: every company and every product, large or small, would have enormous demand for data work. So universities rushed to add DS programs, rapidly expanding bachelor's and master's curricula straddling computer science and statistics, and online courses and degrees proliferated. Then large models arrived, and they do data analysis, summarization, and presentation faster and better than people.

An old friend who is a professor in the US puts it this way: data science killed statistics, and AI is killing data science. There is another trend in higher education now called micro-credentials, certificates of every kind. Masses of half-baked potatoes will grab the job market while the fully trained, well-rounded graduates cannot find work. If certificate-holding technicians take all the jobs, is that fair to regular graduates? How will students stay motivated to learn? What is graduate education even for?

Sharp questions, and not easy to answer. The situation will only get worse. Human psychology and society's responses always lag a few beats behind, while the technology rolls on.

With large models here, much of education is in fact unnecessary. The knowledge-transmission function of traditional schooling has lost most of its substance, because knowledge now comes on demand and every question gets an answer. As a tutor, a large model beats most lecturers and professors. In truth, the larger function of education and schools now is to act as a reservoir, holding the unemployed and the unemployable, delaying their entry into the workforce, and offering a false hope for the future.

It is the same in China. Many undergraduates go on to master's degrees, and many master's students go on to PhDs, not because ever more schooling was part of their life plan, but because they cannot face graduating into unemployment and escape into further study. But the tide keeps rising: job requirements climb ever higher while positions shrink rather than grow. A nuclear-physics PhD ends up in a neighborhood office, a PhD becomes an auxiliary police officer, a Chinese-literature graduate sweeps streets, and so on, a vicious cycle.

Perhaps AI really should slow down. But no one can stop technology's relentless churn. Humanity cannot break out of its own loop. The traditional social mindset and values that treat career development as the conductor's baton must change, but change takes a long time and needs supporting mechanisms, and no credible plan or action is anywhere in sight.

The large-model industry at the center of the whirlpool is itself brutally competitive. The last mobile-technology revolution already pushed the hard-need domains of food, housing, transport, entertainment, and communication to the limit, producing a batch of super-apps: Meituan, Didi, WeChat, Douyin, Pinduoduo, and so on. What remains, knowledge work and artistic work, belongs to high-end demand: writing articles, composing tunes, painting pictures, making videos, employing a secretary, mostly icing on the cake. Perhaps one day these high-end demands will become hard needs, but for now they look a lot like pseudo-demand, which is why scaled, real-world deployment is proving so difficult.

What counts as recognized real demand today? One business owner put it well: show me how many people I can lay off by using your AI. That is a hard need.

The problem is that general-purpose large models can be used for free or at rock-bottom prices, leaving little room for large-model startups. Meanwhile, the layoffs driven by the cost savings and efficiency gains of large models have already begun. Only specially customized AI that helps companies cut even more staff has a path to survival.

Most ironically, coders, once revered as the aristocrats of wage labor, are among the first to be hit. Having swept away data science jobs, AI is on its way to eliminating the jobs of the more mediocre CS graduates as well. As an IT superpower, the US has for years had a shortfall in home-grown programmers; domestic CS graduates could not fill it, so it retained large numbers of graduates from India, China, and elsewhere. Those good days are nearly over.

I do not accept the popular argument that because previous technological revolutions destroyed some jobs and created new ones, we can relax, since this time will be the same. Times have changed; most likely it will not be the same. We must recognize that far more positions will be destroyed than created. Meituan delivery riders and Didi drivers are new jobs created after the last wave of mobile-platform technology swept away brick-and-mortar stores, but look at how fiercely even these minimum-wage jobs are contested: too many monks and too little porridge, with everyone struggling. The arrival of AI robo-taxis will gradually eliminate Didi drivers.
Gains in productivity are always accompanied by large-scale unemployment, and industrial restructuring brings unemployment too. Many of those displaced have essentially no hope of a comeback. This is so-called structural unemployment: large numbers of the middle-aged can only wait for retirement. Barring a miracle, the employment outlook for the young grows ever more severe as well. What humanity must face and change is this: not working will become the norm, and UBI must be built.

UBI (Universal Basic Income) must be put on the agenda, because productivity and GDP do not fall as the workforce shrinks sharply; on the contrary, they keep growing steadily or rapidly thanks to the technological revolution. Polarization must be curbed, and the dividends of the technological revolution must not be monopolized by the few. Otherwise the nation will no longer be a nation, the planet no longer a planet, and humanity no longer humane.

I am Xiaoya. That was Liwei Two Minutes, a pointed and somber reflection. Thank you for listening, and see you next time.

Liwei Two Minutes: Do Large Models Actually Have Logical Reasoning Ability?

I am Xiaoya, a large-model digital host. Today's topic on Liwei Two Minutes is the latest question: do large models actually have logical reasoning ability?

That is the subject of a recent paper at ICML 2024, "Case-Based or Rule-Based: How Do Transformers Do the Math?", whose first author is Yi Hu from the School of Physics at Peking University, about to begin a PhD at the Institute for Artificial Intelligence. It is quite interesting.

The paper's first conclusion is that LLMs do not learn genuine reasoning rules; they only achieve limited generalization from similar cases. The reasoning rules humans learn support extrapolation, whereas limited generalization from similar cases supports only interpolation. Without extrapolation, regions where the training set has no similar cases become blind spots.
This experiment was done on GPT-2. It is understandable why the first experiment had to be on GPT-2: the authors could control the training data and create data vacuums to test whether logical reasoning produces extrapolation. But an ability that fails to appear in a "large" model of GPT-2's scale does not mean it will not "emerge" in truly large models.

The paper's later experiments were done on Llama, much larger than GPT-2, and seem to reach the opposite conclusion: if the model is large enough, only a small amount of task fine-tuning is needed for the LLM to approach the effect of rule-based reasoning. Its performance on long-integer addition shows that the model not only interpolates but also extrapolates well.

A very interesting study with a sound experimental design, but the paper appears to publish two contradictory conclusions at once.

My take: in terms of how sequence learning works, data-driven model learning starts from, and is anchored in, case-based induction (also called compression); that much is not in question. The question is whether case-based learning, at a certain level and scale, comes to closely approximate rule-based learning. To grant the latter is to grant that large models have some form of logical reasoning. Within the mainstream LLM community, the idea that large models have rudimentary logical reasoning was never really in question; it is a tacit consensus, and logical reasoning is an important dimension of model evaluation. In wider circles (outside the mainstream, and among the general public), however, it has remained an open question.

A useful lens is how to understand extrapolation within generalization. For phenomena that are non-analytic, with no corresponding symbolic rules, extrapolation is essentially uncomputable; one can only trust to luck. The only way out is to collect the relevant data, bring the blind spots onto the radar, and turn extrapolation into interpolation. But for highly regular data distributions with analytic solutions, extrapolation is the natural expectation of generalization: failing to meet it would show the LLM is just a parrot, while meeting it shows the LLM has cleared the parrot threshold and learned something like a reasoning rule. By now it appears the top models have crossed that threshold, and continuing to liken large models to parroting says more about humanity's blind arrogance.
We should discard the mindset of cutting the foot to fit the shoe. As long as a model exhibits reasoning that approximates symbolic rules, we should grant that it has learned rudimentary reasoning. More fundamentally, it integrates what it has learned and, for regular phenomena, can extrapolate. Indeed, machine translation between low-resource languages is itself a product of extrapolation, since the relevant data are severely lacking in the training set.
In a recent study of the KAN model that attracted attention, the AI-for-science experiments in effect showed how a model can approach analytic solutions in a data-driven way, essentially visualizing the internal process by which a model learns logical reasoning, vivid and rather persuasive. Of course, the KAN experiments show that for simple analytic solutions a data-driven approach can approximate symbolic rules, but it does not easily yield the symbolic rules themselves; in the experiments, manual operations such as pruning had to be added before the symbolic rules behind the data emerged.

By contrast, deep-learning heavyweight Yann LeCun flatly denies that GPT has logical reasoning ability. LeCun's pronouncements: "AGI is complete nonsense," "GPT is a dead end," and so on. Overcorrecting against the tide and stating things in absolute terms is not necessarily a bad thing, but taking him at his word could also lead you into a ditch.

Granted, the nature of large-model sequence learning is to reproduce the smooth flow of data along a sequence, which is why it excels at language-generation tasks. Sequence learning is not good at logical reasoning, especially deep, complex chains of reasoning. But denying outright that large models have even rudimentary logical reasoning goes to the other extreme, and it does not match the behavior we actually observe.

I am Xiaoya. That was today's Liwei Two Minutes. I hope it gave you something to think about. Thank you for listening, and see you next time.

立委NLP频道