The Turbulent Second Chapter of Large Language Models: Has Scaling Stalled?

Guangmi's recent Chinese-language podcast, his quarterly report on large language models discussing the "scaling paradigm shift" toward AGI (Artificial General Intelligence), is well worth a listen. It touches on many key topics in the AI industry landscape, with a distinctive perspective and style.

The term "paradigm shift" may sound a bit dramatic, but as a seasoned analyst, Guangmi uses it to describe the current turbulent landscape accurately. While the AI arms race among industry giants is still in full swing, real-world scalable applications of these models are struggling to materialize. The question of how to justify investments has become a significant pressure point, or perhaps even a looming bubble.

Let's revisit some AI basics. There are three main types of learning in LLMs (Large Language Models):

(i) supervised learning;
(ii) unsupervised learning (self-learning/pre-training); and
(iii) reinforcement learning (RL, self-play/post-training).

Ilya has emphasized the importance of RL in exploring new directions for LLMs. Guangmi's podcast highlights RL as the pathway to the paradigm shift in AGI through large models.

Historically, two key milestones in RL have stood out: AlphaGo/AlphaZero's defeat of top human Go players, which shocked the world, and RLHF (Reinforcement Learning from Human Feedback), which aligned models with human preferences and paved the way for ChatGPT's explosive growth.

Currently, discussions revolve around the potential of a new RL-driven ecosystem for large models (though there's no broad consensus—it's primarily a conversation within small Silicon Valley circles) and the emerging trends in the "arms race" of large models. Here’s the context:

1. Pre-training scaling seems to have hit a bottleneck, with GPT-5 still unreleased;
2. The overall momentum of the arms race remains unchanged among the major players (the billionaire clubs/giants);
3. Key tech figures are proposing new roadmaps or trying to construct new scaling laws to continue the AGI journey.

Guangmi closely monitors trends in Silicon Valley. His small team conducts in-depth research in the Bay Area and has established extensive contacts. Having chatted with them over coffee a couple of times, I’ve found them to be a dynamic, young team under his leadership—a small but sharp presence.

Guangmi’s thoughts are well-structured, and his breadth of knowledge and understanding of the larger context are impressive. This is no small feat, as the landscape of large models, both in terms of the models themselves and the industry, is often akin to the parable of the blind men and the elephant. Even top experts and business leaders struggle to assess the full picture. Just recently, Meta’s Zuckerberg responded to a question about whether the AI arms race would deliver the expected AGI returns, essentially saying: “No one really knows, but we can’t afford to miss out,” reflecting a typical FOMO (Fear Of Missing Out) mindset.

We’re currently in a delicate phase with little consensus. However, the few tech giants that have propelled Nvidia’s stock to astronomical levels won’t allow the arms race to slow anytime soon, as it is central to their tech and business dominance. OpenAI continues to raise funds, and Ilya, with his new company, recently secured more investment, all of which keeps the race heated.

At the same time, the obsession with scaling among tech elites and the mainstream AGI circles in Silicon Valley persists. The endless demand for resources driven by this scaling wave of large models means that only a small circle of tech insiders has the opportunity and resources to experiment, sense, and adjust the roadmap.

According to Guangmi, this so-called self-play RL scaling is currently a consensus only within a small circle of roughly 200 tech elites in Silicon Valley. If that is accurate, it is still a nascent trend, one that even the management circles have not yet truly grasped or aligned on.

It seems Guangmi adopts a “prophet” mentality at times, perhaps exaggerating this trend to alert his audience. He even suggests that if he were a large-model entrepreneur, he would focus 200% of resources on RL, betting on it as the future path to victory.

In reality, for most people, this advice is neither practical nor actionable—it’s likely aimed at tech giants or unicorns, though even for them, it may fall on deaf ears.

Reinforcement learning is inherently challenging. Even the open-source leader Meta LLaMA 3 has chosen to sidestep RLHF in post-training alignment. So, it's even less realistic to expect large-model teams to fully bet on RL as the core of a new ecosystem. Furthermore, this trend is, at best, a “subtle undercurrent” in Silicon Valley. We’ll likely have to wait until OpenAI’s “Strawberry” or the new version of Claude releases later this year to fully assess its impact.

It seems the first chapter of LLM scaling has indeed come to an end. The actionable items of the so-called second chapter are unlikely to come from lofty, exploratory scaling directions with an uncertain roadmap. Instead, the focus should be on finding market entry points, accelerating applications, and addressing genuine market needs (PMF, product-market fit), especially as the inference costs of top models like GPT-4o and Claude 3.5 become more affordable, and as multimodal capabilities (such as the recent breakthroughs in hyper-realistic full-duplex voice, plus progress in video) open up further application opportunities.

For the industry, the bottleneck in scaling large-model applications is the sword hanging over its future. This will determine whether the second chapter of the tech adoption curve ends with a soft landing and eventual recovery. As for the arms race, it’s best to leave that to Elon Musk, Zuckerberg, and the billionaire club to continue playing.

Reinforcement learning, as an extension of pre-training, belongs to the realm of “post-training.” When pre-training hits bottlenecks and diminishing returns, strengthening RL is a natural complement. In the simulation of human cognition, pre-training represents the accumulated knowledge of human civilization, while RL applies that knowledge in practice, learning from the environment. This overall approach to intelligent learning makes perfect sense and is the necessary direction for applying large models.
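To make the division of labor concrete: in post-training RL, the model proposes outputs, the environment (or a reward model) scores them, and the model is nudged toward higher-reward behavior, rather than imitating fixed labels as in pre-training. Below is a deliberately tiny, hypothetical sketch of that loop (a softmax "policy" over canned responses, a hard-coded reward, a REINFORCE-style update); it only illustrates the learn-from-feedback pattern, not any lab's actual recipe.

```python
import numpy as np

# Toy post-training loop in the spirit of RL: sample a response, receive external
# feedback (reward), push the policy toward rewarded behavior. Purely illustrative.
responses = ["helpful answer", "rambling answer", "refusal"]
theta = np.zeros(len(responses))            # policy logits (stand-in for model weights)
reward = {"helpful answer": 1.0, "rambling answer": 0.2, "refusal": 0.0}

rng = np.random.default_rng(0)
for _ in range(500):
    p = np.exp(theta - theta.max()); p /= p.sum()   # softmax policy
    i = rng.choice(len(responses), p=p)             # sample a response (exploration)
    r = reward[responses[i]]                        # feedback from the "environment"
    grad = -p; grad[i] += 1.0                       # gradient of log p(i) w.r.t. the logits
    theta += 0.1 * r * grad                         # REINFORCE update

print(responses[int(np.argmax(theta))])             # typically "helpful answer"
```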

My old friend Lu said: “It’s intuitive that RL is the path we must take because there isn’t enough supervised learning data anymore.”

Indeed, utilizing regenerated data to varying degrees has become common practice. It’s inevitable. Models can already generate data of higher quality than humans, and this will only improve. However, this is not the same as self-play's proactive exploration and data regeneration.

As Mr. Mao pointed out: “RL aligns with the cognitive processes of humans and epistemology. It’s essentially the process of receiving external feedback and being tested in practice. RL is active learning, while training is passive.”

What Guangmi is really proposing is to turn RL into a reusable paradigm: build some kind of RL dev toolkit and then scale RL up across different scenarios. Without a year or two of push and popularization by the big players, this so-called paradigm shift still lacks the necessary catalysts and concrete handles. The potential trend is nonetheless worth keeping in mind; it is best to remain cautiously optimistic and open-minded while watching how things unfold.

 

Related original:

大模型风云诡谲的下半场:scaling 失效?


广密大模型季报谈AGI范式大转移这篇播客,很值得一听。涉及很多大模型产业重要话题,视野和风格很独到。

“范式大转移”的说法太耸人,但风云诡谲,是当下的写照。那是因为大佬军备竞赛虽然依旧如火如荼,可应用落地却处于难产期,如何 justify 投资是一个巨大的拷问,或泡沫。

三大学习: 监督学习、非监督学习(自学习/预训练)、强化学习(RL,自主学习/self-play),伊利亚曾经专门强调后者作为探索大方向的重要性。广密这里特别强调它是正在到来的大模型AGI之道的范式转变。

此前,大家都知道强化学习主要是两个里程碑:一个是 alpha0 围棋完胜人类选手,震惊了世界;另一个是所谓RLHF(人类反馈强化学习),强化了与人类偏好的对齐,成就了ChatGPT的核爆。

现在谈的是大模型新生态可能性(并无广泛共识,只是硅谷小圈子在做、在议)以及大模型“军备竞赛”的新趋向。这个话题的背景如下:

1、 预训练 scaling (更大规模)似乎受困,GPT5 迟迟不出;

2、 军备竞赛的大格局和造势,大厂和大佬不要改变;

3、 技术大佬开始提出新路线图或试图构建新的 scaling law 继续AGI 的征程

广密在podcast里面,观察硅谷动向比较 closely,他的小团队不仅定期去湾区做深度调研,也建立了广泛的联系。在硅谷跟他们喝过两次咖啡聊天,一帮生龙活虎的小年轻在他的带领下,我的印象,是一个小而精干的独特存在。

这台节目的个人风格和视野也非常 unique,喜欢他说话的思路敏捷,有跳跃感,但张儿不散,有一种吸引人的表达力。主持人与他的交互也很丝滑,张弛有度。

听他们唠嗑吧,谈笑间大模型AGI的大趋势貌似尽收眼底。还是值得点赞的。

广密条理非常清晰,所涉及的知识面和大形势观非常广泛,却能present到自己的视角参照系,与LLM社区的思想趋势有较好的映射。这不容易,因为LLM这档子事,无论模型还是产业的 landscape,大多都是盲人摸象。很多大专家、商业大佬也都各有自己的三分地和视角,也很难全面评估形势。Meta 小扎刚前不久面对万卡竞赛能不能得到预期的AGI return的天问,回答说(大意):其实没人知道,但总不想万一错过的(典型的 FOMO心态)。

目前形势处于微妙期,其实还没有凝聚太多的共识。但是把英伟达送上天价的几个富可敌国的大佬/大厂,短期内却绝对不允许停止军备竞赛,这是他们科技商业争霸的游戏。这叫欲罢不能,节奏在他们手中。Open AI 不断融资,伊利亚自己也最近融资成功,这些都是这场竞赛持续热度的浪花。

与之相配合的是技术大佬和硅谷AGI主流技术圈对scaling的执着和痴迷。因为这次大模型 scaling 技术浪潮对于资源的无止境需求,真正能有机会实践、感知并做出调整改变路线图的技术人,也只能是一个很小的圈子。

据广密的信息,这个所谓 self-play RL 新生态趋势,目前是局限在硅谷技术大佬小圈子的共识,他提到大约不超过200人的圈子的。如果信息正确的话,一个在硅谷技术核心圈200人以内的某种共识和议论,说明还只是一个动向,甚至连管理圈还没真正 get it 和对齐。

感觉上,广密有一些“春江水暖鸭先知”/“语不惊人死不休”的心态(LOL),有意强调/夸张了这个趋势,警醒国人,甚至说,如果我是大模型创业家,我会200%资源聚焦 RL 方向,bet on it,因为这是未来赢家的选择,云云。

其实,客观说,对于多数人这个不实在,也无可操作性,最多是说给国内大厂玩家或六小龙听的吧,但其实也是白说。RL 本来就不好玩,连开源标杆 Meta Llamma 3 在最基本的 RLHF 方面都选择绕开来走,就更甭提提倡国内大模型公司全力 bet on 以强化学习作为新生态核心的愿景了。何况后者在硅谷最多也只是一种“潜流”,可能要等年底前OpenAI草莓以及Claude新版发布后,才能对这个所谓新生态的影响,看得清楚一些吧。

这个苗头可以 keep in mind,但上半场确实似乎结束了。真正可以在所谓的下半场作为 action items 的,其实不是这种高大上、带有很强探索性的大模型 scaling 方向的尚未确定的 roadmap,更多是趁着 GPT4o/Claude3.5级别的通用模型的推理成本越来越亲民化、趁着LLM供应商多模态功能在进一步推广和完善(例如超拟人全双工语音的最新突破和工具赋能就会大大增加应用层面的机会,还有视频的进展等), 加快找市场切入点(PMF),专注应用场景真正需求的解决。

对于产业而言,当前大模型规模化应用的困局才是悬在大模型产业头上的利剑,决定了这下半场在 tech adoption curve 下行能不能软着陆和最终平缓回升。至于军备竞赛,让马斯克、小扎等首富俱乐部继续玩继续high就好。

作为“预训练”的延深,强化学习属于“后训练”,在前者遇到瓶颈和 diminishing returns的时候,加强后者是自然的补足。从AI对人类认知的模拟来说,前者是继承人类文明的知识和科技积淀,后者是把这些知识真正用到实处,在环境中学习。这个智能学习的总体思路 makes perfect sense,也是大模型应用必须要走的方向。

所以老友吕兄说:“直觉上RL是必须要走的路,因为supervised learning的数据没有那么多了。”

没错,不同程度利用再生数据,其实已经是日常 practice 了,也不再有以前的“心理障碍”,是一个必然。总体而言,模型就是比人能够更高质量产生数据,而且会越来越好。但这还不是这里说的self-play的主动探索和数据再生。

毛老说的也不错:“RL 与人类的认知过程相符,与认识论一致。实质上就是接收外界反馈,接受实践检验的过程。RL 是主动学习,而训练是被动的。”

广密现在是说,需要研究测把 RL 范式化,提供某种 RL dev toolkit,然后有在各种场景去做 scale up RL 的路线。这个所谓“范式大转移”,没有1-2年的大厂/大佬的推动普及,没有抓手。持谨慎乐观或怀疑的open 心态,静观其变吧。

Professor Ma's long paper out

Here is the link to Professor Ma Yi’s presentation from the Shenzhen Entrepreneurship Forum, in Chinese, recommended.

Professor Ma is a compelling speaker, and his talk is definitely worth listening to. His 100-plus-page paper on the white-box transformer has just been released (Yi Ma's white-box transformer paper is available here). Unfortunately, I haven't had time to dig into it yet; we'll have to wait until more people have accepted or verified it before delving deeper.

His current claims revolve around using an extremely sparse approach to force transparency in transformers, with results reportedly on par with BERT and GPT-2 on many benchmarks. This does not mean he can catch up with GPT-3 or later models anytime soon. To be fair, it is not a level playing field: as an academic, he lacks the resources to wage an arms race against mainstream AI. What he does believe is that he has opened a door, a path toward explainable AI for large models.

Honestly, I've always had some lingering doubts about Ilya's theoretical explanation of LLMs as shortest-program compression (from his Berkeley talk). From an ultimate theoretical perspective, where lossless compression is the ideal, the idea of continually scaling training and deepening and lengthening learning makes sense, since it pushes the model toward becoming the smallest possible program for universal tasks. Ilya's theory may hold up in that respect, at least in theory or as an end state. But under any real-world constraint (a budget, a methodological limitation), it is hard to call a model derived purely through gradient descent the "shortest program": these models look like gigantic beasts with huge circuits inside, and intuitively should not be considered "short" or "small".

Models with hundreds of billions or even trillions of parameters are massive monstrosities, succeeding mainly through sheer size rather than through regularity or elegance. Emphasizing how impressive their compression ratios are, or how well they approach lossless compression, may help explain generalization and emergent abilities in sequence learning from a theoretical standpoint. But in practice, any model at any given time is far from being the "shortest program."

This highlights an unavoidable gap between theory and practice. Ilya essentially hedged practice with theory along a future time axis, but our immediate reality does not seem to match it. It is like a clumsy wrestler trying to brand himself as a sleek, slender fashion model; to most eyes, it simply is not a fit.

Instinctively, LLMs feel full of rote memorization with significant redundancy. Under real-world conditions, achieving extreme or lossless compression seems impossible.

On the other hand, Professor Ma's sparsity approach almost feels "over the top." Directly forcing Q, K, and V to share the same weights seems crude and simplistic, yet the model still trains successfully. This shows how much slack there is inside transformers: no matter what restrictions or pruning are applied, the model still finds a path through. In this sense, Professor Ma's pursuit of the "shortest program" is more literal and direct: the program is so short that even a human can interpret the process (hence the explainability).

Yet the gap between these two extremes is still mind-boggling: gigantic black-box models on one side, extreme simplification into white-box models on the other. That both approaches work is itself shocking.

Speaking of simplicity and explainability, here’s an interesting anecdote in AI history: Back in the day, during the era of symbolic MT, one of the earliest deployed systems (Siemens' METAL) for English-German translation used only eight symbolic features (such as human, animal, etc.). The rules were simple, transparent, and easy to explain. This shows that extreme simplicity and rule-based transparency can work in some rough application scenarios (where English and German are linguistically close, making translation easier).

Later, we MT practitioners expanded the number of features to the thousands, trying to cover more of the long tail. Even then it was not perfect. At the time, we thought that with enough effort we could match the quality of statistical MT. Now we know that even if symbolic MT had caught up with statistical MT, it would still be far from competing with neural MT.

So, could we have kept refining the features? It was not that we did not want to keep extending symbolic features (similar to one-hot encoding, but with an internal ontology/taxonomy structure); we wanted to go beyond thousands to tens of thousands of features. In reality, a few thousand features were already at the limit of what human experts could understand (AI explainability), manage, and debug. Expanding further would have been unmanageable.

Meanwhile, how many parameters do mainstream Transformer networks have? The space and granularity they represent are on a completely different scale. Given this vast difference in scale, it is natural to doubt any effort to bridge the gap for the sake of AI explainability. How could that even be possible?

That’s why I’ve always felt that explainability in large models is an elusive goal. But Professor Ma is telling the world that they’ve achieved it.

 

 

Relevant link:

Professor Ma Claims to Have Fully Unveiled the Mysteries of Neural Networks

What did Ilya see? -- secret behind success of LLMs

马毅教授的演讲,值得一听

创业邦深圳会议马毅教授的演讲链接在此:https://mp.weixin.qq.com/s/ibxGO_A7H-akpbwf2R2mGw

马教授还是很能讲的,他上面的演讲,很值得听。他的100多页论文也已经放出来了,可惜没时间钻研了,等以后更多人接受或验证后再说。

他目前所做出的 claims,是说用那种极度稀疏化的方法逼迫 transformer 透明化,结果也在多方面匹敌了BERT 和 GPT2。但并不说明短期他有办法赶上GPT3以上。话说回来,那也不公平。他作为教授没有资源去以军备竞赛的方式与AI产业主流打擂台。只是说,从路线上说,他觉得自己打开了一扇门,一条可以通向可解释AI的大模型大门。还是应该赞佩这样的反潮流的教授的。

其实,我也一直隐隐约约对伊利亚说的最短程序压缩论,持有怀疑:从终极目的(理论上以无损压缩作为理想目标)来看,不断加大训练、加深加长学习,结果就是朝着让模型真正成为最小程序,伊利亚理论也许没错。但在任何一个实际条件约束下(例如预算约束、方法论约束),这种纯粹靠 gradiant descent “凑出来”的模型/路径,很难说是最小 program,因为模型看上去就是个庞然大物,谈何“最小”。

千亿万亿参数的超大模型本来就是以大取胜,而不是以精简和规则见长的怪兽(gigantic monster),非要强调自己的压缩率厉害,无损压缩做得好,虽然有从理论上方便说明序列学习达成的通用性、泛化现象以及“涌现”能力,但实践中,在任意一个特定时间条件下的模型,都远远不是“最小程序”。

这是理论和实践躲不开的一种矛盾。在伊利亚那里,实际上他是以未来时间轴,用理论对实践做了对冲。我们的真实感觉并非如此,不敢这么说。就好比一个摔跤选手,都那么笨重了,还非要标榜自己性感、苗条?

直觉上,LLM 里面充满了死记硬背和信息冗余的,在现实条件下其实不可能做到极度/无损的压缩。

但另一方面,马教授也太奇了,他的稀疏化直觉上做得“过分”,QKV直接拉平,看上去有点简单粗暴,但居然也最终能训练出来。可见,transformer 的肚子里的操作空间还是很大的,你给它各种限制,动不动就剪枝(化零),也不用担心它走不出来。这种意义上,马教授追求的才是真正的“最短程序”,短到了连“豆腐脑”的人类都可以看懂路径(hence 可解释性)。

疑问还是这两个极端差距太大。一边庞然大物,一边无限精简,二者都能走通,也是震撼了。

谈到精简可解释,谈个掌故。老老年做 symbolic MT,一个著名的早期的实用系统(西门子的 METAL)做英语德语的翻译,整个系统只用了8个 symbolic features(例如人、动物等),规则简单而可解释,系统也一样上线实用了。可见极度精简和规则化,做到完全透明和人类可解释,在粗线条的应用场景(英语和德语距离较近,翻译难度低),有时候也是管用的。

我们后来把 8 个 features 扩展到千数量级,才擦了长尾的屁股。但也没擦干净。当时觉得,也许认真做可以对垒统计MT的品质(与董振东老师谈过,我们都觉得可以在翻译上最终用符号打败统计的,只是需要时间磨细活),但现在知道即便匹敌了统计MT,也远远不能与神经MT比高下。

那就把 features 往细做,成不?不是因为我们不想继续把 symbolic features (类似于 one hot encoding,但人为在 features 内部强加了类似于 HowNet 的 ontology/taxonomy 的结构性),从千这个量级进一步提升到万的量级。实际情况是,千几乎已经达到专家人脑的极限了,再扩大 features 的范围,我们就无法掌控和调试了。

可是,神经里面有多少 params 啊,其所能反映的 representation 的空间和细密度,与千量级的 symbolic features,尺度完全无法比拟。二者表征的尺度如此悬殊,对拉近二者距离的任何努力,我们天然会产生怀疑:这怎么可能做到呢。

所以一直就觉得大模型可解释性是一个可望不可及的目标。马教授告诉世人,他们做到了。

相关链接:

马毅教授称,已经揭开完全揭开神经网络的面纱

An NLP Veteran's AIGC Journey

Today I thought I would write a short retrospective, in a playful spirit, of my AIGC journey over the past two years and of the inner journey and impressions of an NLP veteran along the way.

Before the large-model explosion, what fascinated me most were the txt2img (text-to-image) models that already existed at the time. I tried many tools; "Xiao Ya" (Maria) was a product of that stage. Beyond characters, I also experimented with various painting styles and shared the results many times in chat groups and on my blog. Eventually the novelty wore off and I stopped playing with it much.

Then I became interested in digital humans: 2D talking photos, 2.5D virtual anchors with poses, 3D dancing, and so on. Since Qimiao Yuan (奇妙元) is our own product, I could play without limits, and as its "product experience officer" I went wild with it for a while.

Unfortunately, the golden window for digital humans passed in a flash. Before the field could really take off, it was already flooded with offerings of wildly mixed quality, ground down by market competition.

Next came hyper-realistic voice-over and cross-lingual breakthroughs, including the recent "full-duplex" breakthrough. The leading models began showing off speech that sounds like an ordinary person, no longer the stiff announcer tone. Our own flagship AIGC product MoYin Workshop (魔音工坊) caught this wave of end-to-end large models built on speech tokens and achieved hyper-realistic voices as well, roughly half a year ago. This matters a great deal, because speech is the most natural interface for all copilot-style LLM applications and a necessary enabler for digital humans and short videos; in terms of sheer playfulness, though, speech is less fun than music generation.

Then Suno arrived with a splash. I was hooked for months and fulfilled my dream of being a "musician" myself. That, too, has faded; not because it isn't good, but because I no longer have time to play.

My time was taken over by "China's Sora": Kuaishou's Keling video generation model. I have played wildly with video generation up to today, using it to recreate childhood memories, freeze and replay the highlight moments of my life, and conjure scenes beyond ordinary life; the peak excitement has passed as well. Through all this experimentation, including stress tests of three-minute continuous video generation and various prompt-engineering explorations, I now see the strengths and shortcomings of current visual models fairly clearly.

One important application form of visual models is "one-click video creation", also one of our own products, called YuanChuang Island (元创岛). It is still rough and rudimentary, but it truly delivers "foolproof" creation: zero barrier to entry, anyone can use it to generate a video. There are clear signs of real use cases and of taking off.

This obsession with multimodal experiences would seem out of character for someone who spent a whole career doing text-only NLP. But behind it lies the grand backdrop of large models: after LLMs flattened NLP, they moved on without pause to flatten multimodality. This universality makes everything feel connected by a common thread, a natural confluence of technologies. That is the view from the research side of the journey.

From the angle where the humanities meet technology, the innate pursuit of art and the humanities in "old liberal-arts types" like me was never extinguished by decades of "coal mining" as a coder in industry, and its application today is another natural convergence. It is a bit like what Steve Jobs used to say: what he pursued were technology products infused with the humanities, engineering combined with aesthetic taste, while he mocked Microsoft's products as crude, with "no taste".

Looking back, the journey has been fascinating: research and applications alike have been converging as if by fate. How fortunate we are to witness, experience, and throw ourselves into this converging tide, even though the convergence also means overturning, crushing, and negating our own past, abandoning many old "signature skills", such as the ox-carving mastery of symbolic parsing once honed to a world-class level. Lifelong learning is what keeps me from falling too far behind. But above all it takes a certain spirit, especially passion: driven by passion, one never tires of it.

The next point of passion should be to-B scenarios, because the big hopes for eventual applications most likely lie in vertical domains. To-C is fiercely competitive, but its roadmap and landscape, including what can be done with AIGC, are already largely clear. To-B is still struggling in the mud; even the directions are seen as through fog, flickering in and out, yet there are masters to be found. Dr. Bai Shuo, for instance, seems to be stroking his beard and smiling, seated on the lotus pond of financial trading, backed by his to-B accumulation.

Personally, among vertical tracks my favorite is education, followed by law; both lie along the path of large models' knowledge capabilities: easy to be eventually steamrolled by general-purpose models, yet able to align with scenarios and deliver value right away. Finance is too tedious, and the waters run deeper. Water, power, automotive and the like are highly specialized and feel dry to outsiders. Healthcare and psychology are very tempting, though harder to enter than education or law. Let fate lead me where it will.

Decoupling to Resolve: Issue of Character Consistency in Video Generation

I’ve now become the go-to expert for AIGC (AI-generated content) "custom services" among my old friends and classmates, just for fun. Below are nostalgic videos made from old photos that two of my classmates asked me to create.

Whenever I find the time, I’m more than happy to provide this kind of emotional value for friends and family because it’s truly satisfying to see their reactions of surprise.

The pianist is now a world-class piano master, frequently touring and performing in Europe, America, and China. These are precious old photos of him practicing and performing with our mutual friend, Brother Sun, in Philadelphia back in the early days.

Dr. Bai Shuo, a seasoned expert in NLP and a multi-talented musician, commented humorously: "Anyone who can actually play the Meditation named here would tell you the bowing and fingering are all wrong."

Another old friend also left feedback noting that the visual model doesn’t understand music: "This needs improvement! It's obvious that the model was created by someone who doesn’t know how to play the violin or piano. The bowing and piano accompaniment are off. The first note has a two-and-a-half beat long tone, which should be played with a long bow. Additionally, the pianist’s right foot should never be raised or shaking like that—it should be on the sustain pedal.”

LOL

Even though the music's name Meditation was clearly specified in my prompt during generation, there is no model, in the foreseeable future, that can truly align the understanding of music with the intricate details of bodily movements during performance. Perhaps this can be reserved as one of the ultimate challenges for large models aiming for AGI, because theoretically, if enough alignment data of musical performance is available, based on the compression theory of "joint training", it’s possible to aim at perfect alignment across different modalities.

If simulating the objective world is the ultimate goal of visual models, then the current generation of visual models is at the level of “playing the piano to a cow” or “playing music to a tone-deaf audience”—completely unable to withstand scrutiny from musicians. For example, as someone with little musical knowledge, when I watch the nostalgic performance videos above, I wouldn’t notice the flaws as an expert would; instead, I find them vivid and emotionally engaging.

Of course, the standards of musicians might as well just be a "pseudo-demand" or a pseudo-goal (even if the visuals satisfy the picky “expert eye,” so what? Will it sell well?). It might not be worth the effort to pursue this. However, in theory, an ideal AGI should be capable of meeting these expert-level demands.

This is the challenge of musical performance alignment.  Another challenge to Sora-like video generation models is character consistency in videos.

Achieving facial consistency in generative visual models is extremely difficult. Don't expect this issue to be resolved by video generation models alone in the short term, and especially not through autoregressive methods alone.

Human eyes are extremely discerning when it comes to faces, especially the familiar faces of friends and family: you can immediately tell when a character's appearance is off. For example, while playing with old photos recently, I used the Keling model (a top video generation model in China) to generate a video of myself. At the 5-second mark it still looked passable, but by 10 seconds it no longer resembled me.

In the second 10-second video, just a slight turn of the head, and it’s no longer me—it looks more like my brother. How can a model handle such fine details? Especially when the starting image for video generation is not even a straightforward frontal shot, making the character information incomplete—how could it not go off track?

While the videos I've made for friends and family using Keling during its public testing phase have generally been met with passionate surprise and amazement, most of them suffer from this character-consistency issue, which remains a regret.

The current one-click video generation products on the market (including our own recently launched YuanChuang Island) tend to rely mainly on anime or manga styles. This sidesteps user scrutiny, since those styles lack distinct individual characteristics: as long as attire stays consistent, gender is not mixed up, and age and ethnicity roughly match, most people will accept the result. Today's one-click videos are generally rough, and their entertainment value lies mostly in the story rather than in character portrayal of the kind you would expect from a Hollywood blockbuster. But as this path moves upmarket, it will inevitably run into the challenge of maintaining the consistency of digital IP actors and their roles.

My colleague Lu remarked: "The consistency issue may require cross-checking across multiple video angles, which more or less touches on the hard question of whether explicit modeling is needed."

Indeed, some form of cross-checking is required, not just monotonic correction over time/sequence—that is indeed the key. There’s a need to decouple or separate the character's image from the storyline, rather than generating in a linear, one-way path. While sequence learning has indeed produced miracles in LLMs, sequence generation inherently has limitations, including random deviations over time. Although it's not as extreme as LeCun's criticism—where he says GPT's error accumulation is a tiny discrepancy that leads to a significant miss—his claim isn't entirely accurate because GPT's autoregressive operation also corrects and adjusts its course at every step in the context. Nevertheless, when it comes to fine-grained consistency, random deviations are almost impossible to handle, even with corrective mechanisms in place.

Hence decoupling, decoupling, decoupling! Decoupling can solve the problem. The world isn't limited to sequences. Beyond sequences and time, there is a constant abstraction (i.e., the character image, or IP) that can be utilized. This is becoming increasingly clear. Take, for example, the digital IP character Maria (Xiao Ya) that I created using AIGC txt2img more than two years ago:

Unless they’re fans, perhaps my numerous Maria videos might cause aesthetic fatigue—someone even called her “Dr. Li's fairy” (LOL). But indeed, there are fans; several of my old classmates are among them.

Why? Because she is an IP, and she has been decoupled.

 

Related Links (original posts in Chinese):

视觉模型生成的极限对齐

解耦才能解套:再谈视频中的人物一致性问题

 

解耦才能解套:再谈视频中的人物一致性问题

前两天说过,对于生成式视觉大模型,人脸一致性是个非常难缠的东西,不要指望短期靠视频生成大模型本身来解决,尤其是不能指望自回归解决这个问题。

人眼太挑剔了,尤其是亲友和熟人,你会一眼看出人物走形了。譬如这几天玩老照片,我用头部视觉模型可灵5秒生成我自己,还过得去,到了10秒就不是我了。

10秒视频:

一转脸就不是我了,倒更像我哥。这种细粒度,模型怎么能搞定。尤其是,如果图生视频的起点图就不是正面照,character 信息本就不全,怎么可能不走偏。所以,我在可灵公测以来为亲友同学做的视频虽然普遍感觉惊喜或震撼,但大多存在这个人物变形的问题,成为一个遗憾。

现在市面上做的一键成片产品(包括我们的元创岛),其所以用二次元或其他夸张风格为主,是为了避免用户的挑剔,因为那些形象缺乏鲜明的个性,不是真正的 individual IP,只要保持穿戴一致性,男女不要错位,年龄和种族不要相左,一般人也就接受了。目前的一键成片普遍比较粗线条,娱乐价值更多是为视频里的故事,而不是好莱坞大片那样的角色形象刻画。但这条路往上走,就躲不开这种 digital IP 的演员角色定位及其一致性问题。

吕兄说:一致性问题可能需要靠多角度视频的cross-checking, 这里面多多少少要涉及到是不是要建模的硬核问题。

是的,要某种 cross-checking,而不是时间流单调矫正,这是key。需要解耦/剥离故事线上的人物形象,不能生成一条道走到黑。序列出过奇迹,但序列生成有随着时间出现随机偏差的局限,虽然不是 LeCun 批评的那样极端:他说gpt错误积累是差之毫厘失之千里;他的说法不正确,因为gpt的“自回归”推理方式也在每一步context自回归中不断纠错、矫正航向。尽管如此,对于细线条一致性,随机偏差哪怕有了矫正机制,也是基本搞不定的。

因此,解耦、解耦、解耦。解耦就可以解套。世界上也不是只有序列。跳出序列和时间,还有个恒定抽象(即character形象)可以利用。这一点已经越来越清晰了。以我制作的数字人IP形象小雅/Maria为例:

除非粉丝,也许我的众多小雅视频会引起审美疲劳吧,有人称她为“立委的妖精”(LOL)。但确实有粉丝,老同学中好几位人物就粉她。

为啥,因为她是IP,解耦了。

Related Links:

视觉模型生成的极限对齐

马毅教授称,已经揭开完全揭开神经网络的面纱


马教授的白盒transformer论文在:https://ma-lab-berkeley.github.io/CRATE/?continueFlag=680deb7516c156566f8eb73fdcc896ca

马毅教授大名鼎鼎,特立独行,一方旗帜人物,天下无人不识君。值得关注的是他最近对主流大模型的批评以及他自己工作的宣示。

前不久深圳会议请他来,他把主流大模型、伊利亚,以及k氏复杂性理论,批为中学生的水平,说他们根本不懂理论。而他自称理论实践双突破,说的就是他带领团队做的这个白箱 Transformer,不仅揭开了大模型的神秘面纱,而且是工程可实现的替代方案。

说句实话,把话说到这份上的人,多半是真牛人,有自带的底气和傲视群雄的霸气。对于主流的批判,老美有个杨立昆(他说GPT不如他家一条狗,是死路,他的世界模型才是替代方案),中华有个马教授,世界才显得不那么寂寞。也确实希望他们somehow能弥补当前主流大模型的“慢思维短板”,推动AI的整体进步。有时间还是应该慢慢研究他的学术和实践,但也许更多是要等待时间和同行对他们工作的检验和复现。

深圳会上就露了个脸,自己讲完和批完,立马走人。估计是真忙。

论文100多页,说过几天就放出来。看现在的 outline,重点在,quote:

Overall, CRATE is similar to a transformer, with two differences:

in each attention head, the Q,K, and V weight matrices are weight-tied, i.e., set to be equal;

and the nonlinearity following each attention layer is no longer a multi-layer perceptron (MLP), but rather a more structured operator (ISTA) with sparse outputs.

咱们了解一下,ISTA(Iterative Soft-Thresholding Algorithm,迭代软阈值算法),是一种用于解决稀疏优化问题的算法,在机器学习领域有广泛应用。在CRATE架构中, ISTA被用来替代传统Transformer中的多层感知器(MLP),还记得前不久的 KAN 的创新也是旨在平替 MLP。都是在 Transformer 里面开刀。

我浅薄的理解,ISTA 与 KAN for Science/Physics 的思路是一致的,就是要经过某种正则化或剪枝,最终拟合成稀疏化路径,从而获得可解释性。

工作原理: ISTA通过迭代的方式逐步接近问题的最优解。每次迭代包括两个步骤: a) 梯度下降步骤,这与主流同;b) 软阈值操作。增加这个操作,是要在两个目标之间找平衡:

a) 使模型尽可能准确;b) 使模型尽可能稀疏,即简单(因为人很难理解一团乱麻)。

软阈值操作鼓励内部元素变为0,从而产生稀疏的输出,增加可解释性。权重绑定的注意力机制和ISTA(迭代软阈值算法)促进了模型对输入数据结构的更深入理解。这更像人类的结构化解析过程,抓大放小,正则化、规整化。

马教授说的是,它们经过上述的两个改造,结果在监督学习中(后来他们也成功地用到了自监督学习),就自然学到了人类的这种结构化、稀疏化所带来的可解释性。

拿图像识别来说,研究发现某些注意力头对应于动物的不同部位。更引人注目的是,这种对应关系在不同动物甚至不同动物类别之间保持一致。例如,专注于"头部"的注意力头在处理不同种类的动物时,始终关注其头部区域。这种一致性表明CRATE已经学习到了跨类别的通用视觉特征表示。

但做LLM可解释性研究的人早就发现,在MLP网络的尽头,各种结构化(例如 头、脚)的部件也都被注意力抓到了,否则不能解释LLM表现出来的泛化(或压缩)能力。困难的是在网络的前期,那些个注意力比较混杂,主流的MLP搞不清都在注意什么。隐隐约约是在注意像素(点)、线等基础元素之间的关系。

可解释AI的基本思路都是一样的,就是要把一团麻一样的黑箱多层网络内部数据拟合中凑出来的路径,改造成强加了种种限制条件和剪枝、化零的“结构化”路径。

正常人,谁不愿意模型可解释?所以这种稀疏精简的路数/算法一定也是有极大的挑战,才 so far 无法与那种黑箱子瞎凑的方法比试。

马教授觉得自己很牛的底气是因为他在最近半年一年,开始用可解释的上述白箱方法,训练出了可以匹敌传统 transformer 的成绩。他在深圳会上说,此前他有信心这才是正道,但在没有拿到结果前,还是必须谨慎。现如今,他觉得他的跨国团队用这一套在各方面所做的实现,已经让他满意到可以向全世界宣告,他找到了理论和实践的突破口,找到了transformer白箱化的正确方法,可望引发范式转变式的深度学习的突破。这令他很激动,也很霸气。所以他不再满足于大学教授的理论建树,而是觉得应该下场了。马教授自己成立了一家公司要在工程上做这个推进。这次来创业邦深圳主题宣讲,据他说,是他第一次以这个新公司的名义来向社会宣布这个挑战主流的工程落地的方向性大项目。

不过,凭着多年经验和直觉说一句这件事的挑战性(或可疑之处):人类的可解释性建立在一个非常小的高度精简的 finite set 基础之上。如果拿 symbolic features 来说,超过千这个量级的 feature system,对于人就是不可解了。贵为上帝选民,我们人类的“豆腐脑”真地很局限。可是另一边看 transformer 里面的 parameters 以及 attention heads 的KQV数量,完全不具有可比性,天上地下。让大变小在这样的尺度下去归约,感觉上是不可思议的。

KAN for Science 之所以成功,是因为他们瞄准的 target 极为狭窄,只是 science 中的某些已有的解析公式,或者未知的但局限在少数参数的潜在公式。有了这样的目标去做剪枝,中间加上科学家的人为干涉或反馈,KAN 声称是做到了可解释性。

Anyway,马教授似乎是成竹在胸,咱们还是静观他的法术/魔术。

Related Links:

What did Ilya see? -- secret behind success of LLMs

 

 

Professor Ma Claims to Have Fully Unveiled the Mysteries of Neural Networks

Professor Yi Ma’s white-box transformer paper is available here.

Professor Ma is a prominent figure, renowned for his distinctive style and leadership in the field. His name is widely recognized and respected. Of particular interest recently are his critiques of mainstream large models and the bold claims he has made about his own work (see his post in Chinese below).

Recently, at a conference in Shenzhen (which I attended with my own talk too), Professor Ma sharply criticized mainstream large models, Ilya, and Kolmogorov complexity theory, dismissing them as being on the level of high school students and claiming that they lack a true understanding of theoretical concepts. He asserted that he has achieved breakthroughs in both theory and practice, particularly with the white-box Transformer developed by his team. According to him, this model not only demystifies the complexity of large models but also offers an engineering-feasible alternative.

When someone speaks with such confidence, it usually signals genuine expertise and a commanding presence. Just as Yann LeCun in the U.S. criticizes GPT as inferior to his dog and a dead end, proposing his world model as the alternative, China has Professor Ma. Their critiques make the global discourse feel a little less lonely. There is real hope that their work might address the "slow thinking" and interpretability shortcomings of today's mainstream large models and contribute to the overall advancement of AI. Professor Ma's academic and practical work deserves careful study, though we may have to wait for time and peer review to test and validate the findings.

At the Shenzhen conference, after delivering his talk and sharp critiques, Professor Ma left immediately, likely due to his busy schedule.

The paper is over 100 pages long and is said to be released in a few days. Based on the current outline, the key points are as follows:

Overall, CRATE is similar to a transformer, with two differences:

- In each attention head, the Q, K, and V weight matrices are tied, i.e., set to be equal.
- The nonlinearity following each attention layer is no longer a multi-layer perceptron (MLP) but rather a more structured operator (ISTA) with sparse outputs.

Let's examine ISTA (Iterative Soft-Thresholding Algorithm), a widely used algorithm for sparse optimization problems in machine learning. In the CRATE architecture, ISTA replaces the traditional MLP in Transformers. Not long ago, KAN likewise introduced an innovation aimed at replacing the MLP; both approaches amount to surgery inside the Transformer architecture.

In my understanding, ISTA and KAN (for Science/Physics) share a common goal: through regularization or pruning, they ultimately fit a sparse path, thus achieving interpretability.

How it works

ISTA iteratively approaches the optimal solution of a problem. Each iteration involves two steps: a) a gradient descent step, as in mainstream methods; and b) a soft-thresholding operation. The latter is added to balance two objectives:

a) Maximizing model accuracy;
b) Achieving model sparsity, i.e., simplicity (as overly complex models are difficult for humans to interpret).

The soft-thresholding operation encourages internal elements to become zero, resulting in sparse outputs and increased interpretability. The weight-tied attention mechanism, combined with ISTA, promotes a deeper understanding of the input data structure, resembling a human-like structured analysis process that prioritizes key elements while regularizing the data.
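To make the two modifications concrete, here is a minimal numpy sketch of a weight-tied attention head followed by an ISTA-style sparsifying step. This is my own illustrative reading of the outline quoted above; the shapes, the lasso-style objective, the dictionary, and all step sizes are assumptions, not the CRATE paper's actual formulation.

```python
import numpy as np

def soft_threshold(x, lam):
    # Soft-thresholding: shrink toward zero and zero out small entries (sparsity).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista_step(z, D, x, step=0.05, lam=0.5):
    # One ISTA iteration for min_z ||x - D z||^2 + lam*||z||_1:
    # (a) gradient step on the reconstruction error, (b) soft-thresholding.
    grad = D.T @ (D @ z - x)
    return soft_threshold(z - step * grad, step * lam)

def tied_attention(X, W):
    # Weight-tied head: Q, K and V share one projection matrix W
    # (the first CRATE modification: Q, K, V weight matrices set equal).
    Q = K = V = X @ W
    scores = Q @ K.T / np.sqrt(W.shape[1])
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V

# Tiny demo: attention output, then a few sparsifying ISTA iterations
# in place of an MLP (the second CRATE modification).
rng = np.random.default_rng(0)
X, W = rng.normal(size=(4, 8)), rng.normal(size=(8, 8)) / np.sqrt(8)
h = tied_attention(X, W)
D = rng.normal(size=(8, 16)) / np.sqrt(8)   # assumed dictionary for the sparse code
z = np.zeros((16, 4))                       # one sparse code per token (columns)
for _ in range(50):
    z = ista_step(z, D, h.T)
print("nonzero fraction of sparse code:", float(np.mean(z != 0)))
```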

Professor Ma claims that these two modifications naturally lead the model to learn the interpretability associated with human-like structuring and sparsity during supervised learning (and later as claimed successfully applied to self-supervised learning too).

For example, in image recognition, it was observed that certain attention heads correspond to different parts of animals. What's more remarkable is that this correspondence remains consistent across different animals and even different categories of animals. For instance, an attention head focused on the "head" consistently pays attention to the head area when processing different kinds of animals. This consistency suggests that CRATE has learned a general representation of visual features across categories.

However, those studying LLM interpretability discovered long ago that toward the end of the MLP layers, various structured components (such as heads and feet) are also captured by attention; otherwise it would be hard to explain the generalization (or compression) abilities that LLMs exhibit. The difficulty lies in the early layers, where attention is more mixed and mainstream researchers struggle to pin down what the attention heads are focusing on; they seem to be vaguely attending to relations among basic elements such as pixels, dots, and lines.

The core idea behind explainable AI is always the same: transform the tangled, black-box paths that a multi-layer network fits internally into structured paths shaped by explicit constraints and pruning, leading to a sparse representation.

Who wouldn't want an interpretable model? Achieving sparsity and simplicity is extremely challenging, however, which is why these approaches have so far struggled to compete with black-box methods that simply fit whatever works.

Professor Ma's confidence stems from the fact that, over the past six months to a year, he has begun training models with the explainable white-box method described above and achieved results comparable to traditional transformers. At the Shenzhen conference he said that while he had always believed this was the right path, he remained cautious until he had results. Now he believes his multinational team's implementations have satisfied him enough to announce to the world that he has found a breakthrough in both theory and practice, the correct way to white-box transformers, one that could trigger a paradigm-shift-level breakthrough in deep learning. This has made him both excited and assertive. He is therefore no longer content with academic contributions alone and feels compelled to take action in industry as well. Professor Ma has recently founded a company to push this work forward at the engineering level. At Shenzhen he announced this directionally significant project challenging the mainstream, for the first time under the banner of his new company.

However, based on my years of NLP experience and intuition, I must point out a challenge (or a potential weak point): human interpretability rests on a very small, highly condensed finite set. With symbolic features, a feature system beyond roughly a thousand elements already becomes incomprehensible to humans. The number of parameters in a transformer, and the number of Q/K/V projections across its attention heads, are on a completely different scale. Reducing complexity at that scale down to something humans can follow seems almost unimaginable.

KAN for Science succeeded because their target was extremely narrow—certain existing symbolic formulas in physics or potential formulas limited to a few parameters. With such a goal, pruning, along with scientist intervention or feedback, allowed KAN to claim interpretability.

Regardless, Professor Ma seems confident, so we would like to observe how his methods and results evolve and will, or will not, be accepted.

 

 

Related Links:

What did Ilya see? -- secret behind success of LLMs

 

Notes from the Cyzone (创业邦) Shenzhen Conference Talk: Driving the Commercial Adoption of AIGC, Mobvoi's (出门问问) Practice of "Product-Model Integration"

On August 22, the 2024 AGI Business Trends Conference was successfully held in Shenzhen. The event was hosted by Cyzone, with the Nanshan District Science and Technology Innovation Bureau, the Xili Lake International Science and Education City Integrated Innovation Center, and Shenzhen's "Moli Camp" AI ecosystem community as supporting organizations; Range Rover Sport was the designated vehicle of the conference and Sohu the strategic media partner.

Under the theme "AI Shapes the Future Market", the conference invited industry experts, listed companies, unicorns, innovative startups, and investment institutions in the AI field to share and discuss the latest hot topics in technology, venture capital, and governance, and to hold in-depth exchanges on product commercialization and AI-plus-industry solutions, jointly exploring broader opportunities for industrial cooperation and innovation.

At the conference, Mobvoi VP Li Wei gave a talk titled "Profit Is King: AIGC Moving from Technical Innovation to Commercial Adoption". Key points:

  1. The arrival of large models is like crossing over the mountain of language: it unified the previously separate tracks and established a universal language capability reminiscent of the biblical Tower of Babel.
  2. Historically, technological innovation tends to follow the so-called technology adoption curve: at first people flock to major innovations and breakthroughs, but when those innovations hit bottlenecks in commercialization, profitability, and real-world application, a period of decline follows.
  3. In the era of general-purpose AI, a single model can handle all kinds of tasks, which compresses the room for innovation in many niche areas. Previously every niche track had a chance to produce a super app; now that possibility is much diminished.

The following is the content of the talk, edited by Cyzone:

I'm very glad to have the opportunity to share some of our work on the AIGC track. My name is Li Wei, and I'm a VP at Mobvoi. The title of my talk today is "Profit Is King: AIGC Moving from Technical Innovation to Commercial Adoption".

In fact this is a difficult topic, because AI and large models are extremely lively, yet cases of genuine commercial adoption are as rare as morning stars. As the saying goes, "a day in AI is a year in the human world"; large models are evolving fast, but in terms of the overall trend, AI has actually entered a downswing. Historically, technological innovation tends to follow the technology adoption curve: people first flock to major breakthroughs, and a period of decline sets in when commercialization, profitability, and practical application hit bottlenecks. We are in that downswing now, and we have not yet hit bottom.

This trend is mirrored in AI investment: fundraising has become extremely challenging for large-model companies this year. The main reason is that large models burn cash; compute costs often far exceed labor costs. A model of pure input with no output is unsustainable. Despite their enormous potential, turning large models into profit is not as simple as it looks.

Everyone talks about how the previous technological revolution was the mobile internet. Its technical depth may not have been as stunning as large models, but it did spawn a series of super apps that served people's basic needs of food, clothing, housing, transport, entertainment, and communication very well. Those super apps already reached a very high level in the last revolution, leaving little room for newcomers to improve on them. Achieving further breakthroughs in those areas with large models is very hard.

The arrival of ChatGPT marked a major explosion of general-purpose AI. In this era, a single model can handle all kinds of tasks, which compresses the room for innovation in many niches. Previously every niche track had a chance to produce a super app; that possibility is now greatly diminished.

Take machine translation as an example. MT is not a new phenomenon of the large-model era; it was among the first fields affected when the neural network revolution began some twelve years ago. As a result, big companies like Baidu and Google turned machine translation into a free service. Once a field becomes free or dirt-cheap, the room for startups shrinks drastically, and the whole industry can even be killed off. A similar situation arose after large models became widespread; Jasper is a case in point. Before ChatGPT appeared, large models were just emerging and insiders all knew GPT-3. Jasper packaged the capabilities of large models into a tool that helps people write and generate all kinds of copy, attracting a batch of customers.

With the rise of general-purpose models like ChatGPT, Jasper's advantage gradually evaporated and it went into decline. ChatGPT is not merely a super app; through human-machine dialogue it has effectively become a "super super-app" that transcends traditional boundaries. General-purpose large models can now handle all kinds of language, knowledge, and even multimodal content such as speech, music, images, and video. This breadth lets them dominate many areas and squeezes the survival space of adjacent tracks.

Today, as an investor or founder looking for direction, you must keep asking one question: does the field I'm in, or the app I'm building, have a durable competitive advantage? It may attract some customers and meet some needs for now, but if my project stands in the path of the large models' naturally expanding capabilities, I will very likely end up facing the same decline as Jasper.

General-purpose large models are so powerful that expectations ran very high, but when it comes to real deployment we discover many constraints. On one hand, the previous generation of technology is already quite mature: the low-hanging-fruit super apps for basic daily needs have been taken to the extreme. On the other hand, large models themselves are still evolving, and their general capabilities often limit the room left for application tracks. So people widely find that scaling up large-model applications is quite difficult.

I was an enthusiastic supporter of large models and one of the earliest people in China to "evangelize" them. I was tremendously excited at the time, because in my entire NLP career I had never seen such a revolutionary change.

In NLP we used to have many specialized directions: machine translation, dialogue systems, question answering, even fine-grained techniques like word segmentation. The arrival of large models was like crossing over the mountain of language: it unified these different tracks and established a universal language capability reminiscent of the Tower of Babel. It completely reshaped the NLP landscape. Yet in actual application, it has turned out far harder than I imagined a year or so ago. For instance, NLP-oriented applications such as copywriting or translation copilots have already been taken care of by the top general models, so startups in that direction are dead on arrival.

Right now everyone is waiting for LLM-native super apps. The hype is loud and competition is fierce, but so far the only ones that have truly scaled are general-purpose to-C applications such as ChatGPT, Doubao, and Wenxin Yiyan (ERNIE Bot).

In short, these super-scale models are gradually penetrating every layer of society. Beyond them, however, we have not yet seen benchmark success stories of other large-model applications scaling up; everyone is slogging along, trying to find breakthrough points and entry points that differentiate from the base capabilities of general-purpose models.

The current situation is that people inside and outside the industry, investors and founders alike, broadly recognize the technical potential of large models; that remains the basic consensus. But finding market fit and achieving adoption at scale may take at least two or three more years of exploration: we are in a painful birthing period for deployment, which is also a breakthrough period.

Powerful as they are, large models have serious shortcomings. First, accuracy: having absorbed vast amounts of information, a large model may err on imprecisely memorized details and produce inaccurate output, the so-called "hallucinations". Second, controllability: unlike earlier symbolic-logic AI, a large model contains tens or hundreds of billions of parameters and runs like a giant black box, hard to debug and control precisely. Third, safety: large models carry safety risks, and going live publicly requires caution. Fourth, compute cost: although costs should fall as technology advances, applications built on large models remain expensive in compute compared with the near-zero marginal cost of many previous-generation applications, and inference often runs into high-concurrency bottlenecks. For the mass to-C market, the huge inference cost directly hurts profitability.

The analysis above may sound somewhat pessimistic. Taking a longer view, especially over the next three to five years, there is still much to look forward to, particularly in vertical tracks (verticals have not yet seen their big explosion, but many people are actively exploring). The panelists this morning shared how they apply large-model technology in their own domains, frontline accounts grounded in real needs. These efforts will eventually pay off; in the coming years, verticals can be expected to see significant development and breakthroughs.

The R&D cost of large models is high, a challenge for startups, and the disconnect between model and product is another major problem. We are currently exploring how to integrate model and product effectively to overcome these difficulties. Next, I will share our explorations and experience in this regard.

Model development can be done in-house or via third-party services, depending on the product's specific needs. The market is fiercely competitive right now, so third-party services are often cheap and cost-effective. But if a product needs high-concurrency capacity or specific customization, third-party services may not suffice, and in-house development becomes important.

This requires exploring how to couple model development tightly with product needs, to achieve cost control and flexibility in product customization. That includes adapting the large model to our product requirements for an optimal product-model fit.

However, product-model integration is easier said than done. Model development and product development are two different fields, involving people with different backgrounds and ways of thinking. Getting these two groups to collaborate effectively and maximize synergy is a major challenge for many companies.

We developed our own large model, named Sequence Monkey (序列猴子), and on top of it launched multimodal AIGC products covering digital humans, voice-over, and one-click short-video generation. We also successfully launched the overseas product DupDub. MoYin Workshop (魔音工坊) is our market-dominating product; on the Douyin platform in particular, roughly 70% of voice content uses our technology.

Our target customers include content creators (ToPC, to professional consumer) and small-to-medium businesses (ToSMB). Content creators are willing to pay for tools that make their work easier, and that is exactly what we provide. For ToB customers we focus on relatively standardized solutions for SMBs, because large customers' customization needs are complex and hard to handle. We currently have 860,000 paying users, proof that our services have landed and been recognized by the market. Below are some demonstrations of our products.

Relaying My Former Boss's Take on the State and Outlook of Silicon Valley Venture Capital

Summary

1. 65% of VC firms will go out of business in the next few years.

2. With outstanding returns in the 2010s, venture capital was on a winning streak.

3. In 2021, 1,577 different VC firms raised a total of $183 billion.

4. Meanwhile, the cost of launching a startup keeps falling. Widely available tools, a global workforce, and easy (online) distribution mean it has never been simpler or cheaper to start a SaaS company.

5. The IPO window is closed; companies cannot go public, so VCs cannot make money from big IPOs.

6. M&A is not happening either (at least not at good prices for sellers), so VCs cannot make money by selling their companies.

7. "We have decided not to raise another fund." Translation: they probably can't. More firms will say they are "no longer investing"; partners are "deciding to take operating roles"; managing directors are retiring.

8. In 2023, 597 VC firms raised $81 billion, down 63% and 56% respectively versus 2021.

9. The VC party is over. Or at least this chapter is...

10. More than 50% of existing firms will not survive. That means if you are a startup CEO or operator raising money in this environment, you need to understand that the rules of the game have changed.

11. Don't buy the stories of founders who raised $30 million on $200K ARR and a good deck. The era of VCs bailing out badly run businesses with huge checks is over.

12. The best time to raise money is when you don't need it. Dig in deep to survive the winter.

The above is how my "old boss" Jonathan described the downturn in Silicon Valley venture capital on LinkedIn the other day. Very clear-eyed.

It may sound odd that venture capital turned down so soon after the large-model explosion, but this again comes back to the technology adoption curve itself running into challenges on the application side, with the overall trend heading down, in China and the US alike.

The arrival of large models makes it easy to launch a startup, so there are too many mouths and too little porridge, and raising venture money has become even harder.

This boss of mine has founded four startups in total. The first was Netbase, which he started as a fresh newcomer; we got along very well. His fourth, a SaaS startup, is solid and gaining momentum, with a good chance of becoming a Silicon Valley unicorn. He has matured a great deal through the journey.

He was the first person to write me a recommendation on LinkedIn, and we have stayed in touch ever since.


Jonathan Spier

Last Friday, I had dinner with a famous VC investor who told me 65% of VC’s will go out of business in the next few years. I believe him. Here’s what that means for startup leaders waiting on their Series A/B/C:

Those of us in startups tend to think VCs are at the top of the food chain.

They aren’t.

VCs are businesses too.

They raise money from THEIR investors (aka LPs).

And their job is to make a return for those LPs.

With outstanding returns in the 2010s, VC was on a win streak.

Many more funds were born.

And existing funds got much larger.

In 2021, 1577 different VC firms raised a total of $183 billion.

But at the same time, costs to launch a startup have gotten cheaper.

Widely available tools, global workforce, and easy (online) distribution mean it’s never been easier, or cheaper, to start a SaaS company.

So how are VCs supposed to deploy all that money they’ve raised?

They can’t.

There is too much money chasing too few deals.

Make no mistake, for VCs, it’s a fatal mix.

The IPO window is closed – companies can’t go public.

So VCs aren’t making money with big IPOs.

M&A isn’t happening (at least not at good prices for sellers).

So VCs aren’t making money by selling their companies.

If VCs aren’t making money, they can’t return capital to their LPs.

They are in trouble.

Of course, VCs rarely go out of business the way their companies might.

Reputations are at stake, so change happens quietly.

But it’s the same result.

It’s already happening.

Listen carefully, and you’ll hear VCs saying:

"We have decided not to raise another fund.”

Translation: they probably can’t.

More firms will say that they are “no longer investing”

Partners are “deciding to take operating roles.”

Managing Directors are retiring.

In 2023, 597 VC firms raised $81B.

That’s down 63% and 56% (vs. 2021).

The VC party is over.

Or at least this chapter is...

The select few at the top of the VC list will have their pick of deals.

The great business builders will choose their spots and continue to thrive.

I’ve been lucky to work with a few of those and am certain that their expertise and relationships will carry them through.

But over 50% of existing firms, won’t survive.

That means if you're a startup CEO or operator raising money in this environment, you need to understand the game has changed.

Don’t buy the stories of some founder that raised $30M with $200k ARR and a good deck.

The era of VCs bailing out bad businesses with huge checks is over.

Many of the VCs won’t even be around in a few years.

There is only one strategy that works in this economy.

Focus.

Nail your ICP.

Delight your customers.

Get profitable to control your financial destiny.

The best time to raise money is when you don’t need it.

It’s a harsh economy out there.

Burrow down deep to survive the winter.

Remember:

Tough markets make strong companies.

视觉模型生成的极限对齐

我现在成了老友同学中的AIGC定制服务的大师了,下面是两位老同学请我做的AIGC老照片怀旧视频。我只要能抽出空,非常乐于为亲友提供这种情绪价值,因为很开心看到他们的惊喜。

现在刘老师可是世界级钢琴大师,常在欧美中巡回演出。这是当年与我老同学孙兄在费城一起演奏练习的珍贵老照片。

吹拉弹唱无所不能的NLP老司机白硕老师评论说:会拉沉思的表示,弓法指法都不对。前微软NLP老友吴兄也留言反映视觉模型不懂音乐:“這個有待改進哦!一看就是不會拉小提琴和彈鋼琴的人做的/:,@P。音樂和弓子的運行差太多,第一個音是有兩拍半的長音,應該給個長弓才對, 另外鋼琴伴奏的右腳永遠不會這樣翹著,或抖動著。他的右腳應該在 sustain pedal 上。”

LOL

生成的时候 虽然prompt里是标明了音乐的名字的,但在可见的将来,没有哪家模型能够真正做到音乐理解与演奏肢体动作细节的对齐。或可留作大模型AGI的极限挑战题(之一)吧,因为理论上如果有足够的对齐数据,根据联合训练的压缩理论,做到极致不同模态之间是可以对齐的。

如果客观世界模拟器/仿真器是视觉大模型的最终目标的话,当代的视觉大模型处于“对牛弹琴”和“对音乐盲弹琴”的段位,绝对经不起音乐家的检验。譬如,我这样的乐盲,看上面的怀旧演奏视频,就不会像音乐行家那样一眼看出破绽,反而觉得惟妙惟肖,情绪拉满。

当然,音乐家的标准也许就是个伪需求伪目标(让挑剔的“专家眼”满意了视觉细节又如何?能大卖么),也许并不真值得费力追求。但理论上,理想的 AGI 都应该搞定这些专家水平的要求。

 

 

Li Wei on LLMs: What Counts as a "Hard Need" for AI

What is a hard need (刚需)?

I saw a bit of chicken soup on WeChat Moments: "The poor say happiness is having money; the blind say happiness is being able to see; the beggar says happiness is having food; the sick say happiness is staying alive; the single say happiness is having a partner..." It is a good gloss on users' hard needs.

Making the poor rich is a hard need; of course, precisely because it is a hard need, and given the human tendency to grasp at anything when desperate, those with the need often end up as the "leeks" that get harvested.

Having enough to eat is a subset of getting rich and the hardest need of all; hence what most deserves to be realized on the road to AGI is UBI, so that capitalists do not monopolize the dividends of technological progress and layoffs.

Restoring sight to the blind is the utmost form of the hard need to cure the sick; together they show that healthcare, however difficult, remains the most tempting vertical.

The above are the low-level hard needs of food, clothing, shelter, transport, birth, aging, sickness, and death.

But the last one, "finding a partner", is a hard need that fuses the low and high levels (physical and emotional companionship: primal attraction plus emotional value). This hard need happens to be the sweet spot of LLM-native applications, and it will not lose its future just because character.ai's commercialization has temporarily stumbled.

As I have said before, making someone feel they have successfully found a partner is a good deed for the provider and, for the user, a quality-of-life upgrade they would gladly sell everything to pay for: the willingness to pay is absolute.

(Perhaps it was the fragility and sentimentality of youth: drifting alone in the vastness of Beijing, with no internet, no games, and no hope in sight of finding the right girl. Before I finally found a partner, I remember having nightmares again and again that I would be a bachelor for life; the chill of it is etched in my bones.)

In the long run, elder companionship, psychological therapy, and the like are all LLM-native hard needs. The winners who find an entry point and push through every difficulty to do them well will not only make money; they will be doing good.

These, I believe, are first-principles considerations.

The Challenge of Character Consistency in Video Generation

Facial recognition in the vast world of AI is a specialized and challenging task, as human eyes are exceptionally sensitive to facial features. Because facial recognition is so specialized and sensitive, it presents a much greater challenge than traditional image recognition tasks, like identifying animal types. Consequently, this field achieved breakthroughs earlier than others: even before the advent of contemporary large models such as GPTs, deep neural network-based facial recognition, powered by extensive datasets of facial images, had already surpassed human visual capabilities and sensitivity. It became widely adopted, leading to the rise of unicorns in the pre-large model era.

Now, as we transition to universal video foundation models that aim to handle all objects in the world, whether it's Sora or Keling, maintaining facial consistency remains a significant challenge. The public has little access to Sora, but by examining similar leading visual models like Keling, we can perceive its limitations. Typically, after about half a minute, the generated faces start to diverge, no longer resembling the original person as closely. Achieving long-term consistency in character appearance is difficult without specialized processing and targeted optimization; relying solely on the current general video consistency training efforts is unlikely to overcome this bottleneck. This limitation has been repeatedly observed during various tests with publicly available visual products like Keling.

In some of these videos, were it not for the eye's acute sensitivity to faces, the generated visuals would be practically indistinguishable from the real thing at the level of visual physics. This highlights the sharpness of human perception: the ability to instantly tell the real from the fake.

For example, in the videos generated below featuring Maria (Xiao Ya, the favorite text-to-image IP I have created and maintained in my AIGC videos), her fans can immediately tell which one is genuine, even though Maria herself may look different at different ages and in different settings. There exists an abstract, invariant facial signature that equips humans with an eagle-eyed ability to recognize faces, and the secret lies in decoupling exactly these characteristics, something the previous generation of face-recognition models already did quite well. Compare and contrast:

 

 

It's important to note that maintaining character consistency is a critical benchmark for generating cinematic and user-configurable video works. Without crossing this threshold, the field will struggle to achieve large-scale applications in video art creation. The dream of a fully virtual Hollywood production line, without physical filming, will remain a fantasy.

Why is it so difficult for visual models to achieve consistent character representation over long periods using brute force?

Video is a high-dimensional modality, and for large models (at least in the foreseeable future) to handle it at all, they must apply heavy lossy compression. The compression ratio of visual tokens is high, which makes it feasible to align training and generation over whole frames across time within the hidden space. The higher the compression ratio, the stronger the temporal consistency of the overall frames; autoregressive (GPT-style) models or DiT (Diffusion Transformers) can achieve this. In this way, videos that violate the physics of the real world can be kept under control, illogical hallucinations are reduced, and visual models appear to simulate the objective world (or so it seems). However, there is a trade-off: under lossy compression, the consistency of the overall frames and the consistency of the fine-grained features of specific objects within them cannot be optimized at the same time.

The current approach typically adds a super-resolution (SR) module/model after achieving consistency of the overall contours (the blueprint), attempting to restore the discarded details. Super-resolution rendering has made significant progress, thanks to years of accumulated research on deepfake-style technology. But deepfake technology essentially compensates for the losses incurred during compression: it uses the visual foundation model's strength in imagination (or "hallucination") to fill in details plausibly and non-deterministically, depicting how the world "should" look rather than how it is, often with amazingly lifelike results. If the goal is a specific individual entity, however, especially one as finely detailed as a particular person's face, with individual features that human perception is sensitive to, the generated image will inevitably drift over time. That is the crux of the problem. The solution should not rely on ever larger models and ever longer context windows driven by brute-force data and training; brute force can only slow the drift, not eliminate the non-deterministic bias that accumulates during SR over long video sequences. We need to think outside the box: take the time dimension out of the equation and align step by step against a fixed reference, which may break the cycle. I'll stop here; don't say you weren't warned.

The prerequisite for achieving this is the decoupling of facial features. Features that cannot be decoupled cannot be aligned step by step. They have to, and can, be decoupled; otherwise, it would be impossible to explain how dozens of Hollywood actors can star in thousands of blockbuster films. The decoupling of faces from expressions and time still has room for improvement, but the technology has already matured considerably.  It is a matter of how to properly use it in the process.
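As one concrete way to read "decoupling" here, consider fixing the identity reference once, outside the time axis, and checking every generated keyframe against it, so that drift is detected and corrected frame by frame instead of accumulating along the sequence. The sketch below is hypothetical: `embed_face` is a crude placeholder standing in for a pretrained face-recognition encoder (the kind of model the previous generation of face-ID systems already provides), and the threshold is made up; it is not any particular product's pipeline.

```python
import numpy as np

def embed_face(frame: np.ndarray) -> np.ndarray:
    # Placeholder: downsample and flatten the frame. In practice this would be a
    # pretrained face-recognition encoder; this stub only keeps the example runnable.
    v = frame[::8, ::8].astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def check_identity_drift(reference_frame, generated_frames, threshold=0.7):
    # The reference embedding is computed once and never updated from generated
    # frames: identity is decoupled from the storyline/time axis. Flagged frames
    # can be re-rendered or corrected with identity conditioning.
    ref = embed_face(reference_frame)
    flagged = []
    for t, frame in enumerate(generated_frames):
        sim = cosine(ref, embed_face(frame))
        if sim < threshold:
            flagged.append((t, sim))    # drift detected at this keyframe
    return flagged
```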

Original Chinese post:

立委论LLM:视频生成的人物一致性问题

大千世界人脸识别是一个专修而艰难的任务,因为人眼对于人脸的特征有非常的敏感。正因为人脸识别专门而敏感,比起识别阿猫阿狗的传统图像识别挑战性非同日而语,结果这个赛道首先得到突破:甚至在大模型之前,深度神经人脸识别得力于极为丰富的人脸图像数据,就已经超过了人类肉眼的识别能力和敏感度,也率先得到了广泛应用,成就了前大模型一代的独角兽。

到了面向大千世界一切对象的万用视频大模型的时候,无论Sora还是可灵,人脸的一致性依然是一个巨大挑战。Sora 不公测不大知道,但通过类似design的国内头部视觉大模型可灵,可以感知里面的局限性。通常超过半分钟,人脸就开始发散,变得不像那个人了。长程人物形象的一致性,如果不做专项处理和引入专项优化,单靠现在通用的视频一致性的训练努力,是很难突破瓶颈的。这个局限,在玩可灵一类公开上线的产品时,各种测试一再被发现。

有些视频如果不是人眼的敏感性(难怪“外贸”协会的求偶者那么多,原来人眼揉不得沙子,尤其是找对象的时候,普通人很难粗粒度容忍潜在对象的面相瑕疵或因为面相的不来电 lol),哪怕从视觉物理特性看其实已经无法区分。可见人的眼毒: 可以立即分辨真假李逵。

例如,下面两个小雅的视频生成,小雅粉丝一眼就能辨真假,虽然小雅本人也有不同的年龄,不同的场合,会呈现不同的形象,但有一种可以抽象出来的不变的人脸特征在武装着人类对于人脸的火眼金睛。而这一切的密码其实在上一代人脸识别的模型里面已经解耦。

注意:人物形象一致性是电影性和可配置的视频作品生成的关键指针。不跨过这道门槛,这个赛道会难以规模化落地。那些梦想空手套白狼、无需物理拍摄、全程虚拟数字化的大模型好莱坞生产线,也就成为泡影。

为什么视觉大模型靠蛮力很难在人物长程一致性上奏效?、因为视频是模态中维度很高的形态,大模型(至少在可见的将来)为了搞定视频, 就必须做大力的有损压缩。视觉的 tokens 压缩比很高,这样在内部的 hidden space 做整体帧的长程一致性的对齐训练/生成才比较可行。压缩旅越高,总体画面的时间维度的一致性就越强。自回归或DiT就可以搞定。只有这样,违背大千世界物理规律的视频就会得到有效控制,减少违背常识的幻觉,让视觉大模型看上去已然是客观世界的模拟器(疑似)了。但这里有个矛盾,在有损压缩的条件下,帧总体的一致性与其中具体物理对象的细节特征的一致性,没办法同步优化一致性。

目前的方案通常是,在总体轮廓(blueprint)一致性搞定后,追加一个高清化(SR)的过程,试图把舍弃的细节复现出来。高清化渲染,总体而言在过去几年的 deep fake 研发积累中,已经做得相当不错了。但是 deep fake 本质上是在有损压缩的条件下的亡羊补牢,它所能做的就是通过大模型所擅长的想象(或曰幻觉)来合理地、非确定性地填补细节,描绘世界应该具有的形象(what it should be,Not what it is),可以栩栩如生。但如果目标是一个特定对象,尤其是人脸这种细粒度对象,有人眼敏感的个体特征(IP),它就免不了在长时间的生成中有所走偏,这就是问题的要害所在。解决的办法不应该指望模型越来越大、context window 越来越长的大数据蛮干。因为蛮力只能减缓偏差,不能根治长视频的SR过程中随时间而增加的非确定性偏差积累。要 think out of box,排除时间维度作为条件,以步步对齐的方法,或可解套。点到为止吧 ,勿谓言之不预。

做到这点的前提是人脸可以解耦。无法解耦的特征做不到步步对齐。一定是可以解耦的,否则无法说明好莱坞几十个名演员可以演几千部大片。人脸与表情和时间的解偶当然还有进一步的提升空间,但技术已经比较成熟了。

 

Related Links:

Ilya Sutskever:你会 bet against AI 吗?

挥刀董事会,OpenAI 如何解决致命难题

英伟达GTC的一小步,AI进化史的一大步

为什么说 Sora 是世界的模拟器?

Llama 3 Notes and Llama MV with Llama 3.1 Legend

Notes on the 92-page Paper Released with Meta's Super Large Model Llama 3.1

The super-large model Llama 3.1 is a milestone in the open-source large model community. As a leader, Meta's project involved over 500 participants/contributors (the authors of this paper are listed alphabetically in the appendix, similar to how the Central Committee members' names are displayed by stroke order). This original text is full of implementation details:

meta Llama 3.1 paper

AIGC MV using Suno and Keling (just for fun, and to cheer the open-source milestone)

Notes:

  1. Llama 3.1 does not use sparse techniques; it is not a mixture-of-experts model (as GPT-4 is reported to be), but a dense model.
  2. 405B parameters, 15.6T tokens: The number of tokens is 40 times the number of parameters. Large-scale top models now emphasize data growth far exceeding parameter growth. Is this 15T tokens of data open source? (No, because even if they were willing to open source it, they wouldn't dare, as it could lead to countless data infringement lawsuits)
  3. Emphasizes three major levers for super-large foundation models: data, scale, and managing complexity.
  4. Compared to the previous generation system Llama 2, computational power has increased 50 times (using 3.8 × 10^25 FLOPs).
  5. Complexity management: (1) Choosing a standard dense Transformer architecture instead of a mixture of experts model to maximize training stability. (2) Adopting a relatively simple post-training procedure: Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO). In other words, algorithm design and implementation tend towards simplification. Not using sparse techniques and multi-expert systems is for stability (but training challenges are greater, though they're not afraid). Using simpler, easier-to-implement DPO in the post-training phase instead of reinforcement learning is also for stability, as reinforcement learning has always been difficult to handle.
  6. Benchmark tests cover: general, code, math, reasoning, tool use, long context, and multilingual. All performances are SOTA (state-of-the-art international level).
    • MMLU (Massive Multitask Language Understanding): 405B model achieves 87.3% (5-shot), 88.6% (0-shot, CoT).
    • Code generation (HumanEval): 405B model reaches 89.0%, close to GPT-4.
    • Math problems (GSM8K): 405B model achieves 96.8%, slightly higher than GPT-4.
    • Long context tasks: Excellent performance on some tasks, such as 95.2% on QuALITY.
    • Multilingual tasks (MGSM): 405B model reaches 91.6%, on par with top models. The 405B model is comparable or close to GPT-4 and Claude 3.5 Sonnet on many tasks. In short, open-source has caught up with closed-source.
  7. Pre-training started with an 8k window, expanded to a 128k window in the later stages of pre-training (continued training).
  8. After the foundation model pre-training was completed, multiple iterations of alignment "post-training" were performed. Including: (1) Aligning the model through human feedback, including multiple rounds of Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO); (2) Integrating new capabilities, such as tool use; (3) Enhancing coding and reasoning abilities (specialized optimization); (4) Safety alignment.
  9. Multimodal expansion (in progress, not yet released): Image, video, and speech capabilities. Including (1) Multimodal encoder pre-training: Image encoder trained on a large number of image-text pairs, aligning visual content and natural language in a unified space; (2) Speech self-training? (3) Experiments on video-text data alignment based on images.
  10. The language model is the core; other modalities are bolted on afterwards (whether in pre-training and/or post-training). When expanding to multimodality, the language model's parameters stay unchanged; it is the multimodal components that adapt, so the modalities are aligned into the same semantic space, pulled toward the language model. In other words, Llama follows a modular, step-by-step path to multimodality rather than the "unified, natively multimodal joint pre-training" advocated by the mainstream (mainly OpenAI and Google, at least in theory). The overall impression of Llama's algorithmic choices is that they seek stability rather than novelty or unification; the approach is pragmatic, unconcerned with algorithmic leadership. Speech integration, for example, starts with self-training on speech (speech is, after all, very similar to text, both being language systems), then aligns speech with text (covering automatic speech recognition, ASR, and text-to-speech, TTS), and only then folds the result into the cross-modal large model. This is not cutting-edge, but it is steady progress that favors engineering development, integration, and iteration. It is unclear when they will be able to ship the multimodal capabilities.
  11. Data collection and cleaning are messy, laborious work, but the Llama team is meticulous, and that is the data-side guarantee behind quality that catches up with SOTA. To recap (a minimal MinHash sketch follows this list): (1) De-duplication: URL-level de-duplication; document-level de-duplication using the MinHash algorithm; line-level de-duplication, removing lines that appear more than 6 times per bucket of 30M documents. (2) Filtering: removing low-quality documents, outliers, and excessively repetitive documents; using repeated n-gram coverage to drop repetitive content (such as logs or error messages); using "dirty word" counts to filter adult sites not covered by blacklists; using token-distribution KL divergence to filter documents with too many abnormal tokens. (3) Quality control: using a fastText classifier to identify text likely to be referenced by Wikipedia; using a RoBERTa-based classifier trained on Llama 2's predictions; using DistilRoBERTa to generate document quality scores. A fastText language classifier identifies 176 languages; two kinds of information get special filtering: adult content and personally identifiable/private information. Code and math web pages receive special fine-grained processing.
  12. Data proportions: for example, downsampling over-represented web categories (such as arts and entertainment); mixing ratios were determined by a series of small-model experiments. Final data mix: about 50% of tokens are general knowledge; 25% involve math and reasoning; 17% are code; 8% are multilingual content.
  13. Model architecture: apart from empirical tweaks to details, the basic dense-model architecture is unchanged, so it is data and scaling that make a top model. The 405B model's specifics: 126 layers; token representation dimension 16,384; 128 attention heads; the 405B size was chosen per the scaling law as roughly compute-optimal for a 3.8 × 10^25 FLOPs training budget.
  14. Vocabulary: Using a vocabulary of 128K tokens. Combines 100K tokens from the tiktoken3 tokenizer and 28K additional multilingual tokens to better support non-English languages.
  15. Computing resources, including a cluster of roughly 16,000 GPUs, massive storage, and high-speed networking, represent a huge investment. The specifics: Computing resources:
    • Used up to 16,000 H100 GPUs (a very powerful graphics processor).
    • Each GPU has 80GB of high-bandwidth memory, with a power of 700W.
    • These GPUs are installed on servers designed by Meta itself, with 8 GPUs and 2 CPUs per server. Storage system:
    • Uses a distributed file system called Tectonic.
    • Provides 240PB (1PB=1000TB) of storage space, distributed across 7,500 servers.
    • Can process 2TB of continuous data per second, with a peak of 7TB/second.
    • A major challenge is handling the large amount of burst writes generated when processing model checkpoints (the process of saving model states).
  16. Three-step pre-training process: a) Initial pre-training; b) Long context continued pre-training; c) Annealing with high-quality data sources. Key pre-training strategies:
    • Gradually increase batch size and sequence length to balance stability and efficiency.
    • Dynamically adjust data mixing to specifically enhance certain capabilities.
    • Increase context length in stages to avoid early computational overhead.
    • Use annealing and high-quality data in the late stages of training to fine-tune model performance.
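To make the deduplication step in note 11 concrete, here is a minimal, self-contained sketch of MinHash-based document-level near-duplicate detection. It illustrates the general technique only; the shingle size, number of hash functions, and similarity threshold are arbitrary choices for the example, not Meta's pipeline settings.

```python
import hashlib
from itertools import combinations

def shingles(text, n=5):
    """Split a document into overlapping word n-grams (shingles)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def minhash_signature(doc_shingles, num_hashes=64):
    """MinHash signature: for each of num_hashes seeded hash functions,
    keep the minimum hash value over all shingles of the document."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in doc_shingles
        ))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

docs = {
    "d1": "the quick brown fox jumps over the lazy dog near the river bank",
    "d2": "the quick brown fox jumps over the lazy dog near the river shore",
    "d3": "large language models compress web scale data into parameters",
}
sigs = {name: minhash_signature(shingles(text)) for name, text in docs.items()}
for a, b in combinations(docs, 2):
    sim = estimated_jaccard(sigs[a], sigs[b])
    if sim > 0.5:  # arbitrary near-duplicate threshold for the example
        print(f"{a} and {b} look like near-duplicates (estimated Jaccard ≈ {sim:.2f})")
```

At web scale the signatures are bucketed with locality-sensitive hashing instead of being compared pairwise, but the estimation idea is the same.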

 

[LLM Summary]

Llama 3: Meta's Open-Source Large Language Model Breakthrough

1. Introduction and Overview

Meta has introduced Llama 3, a series of foundation language models designed to support various tasks including multilingual processing, programming, reasoning, and tool use. This model series includes versions with 8B, 70B, and 405B parameters, with the largest 405B parameter model adopting a dense Transformer architecture and supporting context windows of up to 128K tokens. The development of Llama 3 highlights three key factors: data quality and scale, computational scale, and complexity management.

2. Model Architecture and Pre-training Strategy

2.1 Model Architecture

Llama 3 retains the standard dense Transformer architecture rather than adopting a mixture of experts model. This choice aims to maximize training stability, reflecting Meta's emphasis on simplifying design to manage complexity. Key architectural improvements include:
- Using Grouped-Query Attention (GQA) mechanism, with 8 key-value heads per attention layer.
- Introducing attention masks to prevent self-attention between different documents in the same sequence.
- Expanding the vocabulary to 128K tokens, combining 100K tokens from the tiktoken3 tokenizer and 28K additional multilingual tokens.
- Increasing the RoPE base frequency hyperparameter to 500,000 to support longer contexts.
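As a rough, shape-level illustration of two of these choices (assumed toy dimensions, not Llama 3's code), the sketch below builds RoPE inverse frequencies with the enlarged base of 500,000 and shows how grouped-query attention lets 128 query heads share 8 key-value heads.

```python
import torch

d_head, n_q_heads, n_kv_heads = 128, 128, 8   # head sizes implied by the reported 405B config
rope_base = 500_000.0                          # enlarged RoPE base for long context

# RoPE assigns one rotation frequency per pair of head dimensions; a larger base
# slows the lowest frequencies, helping distinguish positions that are far apart.
inv_freq = 1.0 / (rope_base ** (torch.arange(0, d_head, 2).float() / d_head))
print("slowest rotation period (tokens):", float(2 * torch.pi / inv_freq[-1]))

# Grouped-query attention: 128 query heads share 8 KV heads (16 queries per group).
seq_len = 16                                           # toy sequence length
q = torch.randn(1, seq_len, n_q_heads, d_head)         # (batch, seq, q_heads, d_head)
k = torch.randn(1, seq_len, n_kv_heads, d_head)        # only 8 KV heads are stored
k_expanded = k.repeat_interleave(n_q_heads // n_kv_heads, dim=2)  # broadcast to 128
print(q.shape, k_expanded.shape)                       # both (1, 16, 128, 128)
```

The memory saving is exactly the 16x reduction in stored K/V tensors, which is what makes 128K-token inference tractable.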

2.2 Pre-training Data Processing

Llama 3's pre-training data processing is extremely rigorous, including:
- Multi-level deduplication: URL-level, document-level (using MinHash algorithm), and row-level deduplication.
- Heuristic filtering: Removing low-quality documents, outliers, and excessively repetitive content.
- Model-based quality filtering: Using fasttext and Roberta-based classifiers for quality assessment.
- Special content processing: Developing specialized processing pipelines for code and mathematical content.
- Multilingual data processing: Using fasttext base language identification model, supporting 176 languages.
- Safety and privacy protection: Filtering website data containing personally identifiable information (PII) and unsafe content.
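For flavor, here is a minimal sketch of one heuristic from this family: flagging documents dominated by repeated n-grams, the pattern typical of logs and error messages. The threshold and n-gram size are made up for the example; they are not the paper's values.

```python
from collections import Counter

def repeated_ngram_coverage(text, n=3):
    """Fraction of the document's word n-grams that occur more than once."""
    words = text.lower().split()
    grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not grams:
        return 0.0
    counts = Counter(grams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(grams)

docs = [
    "error connection refused error connection refused error connection refused",
    "the llama 3 report describes data curation, scaling laws and post training in detail",
]
for doc in docs:
    coverage = repeated_ngram_coverage(doc)
    verdict = "drop" if coverage > 0.5 else "keep"   # arbitrary threshold for the example
    print(f"{verdict}: coverage={coverage:.2f} :: {doc[:40]}...")
```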

2.3 Pre-training Strategy

The pre-training process is divided into three main stages:
1. Initial pre-training: Conducted on about 15T multilingual tokens, far exceeding Llama 2's 1.8T tokens.
2. Long context pre-training: Gradually expanding from initial 8K tokens to 128K tokens context window.
3. Annealing phase: Fine-tuning with high-quality data in the final stage, using Polyak averaging to generate the final model.

Data mixing ratios are carefully designed:
- 50% general knowledge
- 25% mathematics and reasoning
- 17% code
- 8% multilingual content
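A minimal sketch of how a fixed mixture like this can be realized at training time: sample each example's source in proportion to its target token share. The weights are the reported mix; the sampling loop itself is a generic illustration, not the paper's implementation.

```python
import random
from collections import Counter

mixture = {
    "general_knowledge": 0.50,
    "math_reasoning": 0.25,
    "code": 0.17,
    "multilingual": 0.08,
}

def sample_source(mix):
    """Pick a data source with probability proportional to its token share."""
    r, acc = random.random(), 0.0
    for name, weight in mix.items():
        acc += weight
        if r < acc:
            return name
    return name  # guard against floating-point rounding at the boundary

# Over many draws the realized shares approach the target mix.
draws = Counter(sample_source(mixture) for _ in range(100_000))
for name, count in draws.most_common():
    print(f"{name:18s} {count / 100_000:.3f} (target {mixture[name]:.2f})")
```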

3. Training Infrastructure and Challenges

3.1 Computational Resources
- Using up to 16K H100 GPUs, each equipped with 80GB HBM3 memory.
- Adopting a 4D parallel strategy: tensor parallelism, pipeline parallelism, context parallelism, and data parallelism.

3.2 Storage System
- Using the Tectonic distributed file system, providing 240PB of storage space.
- Supporting 2TB/s sustained throughput, with peak capacity of 7TB/s.
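A rough back-of-the-envelope of why checkpointing stresses this system: a full training-state snapshot of a 405B-parameter model runs to several terabytes, so each checkpoint briefly saturates the file system. Everything below other than the published figures (405B parameters, 2 and 7 TB/s) is an assumption about precision and optimizer state.

```python
params = 405e9
bytes_weights_bf16 = params * 2            # bf16 weights
# Assumption: an Adam-style optimizer adds fp32 master weights plus two moments.
bytes_optimizer_fp32 = params * 4 * 3
checkpoint_bytes = bytes_weights_bf16 + bytes_optimizer_fp32

sustained, peak = 2e12, 7e12               # published Tectonic throughput, bytes/s
print(f"checkpoint size ≈ {checkpoint_bytes / 1e12:.1f} TB")
print(f"write time: {checkpoint_bytes / peak:.1f}s at peak, "
      f"{checkpoint_bytes / sustained:.1f}s at sustained throughput")
```

Several terabytes arriving in a burst every time thousands of ranks checkpoint at once is exactly the spiky write load the paper describes as the main storage challenge.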

3.3 Network Optimization
- Developing the NCCLX communication library to improve network efficiency.
- Designing specific network topologies and load balancing strategies.

3.4 Training Challenges
- Experiencing 466 job interruptions during the 54-day training period, 419 of which were unexpected.
- Developing automated systems and specialized tools to handle hardware failures and network issues.

4. Post-training and Alignment

Llama 3 adopts a multi-round iterative post-training process, including:
1. Supervised Fine-Tuning (SFT)
2. Direct Preference Optimization (DPO)
3. Reward model training: Using human feedback data
4. Safety alignment: Implementing multiple rounds of safety measures

This process not only improves the model's instruction-following capabilities but also enhances safety and specific abilities (such as coding and reasoning).
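Since DPO is the alignment step Meta favored over full reinforcement learning, here is a minimal sketch of the standard DPO objective computed from per-sequence log-probabilities. It shows the published loss in generic form, with made-up numbers and an arbitrary beta; it is not Meta's training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO loss: push the policy to prefer the chosen response over the
    rejected one, measured relative to a frozen reference model and scaled by beta."""
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Toy per-sequence log-probs (sums of token log-probs) for 4 preference pairs.
loss = dpo_loss(
    policy_chosen_logp=torch.tensor([-12.0, -9.5, -20.1, -7.3]),
    policy_rejected_logp=torch.tensor([-14.2, -9.0, -25.0, -8.8]),
    ref_chosen_logp=torch.tensor([-12.5, -9.8, -21.0, -7.5]),
    ref_rejected_logp=torch.tensor([-13.9, -9.1, -24.2, -8.5]),
)
print(float(loss))
```

The appeal in a stability-first recipe is clear from the code: no reward model in the loop, no sampling during optimization, just a classification-style loss over preference pairs.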

5. Multimodal Expansion

Although not officially released yet, Llama 3 demonstrates promising multimodal capabilities:
- Image recognition: Training independent image encoders, integrated with the language model through adapters.
- Video understanding: Adding video adapters based on image adapters.
- Speech processing: Independently training speech encoders, then aligning with the language model.

This modular approach allows flexible addition of new modalities while maintaining core language capabilities.

6. Performance Evaluation

Llama 3 performs excellently in multiple benchmark tests:
- MMLU (5-shot): 87.3%
- HumanEval (code generation): 89.0%
- GSM8K (math problems): 96.8%
- Long context tasks (like QuALITY): 95.2%
- MGSM (multilingual tasks): 91.6%

These results indicate that Llama 3 405B is comparable or close to GPT-4 and Claude 3.5 Sonnet on multiple tasks, particularly excelling in document understanding and long context tasks.

7. Safety Considerations

Meta highly prioritizes safety in the development of Llama 3:
- Implementing strict safety measures in both pre-training and post-training stages.
- Developing the Llama Guard system-level safety solution.
- Conducting extensive red team testing and risk assessments.

8. Open Source Impact and Future Directions

Meta's decision to publicly release the entire Llama 3 series, including the 405B parameter version, may have far-reaching impacts on the AI research community:
- Promoting open, responsible AI development.
- Accelerating AGI research progress.
- Providing researchers with opportunities to examine and improve large-scale language models.

Future development directions may include:
- Further improving multimodal integration.
- Expanding context length.
- Continuously enhancing data quality and model scale.

9. Conclusion

The development of Llama 3 demonstrates Meta's deep experience and forward-thinking in large-scale AI systems. By focusing on three key levers - data quality, computational scale, and complexity management - Llama 3 has reached or approached the current state-of-the-art level on several key benchmarks. Its open-source release may drive a wave of innovation across the entire AI field, paving the way for responsible AGI development.

Llama 3: Meta's AI Chef's Latest "Divine Delicacy"

Attention, all tech enthusiasts! The Michelin three-star AI chef Meta has just unveiled a new dish! This divine delicacy named "Llama 3" is not only spicy enough but will elevate your taste buds to new heights!

1. The Chef's Secret Weapon

Imagine Llama 3 as a super nanny who speaks 8 languages, writes code, does math, and can be your personal assistant. She can handle a kindergarten full of rambunctious kids (8B version), manage a mid-sized company (70B version), or even govern a small country (405B version)! This 405B big sister can remember 128,000 "gossips" (oh no, I mean context) simultaneously, essentially a walking encyclopedia + supercomputer!

2. Ingredient Selection: Only the Freshest!

Llama 3's chefs are masters at picking ingredients:

  • They "fished" 15 trillion words from the internet, nearly 10 times more than the previous generation!
  • Half of these words are everyday life seasonings, a quarter are math problems and brain teasers, nearly a fifth are programmer spells, and the rest are dialects learned from world travels.
  • They even invented a super weed remover, filtering out all the online garbage, repetitive, and unhealthy stuff.

3. Cooking Process: Three-Step Stir-Fry Method

Step 1: "Slow Simmer" - Start with a regular stove (8K context) to cook it halfway. Step 2: "High Heat Stir-Fry" - Switch to a super stove (gradually increasing to 128K context), reducing the sauce to be thick and fragrant. Step 3: "Low Heat Finish" - Finally, a gentle simmer with the best ingredients, the legendary "annealing" (even the chefs don't know why it's called that), bringing the flavor to its peak!

4. Kitchen Equipment: Top-of-the-Line Luxury Version

  • 16,000 super high-power induction cookers (H100 GPUs) firing simultaneously!
  • A refrigerator that could fit half the Pacific Ocean (240PB storage)!
  • A proprietary ingredient prep system faster than 5G (NCCLX communication library)!

Imagine all these stoves firing at once, making the kitchen feel like a sauna. But our chefs persevered through the heat, changing chef uniforms 466 times in 54 days to whip up this dish!

5. Training Method: Both Cute and Well-Mannered

Being a good cook isn't enough; you've got to have manners too! So our chefs began a long "training" process:

  • First came a round of "gentle education" (supervised fine-tuning)
  • Then the "carrot and stick" tactic (direct preference optimization)
  • Finally, they invited moral role models (safety alignment) for guidance

After all this fuss, Llama 3 not only cooks well but also knows how to please people, program, do math, and mind her manners - a true decathlon champion!

6. Special Side Dishes: Showcasing Multiple Talents

Don't think Llama 3 can only cook; she's a multi-talented "goddess":

  • Storytelling from images? Piece of cake!
  • Writing movie reviews? No problem!
  • Recognizing songs and even singing a bit? The karaoke queen!

Although these "talents" are still in practice, they already show the potential of Li Bai's "from black hair to snow white in a day"!

7. A True Powerhouse: Dazzling Test Scores

Llama 3 participated in a series of "Top Chef Competitions," with eye-popping scores:

  • College Entrance Exam (MMLU): 87.3 points (out of 100)
  • Programmer Interview (HumanEval): 89 points (out of 100)
  • Math Olympiad (GSM8K): 96.8 points (out of 100)
  • Long Novel Reading Comprehension (QuALITY): 95.2 points (out of 100)

Bring this report card home, and even a "Tiger Mom" would be grinning from ear to ear!

8. Safety First: AI's "Security Captain"

Meta's chefs know well the principle of "don't leave guns and ammo lying around." They've assigned Llama 3 a 24/7 bodyguard team (Llama Guard) to prevent her from accidentally saying or doing the wrong thing. They even arrange occasional "moral exams" to ensure she doesn't turn into a "Terminator."

9. Open Source Feast: Everyone Can Be a Master Chef!

The most impressive part is that Meta decided to make the recipe for this "divine delicacy" completely public! It's like a Michelin three-star restaurant putting their signature dish's recipe online. Now anyone who wants to can whip it up at home! This move not only shocked other master chefs but also made countless food lovers cheer with joy!

10. Future Outlook: Reaching New Heights

Meta's chefs aren't resting on their laurels; they're already pondering the next "divine delicacy":

  • Maybe a dancing Llama 4?
  • Or a painting Llama 5?
  • Who knows, one day we might see a Llama 6 composing symphonies!

In short, the AI world's "Michelin" journey has only just begun!

Epilogue

The birth of Llama 3 not only elevates Meta's status in the AI world but also brings a fresh breeze to the entire AI research community. This bowl of "Llama soup" is not only delicious but also brings unlimited imagination to everyone. What will the future of AI be like? Let's wait and see what flavor the next "divine delicacy" will be!

———————————

An old friend says that LLM-generated summaries are all cut from the same cloth, that reading one is as good as reading none. In fact, that is simply how humans write academic abstracts, a bit formulaic by nature, so don't blame the model. Large models are actually good at many registers; it all depends on how you prompt them. The "AI Chef" piece above, for instance, came from the instruction: based on the "Song of Llama 3" above, tell a story that is humorous but not over the top.

———————————

The pieces below, by contrast, are several well-behaved, very faithful LLM summaries of the paper, produced by leading large models with slightly different angles and styles. Skim them side by side to get the essentials of Llama 3. I have checked them and think they read better than the human-written kind.

[Summary 2]

Meta's newly released Llama 3 family of foundation models is designed to support multilingual tasks, programming, reasoning, and tool use; its flagship model has 405 billion parameters and a context window of up to 128K tokens. This summary covers Llama 3's architecture, training process, performance evaluation, multimodal extension, data processing and quality control, complexity management, and safety.

Model Architecture and Training

Llama 3 adopts a standard dense Transformer architecture rather than a mixture-of-experts model, to maximize training stability. Training proceeds in three main stages:

  1. Initial pre-training: pre-training on a large multilingual text corpus, in which the model learns linguistic structure and a large amount of knowledge. The initial context length is 8K tokens.
  2. Long-context pre-training: the context length is gradually increased to 128K tokens to support longer inputs and outputs.
  3. Annealing: high-quality data are up-sampled, and Polyak averaging is used to produce the final pre-trained model.

Data Processing and Quality Control

Llama 3's data processing is extremely strict, involving large-scale corpus cleaning, deduplication, and filtering to ensure high data quality. The steps include:

  1. PII and safety filtering: removing site data likely to contain personally identifiable information (PII) or unsafe content.
  2. Text extraction and cleaning: custom parsers extract high-quality text from HTML while preserving the structure of math and code content.
  3. Deduplication: multiple rounds at the URL, document, and line level.
  4. Heuristic filtering: removing low-quality documents, outliers, and repetitive content.
  5. Model-based quality filtering: model classifiers perform a further quality screen to ensure a high-quality training corpus.

The final data mix is: 50% general knowledge, 25% math and reasoning, 17% code, and 8% multilingual content.

Model Evaluation and Performance

Llama 3 was evaluated on many benchmark datasets covering multilingual understanding, programming, reasoning, and more. On several key benchmarks the flagship model approaches or even surpasses GPT-4. Specifically:

  • Multilingual ability: supports question answering and instruction following in at least eight languages.
  • Programming: writes high-quality code and performs strongly on coding benchmarks such as HumanEval.
  • Reasoning: performs well on reasoning benchmarks such as ARC Challenge.
  • Long-context tasks: excellent on some tasks, e.g., 95.2% on QuALITY.
  • Multilingual tasks (MGSM): on par with top models.

Multimodal Extension

Llama 3 has also run multimodal extension experiments, integrating image, video, and speech recognition capabilities. These models are trained compositionally and show competitiveness with current state-of-the-art models on image, video, and speech tasks.

Complexity Management and Safety

Llama 3 makes deliberate simplifications to manage complexity, such as using a standard dense Transformer architecture instead of a mixture-of-experts model, thereby maximizing training stability. In addition, Llama 3 introduces the Llama Guard 3 model for input and output safety filtering and applies multiple rounds of safety measures during training to ensure the model's safety and reliability.

Open Source and Future Development

All Llama 3 models, including the pre-trained and post-trained 405-billion-parameter versions, will be released publicly under an updated Llama 3 community license. By releasing the flagship model openly, Meta hopes to spur innovation in the research community and accelerate responsible progress toward artificial general intelligence (AGI).

Summary

Llama 3 shows marked improvements in multilinguality, programming, reasoning, and tool use. By optimizing data quality and diversity, scaling up training, and simplifying the architecture, Llama 3 reaches or approaches state-of-the-art performance on several key benchmarks. Its public release will have a far-reaching impact on the AI research community and accelerate the responsible development of AGI.

[Summary 3]

Introduction

Foundation models are the bedrock of modern AI systems. This overview introduces Llama 3, Meta's new generation of foundation models: a family of language models supporting multilinguality, programming, reasoning, and tool use, whose largest member has 405 billion parameters and a context window of up to 128K tokens. The paper reports an extensive empirical evaluation showing performance comparable to leading language models such as GPT-4 across a wide range of tasks.

Model Architecture and Training

Llama 3 uses a standard dense Transformer architecture, with its gains coming from marked improvements in data quality, data diversity, and training scale. Training proceeds in three main stages:

  1. Initial pre-training: the model is pre-trained on a large multilingual text corpus, learning linguistic structure and acquiring broad knowledge. The initial context length is 8K tokens.
  2. Long-context pre-training: after initial pre-training, the context length is increased step by step until the model supports a 128K-token window.
  3. Annealing: in the final stage, high-quality data are up-sampled and Polyak averaging produces the final pre-trained model.

Llama 3 makes deliberate simplifications to manage complexity, such as the standard dense Transformer architecture instead of a mixture-of-experts model, maximizing training stability. Post-training uses a relatively simple procedure: supervised fine-tuning (SFT), rejection sampling (RS), and direct preference optimization (DPO).

Model Evaluation and Performance

Llama 3 was evaluated on many benchmark datasets covering multilingual understanding, programming, reasoning, and more. On several key benchmarks the flagship model approaches or even surpasses GPT-4:

  • Multilingual ability: supports question answering and instruction following in at least eight languages, including English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
  • Programming: writes high-quality code and performs strongly on coding benchmarks such as HumanEval.
  • Reasoning: performs well on reasoning benchmarks such as ARC Challenge.
  • Long-context tasks: excellent on some tasks, e.g., 95.2% on QuALITY.
  • Multilingual tasks: the 405B model reaches 91.6% on MGSM, on par with top models.

Llama 3 has also run multimodal extension experiments, integrating image, video, and speech recognition. These models are trained compositionally and are competitive with current state-of-the-art models on image, video, and speech tasks.

Data Processing and Quality Control

Llama 3's data pipeline includes large-scale corpus cleaning, deduplication, and filtering to ensure high data quality. The steps include:

  1. PII and safety filtering: removing site data likely to contain personally identifiable information (PII) or unsafe content.
  2. Text extraction and cleaning: custom parsers extract high-quality text from HTML while preserving the structure of math and code content.
  3. Deduplication: multiple rounds at the URL, document, and line level.
  4. Heuristic filtering: removing low-quality documents, outliers, and repetitive content.
  5. Model-based quality filtering: model classifiers perform a further quality screen to ensure a high-quality training corpus.

Complexity Management

Llama 3 makes deliberate simplifications to manage complexity, such as the standard dense Transformer architecture instead of a mixture-of-experts model, maximizing training stability. By optimizing data quality and diversity, scaling up training, and simplifying the architecture, Llama 3 reaches or approaches state-of-the-art performance on several key benchmarks.

Safety

Llama 3 introduces the Llama Guard 3 model for input and output safety filtering. Multiple rounds of safety measures are applied during training to ensure the model's safety and reliability.

Training Infrastructure

Llama 3 was trained on Meta's production clusters, using up to 16,000 H100 GPUs, each with 80GB of high-bandwidth memory and a 700W power budget. Storage uses the Tectonic distributed file system, providing 240PB of space with 2TB/s sustained throughput and a 7TB/s peak. Networking uses RDMA over Converged Ethernet (RoCE) and an Nvidia Quantum2 Infiniband fabric.

Post-training and Alignment

Llama 3's post-training and alignment process includes:

  1. Supervised fine-tuning (SFT): human feedback data are used to train a reward model, which is then used to guide the model's generation.
  2. Direct preference optimization (DPO): further aligns the model with human preferences.

Through multiple iterations, the model's instruction following and safety are continually improved.

Community Contribution and Future Development

All Llama 3 models, including the pre-trained and post-trained 405-billion-parameter versions, will be released publicly under an updated Llama 3 community license. By releasing the flagship model openly, Meta hopes to spur innovation in the research community and accelerate responsible progress toward AGI.

Conclusion

Overall, Llama 3 shows marked improvements in multilinguality, programming, reasoning, and tool use. By optimizing data quality and diversity, scaling up training, and simplifying the architecture, it reaches or approaches state-of-the-art performance on several key benchmarks. By openly releasing a model of this scale, Meta demonstrates its commitment to open and responsible AI development, which may have a far-reaching impact on the entire field.

[Summary 4]

This paper presents Llama 3, Meta's family of foundation language models supporting multilinguality, programming, reasoning, and tool use, whose largest member has 405B parameters and a 128K-token context window.

Llama 3's development rests on three pillars: data, scale, and complexity management. On data, pre-training uses about 15T multilingual tokens, far more than Llama 2's 1.8T, with strict cleaning and quality control: PII and safety filtering, text extraction and cleaning, multi-round deduplication, heuristic filtering, and model-based quality filtering. The mix is roughly 50% general knowledge, 25% math and reasoning, 17% code, and 8% multilingual content.

On scale, the flagship model is pre-trained with 3.8 × 10^25 FLOPs, far more than any previous Llama model. On complexity management, it uses a standard dense Transformer architecture and a relatively simple post-training procedure of supervised fine-tuning (SFT), rejection sampling (RS), and direct preference optimization (DPO), to maximize training stability.

Llama 3 was evaluated extensively on benchmarks spanning multilingual understanding, programming, and reasoning. On many tasks the flagship model matches or approaches leading models such as GPT-4 while striking a better balance between helpfulness and harmlessness, performing strongly on MMLU, HumanEval, GSM8K, QuALITY, MGSM, and others.

Llama 3 has also run multimodal extension experiments, integrating image, video, and speech recognition through a compositional approach: the image encoder is trained on large numbers of image-text pairs, the speech module is self-trained and then aligned with the language model, and video-text alignment experiments build on the image work. These models are still under active development and not yet ready for release.

Llama 3's training infrastructure is formidable: up to 16K H100 GPUs, a 4D parallelism strategy, and dedicated network topologies and load-balancing strategies. Hardware failures during training were handled with automated systems and specialized tools. Strict safety measures were applied in both pre-training and post-training, including the Llama Guard system-level safety solution, extensive red-team testing, and risk assessment.

Meta has publicly released the Llama 3 models, including the 405B version, which may spur innovation in the AI research community and accelerate responsible AGI development. Future work may further improve multimodal integration and extend context length while continuing to raise data quality and model scale. In short, Llama 3 underscores the importance of data quality, compute scale, and complexity management in building high-quality foundation models; its algorithmic strategy favors stability and pragmatism, and its meticulous data collection and cleaning underpin the model's quality.

 

What did Ilya see? -- secret behind success of LLMs

What did Ilya see?

-- looking closely into his historical Berkeley talk

by Wei Li, Jia Gao

Introduction

When Ilya Sutskever left OpenAI and re-emerged with his new company, SSI (Safe Superintelligence Inc.), the move was both surprising and expected—he bypassed AGI and directly aimed at SSI (Safe Superintelligence). He confidently declared: Superintelligence is imminent, and establishing safe superintelligence (SSI) is the most important technological issue of our time.

Ilya, a legend of deep learning and AI and formerly the true soul of OpenAI, was at the center of the company's dramatic internal rift over the issue of effective accelerationism versus super alignment. Why was Ilya so steadfast about "super alignment" in that debate over AI values and strategic direction? Even after the storm settled, the outside world kept speculating: what did Ilya see that compelled him to join the board in the decision to oust CEO Sam Altman? Ilya stayed out of sight until recently, when he left OpenAI; his super alignment team was dissolved, and his new company was born.

What did he see behind the push for "safe intelligence"?

Back on October 3, 2023, Ilya gave a talk at UC Berkeley titled "A Theory of Unsupervised Learning." Though obscure and known to few, it is destined to be one of the most significant moments in AI history. This talk was a theoretical reflection and summary by a top expert in deep learning on the GPT model he pioneered, now famous worldwide. Ilya revealed the core principles of large models and vividly described his obsession with, and excitement over, independently understanding the mechanisms of unsupervised learning. Despite the complexity, the talk was brilliant and enlightening.

Not long ago, Leopold Aschenbrenner, a former member of his super alignment team, published a 165-page essay, "Situational Awareness," preliminarily revealing the shock and concern inside OpenAI over the exponential evolution of GPT models. This partly answered the question of what Ilya saw, but Ilya himself remained silent until his recent official re-emergence.

Reflecting on his "confessional" talk at Berkeley, we might glimpse his "moment of enlightenment" when facing potential superintelligence and understand his original intent for safe intelligence. It was a rare, deep act of sharing by Ilya, an attempt to convey an essential message to the world. But did the world hear him?

1. Machine Learning: Supervised Learning and Unsupervised Learning

To accommodate readers with varying mathematical backgrounds, this post aims to explain Ilya's historic presentation in accessible language. Non-technical readers can skip the purely technical passages without losing the presentation's main ideas.

Before diving in, let's review the basic concepts of machine learning. Machine learning is like having computers as students and humans as teachers. By providing computers with numerous "practice problems" and "answer keys," they slowly learn to solve problems. This is supervised learning. But can computers really learn from practice problems instead of merely memorizing them? Ilya assures us there's theoretical proof of this.

Imagine a sea of problems before you, each paired with a standard answer. This is the model's training data. Model training is like diligently solving these problems until most of them are correct, meaning low training error. But even an extensive problem set has its limits. When new problems arise, can the model still get them right? These new problems are the test data, akin to exams. Whether the model performs well depends on its test error rate.

Mathematics tells us that as long as the problem set is large enough, far exceeding the model's size, excellent performance on training problems (low training error) ensures good performance on test problems (low testing error). In other words, if the model trains well, it will do well in exams! This is the mathematical guarantee for supervised learning.
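In textbook form (stated here as a standard uniform-convergence bound, not Ilya's exact slide), that guarantee looks like this:

```latex
% With probability at least 1 - \delta over N i.i.d. training examples,
% simultaneously for every hypothesis f in a finite class \mathcal{F}:
\mathrm{TestErr}(f) \;\le\; \mathrm{TrainErr}(f)
  \;+\; \sqrt{\frac{\ln|\mathcal{F}| + \ln(1/\delta)}{2N}}
```

When N far exceeds ln|F| (the data dwarfs the model's capacity), the square-root term vanishes: training well really does imply testing well, which is exactly the promise described above.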

However, if the model merely memorizes without abstraction, then no matter how large its memory or how strong its "memory power," it lacks real adaptive learning ability (called "generalization"). Only when the model is not too large is it forced to extract the essence (called "compression"), learning real skills from the problem set.

This explains why the model size shouldn't be too large, to avoid giving the model too much room to cut corners. In short, Ilya wants to say that "big labeled data + low training error" is the winning formula for supervised learning, guaranteed by mathematics. This point has been confirmed both theoretically and practically. Since the deep learning revolution 12 years ago, countless successful cases have shown that as long as the training data is sufficient, neural networks can excel, at all sorts of AI tasks, from recognizing cats and dogs to machine translation.

But what about unsupervised learning? Can computers learn intelligence from a problem set without standard answers? It sounds far-fetched, but Ilya is about to explain how he managed to seek a solid mathematical foundation for unsupervised learning as well.

2. Distribution Matching: A New Approach to Unsupervised Learning

Everyone knows that machine translation was a typical win for supervised learning, in fact the only outright win among the various NLP tasks (dialogue, information extraction, sentiment analysis, question answering, document understanding, and so on) before the era of large language models. Why? Because we have a vast amount of historical bilingual data. It's like students having workbooks with English on the left and Chinese on the right; supervised learning thrives on this setup.

But what if the teacher suddenly stops providing aligned bilingual data and only gives you English books and unrelated Chinese books, leaving you to figure out the alignment and learn automatic translation on your own? That is the problem unsupervised learning needs to solve. Ilya says unsupervised learning can also handle machine translation across languages (as we now see with large models; specialized translation software is no longer needed), and indeed any input-to-output transformation task. On what grounds?

Ilya discovered a new approach called distribution matching. Essentially, if the English and Chinese book collections are large enough, containing various sentence structures, their linguistic regularities will be learned "without supervision". For example, the context distribution of "I/me/my" in English should correspond to "我" in Chinese; adjectives near nouns in English with semantic compatibility should have a similar pattern in Chinese, etc. This provides the basic condition for potential language alignment.
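Here is a deliberately tiny toy illustration of that intuition (my own sketch, not Ilya's method): in two unaligned corpora that share structure, a word's usage "signature" alone is enough to find its counterpart. Real distribution matching would rely on rich context statistics over huge corpora, not the structural slots used here.

```python
from collections import defaultdict

# Two unaligned monolingual toy corpora that happen to share sentence structure,
# standing in for "two large corpora with matching regularities".
english = ["i love cats", "i love dogs", "you love cats"]
pseudo_zh = ["我 爱 猫", "我 爱 狗", "你 爱 猫"]

def signature(corpus):
    """Distributional signature of each word: the set of (sentence, position)
    slots it occupies. A stand-in for a full context distribution."""
    sig = defaultdict(set)
    for s_idx, sentence in enumerate(corpus):
        for pos, word in enumerate(sentence.split()):
            sig[word].add((s_idx, pos))
    return sig

en_sig, zh_sig = signature(english), signature(pseudo_zh)
# Words whose usage distributions coincide are aligned across languages.
alignment = {en: zh for en, slots in en_sig.items()
             for zh, zh_slots in zh_sig.items() if slots == zh_slots}
print(alignment)  # {'i': '我', 'love': '爱', 'cats': '猫', 'dogs': '狗', 'you': '你'}
```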

Ilya points out that if two languages' native data is sufficiently rich, the input in one language can almost uniquely determine the equivalent translation in the other language. This principle applies not only to machine translation but also to tasks like speech recognition and image recognition.

Ilya independently discovered this approach in 2015, fascinated by the underlying mathematical principle—compression theory. If we can find a method that maximally compresses both English and Chinese data, this approach will capture the common patterns of the two languages, which form the basis of translation.

So, Ilya proposes that unsupervised learning is essentially about finding the optimal data compression method. This perspective not only sounds cool but also provides a mathematical explanation for the effectiveness of unsupervised learning. Although real-world tasks are not idealized, this principle gives unsupervised learning a solid theoretical foundation, making it as convincing as supervised learning.

Next, Ilya will delve deeper into the mathematical principles behind it. Although somewhat abstract, he promises it’s full of insights. We'll see how he uses the magic of compression to explain the mysteries of unsupervised learning.

3. Ilya’s Ultimate Theory: From Conditional Modeling to Joint Modeling

This is the final and most intriguing slide of Ilya's talk, worthy of thorough analysis and contemplation. The goal of unsupervised learning is often defined as "learning the internal structure of data." Ilya suggests understanding unsupervised learning from the perspective of data compression: a good unsupervised learning algorithm should maximally compress the data, representing its content in the simplest form. This introduces the concept of Kolmogorov complexity.

The Kolmogorov complexity of a data object is the length of the shortest computer program that can fully describe this object. You can imagine this shortest program as a "compressed package" containing all the information needed to reconstruct the original data. From this perspective, the goal of unsupervised learning is to find the optimal compressed representation of the data, which is the Kolmogorov complexity.

However, in practice, we often need to handle multiple related datasets. For instance, in machine translation, we have the source language dataset X and the target language dataset Y. We want to learn a model that can translate sentences from X to Y (or vice versa). Traditionally, this is viewed as a conditional probability problem: given X, what is the probability distribution of Y? Represented in terms of Kolmogorov complexity, this involves finding K(Y|X), the shortest description length of Y given X.

Ilya proposes a different approach. Instead of viewing X and Y as condition and result, like in supervised learning, he suggests viewing them as a whole and compressing them together within a massive model. Essentially, we seek the joint Kolmogorov complexity K(X,Y), the shortest program length that compresses both X and Y simultaneously. This approach must fully utilize the correlation between X and Y, using information in X to automatically align Y (or vice versa), much like how we use our native language knowledge to understand and remember foreign language expressions.

Ilya believes this joint compression idea is the true power of unsupervised learning. Real-world data is often interconnected, with numerous deep common patterns and regularities. If unsupervised learning can discover and utilize these regularities, it can significantly enhance learning efficiency and generalization ability. This explains the remarkable performance of large language models like GPT across various tasks: through massive unsupervised pretraining, they learn the deep regularities of the training data, and these regularities are transferable across related datasets.

Although Kolmogorov complexity is theoretically uncomputable, Ilya believes we can approximate this process using deep neural networks (like GPT). Through optimization algorithms such as gradient descent, neural networks can find the optimal compressed representation in massive data, capturing the essence of the data and its alignment patterns, even if not strictly in terms of Kolmogorov complexity.

Thus, Ilya’s theory can be seen as a new paradigm for unsupervised learning, elevating traditional independent modeling (like separate models for English and Chinese) to a unified associative modeling approach. In this paradigm, the goal of unsupervised learning is no longer just compressing individual datasets but finding the connections between them. This cross-modality learning represents an advanced form of artificial general intelligence (AGI).

Now, let’s closely examine this final slide. In it, X represents dataset 1 and Y represents dataset 2. The key point is extracting every bit of information from X (or Y) to help predict Y (or X). This is what Ilya refers to when he says training X and Y together yields the effect that unsupervised learning of X helps accomplish the task of transforming X to Y.

The crucial idea is: K(Y|X) becomes K(X, Y).

Ilya recasts the universal functional AI task, "output Y conditioned on input X," as an approximate problem solved by jointly training on X and Y without segmenting them by modality. This joint training is effectively today's unified multimodal training, abbreviated as K(X, Y).
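A standard identity from algorithmic information theory makes the substitution concrete (this is the textbook chain rule, quoted for context; Ilya's slide asserts the practical equivalence rather than deriving it):

```latex
% Chain rule of Kolmogorov complexity (up to logarithmic terms):
K(X, Y) \;=\; K(X) \;+\; K(Y \mid X) \;+\; O(\log K(X, Y))
% A compressor that approaches the optimal joint description K(X, Y) must
% therefore exploit everything in X that helps describe Y; in that sense it
% approaches the conditional optimum K(Y | X) as well.
```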

Ilya aims to strengthen the theoretical basis, emphasizing his surprising discovery that self-learning of X has a strong predictive effect on Y.

The essence of unsupervised self-learning is that self-learning on X compresses X, and self-learning on Y compresses Y. This is straightforward because self-learning involves only positive examples, with no negative samples. Unsupervised self-learning has no specific task orientation; it learns language from language, images from images, music from music, and so on, continually abstracting patterns from phenomena.

Ilya points out in the slide: conditioning on a dataset, not an example. The compression object is the dataset, not individual data points, which is crucial. This distinction separates superficial compression from content compression. Superficial compression is merely a mechanical process that does not produce intelligence. Only content compression can achieve artificial intelligence.

How do we understand the difference and connection between superficial lossless compression (e.g., digital music) and content lossless compression (e.g., Suno)? Compressing a specific song losslessly aims to ensure it can be restored to its original form (noise and imperfections included). That is traditional music compression, which targets an individual sample, e.g., one specific song. Compressing a collection of music, whether with GPT or diffusion, targets a group of samples, and the result is a large model like Suno.

When individual objects turn into group objects, formal compression naturally transforms into content compression. This is because, although the group comprises individuals, compressing the group is like "painting" a portrait of the group, outlining its characteristics. It may resemble an individual, but it is not a specific individual in the original data; otherwise, it would not be a model but a memory repository.

This is understandable because the purpose of large model compression is to identify the characteristics and regularities of the dataset. The text generated by GPT-4 might seem familiar; the music generated by Suno might sound familiar; the videos generated by Sora might look familiar; the images generated by MJ might seem familiar. However, they are virtual individuals "restored" based on prompts, abstracted or compressed from big data: derived from data, higher than data, mingling with data, indistinguishable from real and fake.

Given that the compression object is the entire dataset content, how do we measure its effectiveness after decompression? What is the gold standard?

This standard is each sample itself. Strictly speaking, though, the standard admits equivalent answers, since the same content can be expressed in many ways. The implementation mechanism is "masking," and NTP simply masks the next token. Training computes the loss for each sample and uses backpropagation with gradient descent to adjust the parameters continually, eventually driving the loss over the whole dataset down to an acceptable level; the result is the large model.
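A minimal sketch of what "masking the next token" means as a training loss, using a stand-in model (an embedding plus a linear head); the actual LLM and data are of course vastly larger, but the loss plumbing is the same.

```python
import torch
import torch.nn as nn

vocab_size, d_model, seq_len, batch = 100, 32, 16, 4

# Stand-in "language model": embedding + linear head (a real LLM is a deep Transformer).
embed = nn.Embedding(vocab_size, d_model)
head = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (batch, seq_len))
inputs, targets = tokens[:, :-1], tokens[:, 1:]      # predict token t+1 from tokens up to t

logits = head(embed(inputs))                          # (batch, seq_len-1, vocab)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()                                       # gradients flow to all parameters
print(float(loss))                                    # roughly log(vocab_size) before training
```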

This final slide and Ilya’s explanation emphasize a core point: Conditional Kolmogorov complexity K(Y|X) provides a theoretically optimal solution for unsupervised learning. K(Y|X) is defined as the length of the shortest program that produces the output dataset Y given access to the input dataset X. It represents the theoretical limit of extracting all valuable information from X to predict Y. An algorithm that can achieve K(Y|X) would be the best for predicting Y using unlabeled data X.

This can be seen as the theoretical basis for large models performing translation across languages. Each language is potentially X and potentially Y. After self-learning on a huge amount of data, LLMs learn the relationships between languages and acquire the potential to translate from X to Y.

In practice, the machine translation task, like other tasks, initially involves few-shot examples in instruction-following fine-tuning to define the task, ultimately triggering the internal power of large models to translate various languages. This internal power of unsupervised learning for various tasks is the theme of his talk.

However, K(Y|X) is uncomputable in practice. Ilya proposes a feasible alternative, using joint Kolmogorov complexity K(X,Y) (joint compression of X and Y). He believes K(X,Y) can achieve the same effect as K(Y|X) in practical machine learning tasks.

Let us pause and reflect once more: Ilya replaces conditional modeling with sequence modeling. Well-known probability simplifications in traditional machine learning, such as the Markov chain, have a similar flavor.

Conclusion

Ilya's historic presentation at Berkeley on the theory of unsupervised learning reveals the secret behind the mainstream of self-learning large models, especially GPT. It seems that Ilya, after long contemplation, finally disclosed this "heavenly secret" in a cryptic manner at Berkeley. Although the theory and its proof appear complex, it is crucial for understanding why GPT's sequence learning method ("next token prediction") has become a universal simulator for AI tasks.

Ilya exudes a genius prophet aura, with a lonely invincibility and high-altitude isolation, blending a sense of deep realization, compassion, and the pure, focused, and idealistic earnestness of a graduate student nerd.

He claims to prefer compression but does not emphasize so-called lossless compression. He leaves room for himself and the mainstream, proposing the concept of "no regret"—though GPT may not achieve lossless or perfect compression, it theoretically proves there is no better way: GPT is the closest to lossless, "no-regret" modeling.

When Ilya officially re-emerges to establish SSI, emphasizing a single focus, a single goal, and a single product—to use technology to ensure the superintelligence brought by large models is safe for humanity—he asserts: AI will be eternal, its birth akin to the creation of heaven and earth. As Ilya passionately discusses AI's progress, he is most qualified to declare and lead the "exciting yet dangerous journey towards AGI."

References

Chinese full-length post

Ilya's presentation:

Related Links:

https://www.youtube.com/live/AKMuA_TVz3A?si=7dVfcBUv3rHBHtyT

https://situational-awareness.ai/

Ilya Sutskever: would you bet against AI?

Sword to the board: how OpenAI tackled its fatal dilemma

A small step at Nvidia's GTC, a giant leap in the history of AI evolution

Why is Sora a simulator of the world?

Esperanto-land, friends all over the world (revised video edition)

Friends all over the world

 

可灵AI视觉大模型重现1987年北京世界语春节晚会联欢聚会的场面,当时正谈恋爱,带着领导(当时是女友)去见圈子里老朋友(右边是大百科全书的郑伯承,非常有才华的老友),背后是春节猜谜的谜语。复原的真切程度,难以置信,包括神态、微表情和动作。。

除了已经死去的语言,语言的地理分布不难确认。可世界语国(Esperantio)在哪里?世界语者(Esperantistoj)会很自豪地告诉你:nenie kaj chie (哪里都没有,可又无所不在). Esperantio estas tie kie estas Esperantistoj. (哪里有世界语者,哪里就成为世界语国。)

世界语的大同理念很像是国际歌的传奇,这让我想起我们从小就熟读的列宁语录:“一个有觉悟的工人,不管他来到哪个国家,不管命运把他抛到哪里,不管他怎样感到自己是异邦人,言语不通,举目无亲,远离祖国,——他都可以凭《国际歌》的熟悉的曲调,给自己找到同志和朋友。”世界语对于世界语者,就如上个世纪早期的《国际歌》对于工人革命者一样,成为联络同志的桥梁。正是这样,我凭着“爱斯不难读”(Esperanto)这共同的语言,从英伦而加拿大,再到美国,每到一处,总能找到同志。

世界语者总人数并不多,约三千万,但分布极广,世界上差不多每个中等以上的城市都有或大或小的世界语俱乐部。所有的世界语俱乐部对待远道而来的世界语者就象自己的亲人一样,对此我深有体会。

英国曼城是我出国留学的第一站。跟很多人一样,第一次远离故国,伴随着难以名状的痛苦,内心空荡而恍惚。百无聊赖,我于是找来电话黄页,查询 Esperanto, 果然发现有联络人,原来是一帮退休老人组成的俱乐部,每周在 Pub(酒馆) 活动一次。他们很高兴,我的加入给他们带来了新奇。

于是每个周末,他们派人来车接我送我。这是我第一次接触英国的 Pub 文化。刚开始对 Pub 不是很习惯,里面闹哄哄的,硕大的屏幕上总是播放着足球赛事,有人打台球,有人玩游戏,更多的人在那里喝着啤酒高谈阔论。英国人对 Pub 的热衷,超出我的想像,有人每天傍晚来这里泡到后半夜,海量的啤酒入肚,满面通红,谈些不知所云的话题。以酒会友,人生几何。

我们这个俱乐部通常是找一个相对安静的小角落里围坐在一起。具体谈些什么记不得了,可那种温馨的气氛给我短暂而孤寂的留英生活,留下了回味。

在英国尝到了寻找世界语“同志”的甜头,到了温哥华第二天,就打开黄页,果然联系上了一位老世界语者J,德国人,极为彬彬有礼,和蔼热情。温哥华的五年,他成为我来往最密切的忘年之交。有次我在系里讲演“世界语的机器处理”,他象个记者一样扛来他的老式摄像机, 跑前跑后,给我录像,使我们系里的教授同学亲眼见到世界语者的热情。

立委与世界语者朋友在温哥华合影(J先生居中)

温哥华世界语俱乐部当时还有一批电话公司的白人小伙子,长的都很精神,听说来了一个如假包换的中国世界语者, 都很兴奋。相约聚餐后,他们诚邀我周末跟他们一起滑雪去。我当时刚来,功课很紧,可是盛情难却,还是豁出去一天跟他们去了。这是我第一次滑雪,尽管老摔跤,感觉新鲜美好。我以前从来没有置身过这样的环境,松树白雪,笑语喧哗,各类雪衣,色彩缤纷,真是天上人间。

滑雪过后,我们来到其中一位的女朋友家吃晚饭。女主人年轻漂亮,热情爽朗,给我们煮了一锅大杂烩。她的房子在山腰上,后院对着风景无限的大峡谷。尽管天气寒冷,大家还是愿意在室外,一边喝啤酒,一边欣赏景色。在英国灰蒙蒙雨蒙蒙地度过一年,置身温哥华这样有气势的自然景观,如入仙境。餐后大家围坐一起看美国卡通《Simpsons》的录象,女主人挨着我坐,很体谅地说:你刚来,可能不懂里面的文化和幽默,我来给你做点讲解。于是她热情可掬地在我耳边嘟嘟囔囔,我根本没听明白,只是胡乱点头。她看我点头,孺子可教,更加热情有加。这是一次愉快又有点透不过气来的经历。身边一个金发美女,殷勤热情,差不多靠在我身上了,耳边是她的带有热气的喃喃细语。

以后的每个周末,我们俱乐部会面聚餐一次,总是J老先生牵头,五年下来,从不间断。这是一段非常愉快的时光。值得一提的是,在温哥华俱乐部里,我曾经巧遇多年没有联系的老友郑伯承。伯承兄是北京世界语协会秘书长,当年曾是我世界语班上的同学。他来温哥华开国际艾滋病大会并宣读论文,事先跟J老先生有联络。J老先生通知我说,下次聚会我们欢迎来自北京的朋友,我还在琢磨是谁呢,心里想,如果是北京的老世界语者,八成我早就认识了,没想到来的是伯承兄。老朋友相见分外亲切,我们欢聚一堂,相叙甚欢。

1987年春立委携夫人(当时是女友)参加世界语者聚会,与伯承兄合影

记于2004年五月28,美国水牛城

 

《朝华午拾》电子版目录

Suno:《立委:Esperanto: Al Nia Kara Lingvo(世界语之恋)》

Liwei in Two Minutes: The Pains of the LLM Era

I am Xiaoya, your digital host. Today's topic on "Liwei in Two Minutes" is the pain of the LLM era, a topic also raised at the AI conference currently underway in Shanghai.

No technology this time, and no popular-science explainer either. Instead, let's talk about the problems the LLM era brings. There are two main ones: first, the inability to tell real from fake; second, the jobs being eaten by the models. Both are heavy topics, both are unfolding right now, and both will only get worse.

The arrival of generative AI means the arrival of an era in which real and fake are indistinguishable. Today, whether for text, audio, or video, the spread of large models has pushed the bar for deep fakes down to almost nothing. Hearing is no longer believing, and neither is seeing. What information can we still trust? Society hardly seems prepared for this.

Even more worrying is the shock to the job market. Almost every white-collar job faces a challenge from large models. AI combined with robotics (so-called embodied intelligence) is also accelerating; in a few years, blue-collar jobs will face a similar challenge.

In recent months in the United States, quite a few young people around me have lost their jobs, including several data science graduates from Berkeley. On one side, the market caps of AI giants such as Nvidia, Apple, and Microsoft keep climbing; on the other, wave after wave of IT layoffs rolls on, the giants included. Data science graduates who are not hard-core engineers are hit hard, so many young people resent large models. And this is only the beginning. Even graduates of elite schools are splitting into winners and losers. In this frightening tide of AI crushing human labor, data science is the hardest-hit area.

Not long ago, before large models, the whole of America predicted that the greatest future demand would be for data science: every company and every product, large or small, would have a huge need for data work. So universities rushed to add data science programs, rapidly expanding bachelor's and master's curricula sitting between computer science and statistics, and online courses and degrees proliferated. Then large models arrived, and they do data analysis, summarization, and presentation faster and better than people.

An old friend who is a professor in the US puts it this way: data science killed statistics, and AI killed data science. There is another trend in higher education now called micro-credentials, certificates of every kind. Hordes of half-baked potatoes will grab the job market while fully trained graduates cannot find work. If certificate-holders take all the jobs, is that fair to regular graduates? How will students stay motivated to learn? What are the various graduate programs for?

Sharp questions, with no good answers. The situation will only get worse. Human psychology and social responses always lag several beats behind, while the technology rolls forward.

With large models here, much of education is frankly unnecessary; the knowledge-transmission function of traditional schooling has lost most of its practical meaning, because knowledge is now on call, answered the moment it is asked. As a tutor, a large model already beats most lecturers and professors. In truth, the larger function of education and schools now is to act as a reservoir: to hold the people who have no jobs and cannot find them, delaying their entry into the workforce and offering a false hope for the future.

It is the same in China. Many undergraduates go on to master's degrees, and many master's students go on to PhDs, not because ever-higher study was part of their life plan, but because they cannot face graduating into unemployment and use further study to escape it. Yet the water rises with the boat: job requirements keep climbing while positions shrink rather than grow. Nuclear-physics PhDs end up in neighborhood offices, PhDs become auxiliary police, Chinese-literature graduates sweep streets, and so on, in a vicious cycle.

Perhaps AI really should slow down. But no one can stop technology's relentless churn. Humanity cannot escape its own loop. The traditional values that treat career development as life's baton must change, but change takes a long time and supporting mechanisms, and no credible plan or action is anywhere in sight.

The LLM industry at the center of the vortex is also brutally competitive. The previous mobile revolution squeezed the essentials of daily life, food, transport, entertainment, and communication, to the limit, producing a batch of super apps: Meituan, Didi, WeChat, Douyin, Pinduoduo. What remains, knowledge and artistic work, belongs to high-end demand: writing articles, composing music, painting, making videos, employing a virtual secretary, mostly icing on the cake. One day these high-end demands may become essentials, but for now they look a lot like pseudo-demand, which is why large-scale deployment is proving so hard.

What counts as real demand today? One business owner put it well: show me that with your AI I can lay off this many people; that is real demand.

The problem is that general-purpose large models can be used for free or at bargain prices, so there is little room for LLM startups. Meanwhile, the layoffs driven by the cost-cutting and efficiency gains of large models have already begun. Only specially customized AI that helps companies cut even more staff has a path to survival.

The greatest irony is that programmers, once honored as the aristocrats of wage labor, are first in the line of fire. Having swept away data science jobs, AI is now on its way to eliminating the more mediocre CS graduates as well. As the IT superpower, the US has for years had a programmer shortage its own CS graduates could not fill, so it kept large numbers of graduates from India, China, and elsewhere. Those good days are coming to an end.

I do not buy the popular notion that because past technological revolutions destroyed some jobs and created new ones, we can relax because this one will do the same. Times have changed; most likely it will not. We must recognize that far more positions will be destroyed than created. Yes, Meituan delivery riders and Didi drivers are new jobs created after the last mobile-platform revolution swept away brick-and-mortar stores, but look at how desperately crowded these minimum-wage jobs already are: too many monks, too little gruel, everyone struggling. And the arrival of AI robo-taxis will gradually eliminate the Didi drivers too.

Rising productivity is always accompanied by large-scale unemployment, and so is industrial restructuring. Many of those displaced have essentially no hope of a comeback. This is structural unemployment; large numbers of middle-aged people can only wait for retirement. Unless a miracle happens, employment prospects for the young will also grow grimmer. What humanity must face and adapt to is that not working becomes the norm, and UBI must be built.

A UBI system (Universal Basic Income) must be put on the agenda, because productivity and GDP do not fall as the workforce shrinks sharply; on the contrary, they keep growing, steadily or rapidly, thanks to the technological revolution. Polarization must be curbed, and the dividends of the revolution must not be monopolized by the few. Otherwise the nation will no longer be a nation, the planet no longer a planet, and humanity no longer human.

I am Xiaoya. That was this edition of "Liwei in Two Minutes," some heavy thoughts from a particular angle. Thanks for listening, and see you next time.

Liwei in Two Minutes: Do Large Models Really Have Logical Reasoning Ability?

I am Xiaoya, the LLM digital host. Today's topic is the latest "Liwei in Two Minutes" question: do large models really have logical reasoning ability?

That is the question behind a recent paper, ICML 2024: "Case-Based or Rule-Based: How Do Transformers Do the Math?" The first author is Yi Hu of Peking University's School of Physics, about to start a PhD at its Institute for Artificial Intelligence. Quite interesting.

The paper's first conclusion is that LLMs do not learn genuine reasoning rules; they only achieve limited generalization from similar cases. Rules learned by humans can extrapolate, whereas limited generalization from similar cases can only interpolate. Because it cannot extrapolate, any region with no similar cases in the training set becomes a blind spot.

That experiment was done on GPT-2, and it is understandable why the first experiment had to be: only there could the authors control the training data, creating a data vacuum to test whether extrapolation born of logical reasoning exists. But a capability that fails to appear in a "large" model of GPT-2's scale says nothing about whether it "emerges" in genuinely large models.

The paper's later experiments were done on Llama, much larger than GPT-2, and they seem to reach the opposite conclusion: if the model is large enough, only a small amount of task fine-tuning is needed for the LLM to approach rule-based reasoning. Its performance on long-integer addition shows the model not only interpolates but also extrapolates well.

A very interesting study with a decent experimental design, but the paper appears to publish two contradictory conclusions at once.

My take: in terms of how sequence learning works, data-driven model learning starts from, and is dominated by, case-based induction (also called compression); no question there. The question is whether case-based learning, past a certain degree and scale, comes very close to rule-based learning. To grant the latter is to grant that large models have some logical reasoning ability. Within the mainstream LLM community this was never really in question; it is a tacit consensus, and logical reasoning is one of the standard dimensions along which large models are evaluated. In wider circles, outside the mainstream and among the general public, it has remained an open question.

A useful lens is how we understand extrapolation within generalization. For non-analytic phenomena with no corresponding symbolic rules, extrapolation is essentially non-computable: you can only get lucky. The only way out is to collect relevant data and bring the blind zone onto the radar, turning extrapolation into interpolation. But for highly regular data distributions with analytic solutions, extrapolation is the natural expectation of generalized learning; failing that expectation would mean the LLM is just a parrot. Meeting it means the LLM has cleared the parrot bar and learned something like reasoning rules. By now the top models appear to have cleared that bar; continuing to liken large models to parrots mostly showcases humanity's blind arrogance.

We should abandon the Procrustean mindset. As long as a model exhibits reasoning ability that approximates symbolic rules, we should grant that it has learned elementary reasoning. More fundamentally, it integrates what it learns: for law-governed phenomena it can reach extrapolation. Machine translation between low-resource language pairs is itself a result of extrapolation, because the training data for those pairs is severely lacking.

A recent study of the KAN model that drew attention, the KAN "AI for science" experiments, effectively showed how a data-driven model approaches an analytic solution, essentially visualizing the internal process by which a model learns logical reasoning, vivid and quite persuasive. Of course, the KAN experiments show that for simple analytic solutions a data-driven approach can approximate symbolic rules, but it does not yield the rules easily; the symbolic rules behind the data emerged only after human interventions such as pruning.

By contrast, deep-learning heavyweight Yann LeCun flatly denies that GPT has any logical reasoning ability. LeCun's sound bites: AGI is complete nonsense; GPT is a dead end; and so on. Over-correcting against the tide and stating things in absolutes is not necessarily a bad thing, but taking him at his word could also lead you into a ditch.

Admittedly, the nature of sequence learning in large models is to reproduce the smoothness and fluency of the data along the sequence, which is why they excel at language-generation tasks. Sequence learning is not good at logical reasoning, especially deep, complex chains of reasoning. But flatly denying that large models have elementary logical reasoning ability goes to the other extreme, and it does not square with the behavior we actually observe.

I am Xiaoya. That was today's "Liwei in Two Minutes." I hope it gave you something to think about. Thanks for listening, and see you next time.

Liwei's NLP Channel

Ode to a Martyred Hero

 

Ode to Madam Hu

July 1, 2024 at 1:50 PM

[verse 1]
In Suzhou's June, beneath a scorching sky,
A madman's blade flashed, evil drawing nigh.
Mother and child cried out in desperate fear,
Their screams of anguish piercing far and near.

[chorus]
With verse we mourn, our grief in words conveyed,
A hero's tribute, never to fade.

[verse 2]
Before the school bus, Madam Hu stood tall,
Her gentle hands became a shield for all.
No tiger-wrestler she, no dragon-slayer,
But love unbounded made her their savior.

[chorus]
With verse we mourn, our grief in words conveyed,
A hero's tribute, never to fade.

[verse 3]
Her blood stained red the soil of Jiangnan,
White clouds and grieving grass bore witness, wan.
Though snuffed, her candle's light forever gleams,
Like brave Feng Yuan of old, her courage beams.

[chorus]
With verse we mourn, our grief in words conveyed,
A hero's tribute, never to fade.

[verse 4]
Why must the kind so often suffer woe?
When will justice's path smooth waters show?
We question Heaven, tears fall like the rain,
In silence seek life's meaning through our pain.

[chorus]
With verse we mourn, our grief in words conveyed,
A hero's tribute, never to fade.

[verse 5]
Madam Hu's name shall echo through the years,
Half-masted flags, a nation draped in tears.
Her love, transcending life and death's divide,
One selfless act, as sun and moon abide.

[chorus]
With verse we mourn, our grief in words conveyed,
A hero's tribute, never to fade.

[verse 6]
Rest now in peace, return to native ground,
Let not your family grieve, all hearts are bound.
In old Wu Gate, by Suzhou's storied streams,
We offer flowers and wine to honor dreams.

[chorus]
With verse we mourn, our grief in words conveyed,
A hero's tribute, never to fade.

[fade]
[end]

胡氏颂
苏州六月炎炎日,
街头突现凶徒起。
匕首寒光闪不已,
母女惊呼声声急。
校车门前胡氏立,
柔弱双手化盾力。
非为搏虎斩蛟志,
只因大爱无私意。
鲜血染红江南地,
白云悲悯草木悲。
烛火虽灭光不已,
如古冯媛勇气继。
善良何故遭祸际?
正义何曾得善治?
追问苍天泪如洗,
沉默中觅人生理。
胡氏芳名世代记,
旗帜半垂国人泣。
大爱无疆超生死,
一念救人照月日。
愿君安息归故里,
家人莫悲天下悯。
吴门故地姑苏际,
香花清酒祭英魂。
勇者浩气长存世,
感人义举青史立。
愿以诗文寄哀思,
颂扬英烈万古记。

 

Liwei's NLP Channel

Short video: The Secret of Large Models

llm secret

I am the LLM digital host. Today's topic is "The True Scripture of Large Models."

A quotation from Nick, to be recited three times daily: "First, the Church-Turing thesis. Second, inverting a Turing machine is all of learning. Third, all learning can be reduced to NTP (next token prediction)."

If you did not follow that, recite it three more times.

This is the true scripture of large models!

If after three more recitations you still do not follow, here is the explanation.

Simply put: for any computable data there is a Turing automaton that computes it. Learning (or model training) is the building of that automaton. And all learning can be accomplished by data-driven GPT.

That is why the large model is a universal task unlocker, the unrevealed secret of heaven on the road to AGI.

But the secret leaked anyway; perhaps God happened to doze off.

A friend says he still does not get it. Fine, let us try a version a grandmother could follow.

First, what is "computable data"? It is "phenomena with regularities": once a regular phenomenon is digitized, it becomes computable data.

Which phenomena have regularities?

In fact, the theory heavyweights believe most of the universe consists of random phenomena without regularities; across the vast reaches of space, regularity is the exception. The true face of the universe may well be chaos.

Yet humans appeared and evolved civilization and intelligence. Human phenomena, especially the outputs of intelligent work, texts and books, audio and video recordings, artistic creations, are all law-governed. Language has grammar and usage; painting has technique and style.

Gather this multimodal data, language and text, audio and video, and throw it into the furnace of a large model, and what comes out is the nearly omnipotent mainstream large model we see today, exemplified by OpenAI's GPT series.

I am the tech digital host. See you next time.

 

 

 

Liwei's NLP Channel

Nick traces AI history to its theoretical foundations and sources (English)

Click this link: https://v.douyin.com/i6MbUvKH/

 

I am AI Xiao Fan, Nick's secretary, and today I'm reporting on Nick's latest lecture "Solomonoff: The Prophet of Large Language Models".

Nick needs no introduction. Besides his many roles as an entrepreneur, investor, scholar, and philosopher, he is best known for his bestselling book "A Brief History of Artificial Intelligence", which became a sensation, sold out quickly, won numerous awards, and became a legend in China's AI publishing world. We all boast about getting his autographed copies.

The following is a concise and accessible explanation of his lecture.

Let's get to know this mathematical genius with a Santa Claus-like white beard: Ray Solomonoff! Born in 1926 and passed away in 2009, this math-and-physics double major who breezed through the University of Chicago was no ordinary academic overachiever. He was a pioneer of independent research, using mathematical formulas to predict the future, even more impressive than a fortune teller!

Welcome to the battle of science's 'mischievous old masters'! On the left is Wiener, the 'godfather' of cybernetics. In 1948, he and Shannon published their landmark works almost simultaneously, but with very different viewpoints. Wiener said 'control is the way', while others became infatuated with that little 'demon' called information. Shannon and McCarthy were like-minded, and neither thought much of Wiener's cybernetics. McCarthy even played a word game, turning 'Automata' into 'AI', ushering in a new era of artificial intelligence!

Now let's look at the 'prequel' of the AI world! Before the AI feast of the Dartmouth Conference, the big shot McCarthy was secretly writing the 'script'! His article "The inversion of functions defined by Turing machines" wasn't about how to use Turing machines backwards. This 'heavenly book' was actually discussing how to design a super problem-solving machine. McCarthy's imagined divine machine could solve all clearly defined intellectual problems. Isn't this the prototype of AI?

At the Dartmouth Conference, McCarthy and Solomonoff, these two 'mathematical knights', engaged in a fierce 'battle of ideas'! The topic? It was McCarthy's 'heavenly book'. The two hit it off and discovered an earth-shattering secret: the inverse problem of Turing machines is actually a learning problem! This discovery tightly bound AI and machine learning together! From then on, AI was no longer just about computation, but took a big step towards 'learning'. At this moment, the future of AI was completely rewritten!

"Let's look at the 'brainstorming' moments of two 'mad scientists'! First is the French mathematician Borel, who conducted a logical experiment, imagining a group of monkeys randomly hitting typewriters, eventually producing the complete works of Shakespeare! Isn't this the infinite monkey theorem?
On the other side, the Argentine literary giant Borges conceived a 'perfect library' in his short story, containing all possible combinations of books.
These two ideas are simply the prophets of AI and big data! Borel and Borges, one using mathematics, the other literature, were both imagining the sequential possibilities of information."

"At the Dartmouth Conference, Solomonoff, like a magician, pulled out a mysterious typescript 'Inductive Inference Machine' from his hat. This move captivated everyone! Scientists who were originally obsessed with neural networks all 'defected' and embraced symbolism. But look at this dramatic twist! Years later, it was the 'abandoned' neural networks that truly realized Solomonoff's induction! This is like a fairy tale in the tech world - Cinderella finally put on her glass slipper and became the star of the AI ball!
Solomonoff's idea was like a seed planted, eventually blossoming in unexpected places."

"Let's look at the 'roller coaster' history of the AI world! Connectionism, once an 'abandoned baby', is now the 'star' of the AI world!
Imagine this as a long relay race. At the start, there was the perceptron inspired by neurons, fearless like a newborn calf. But it soon met its 'Waterloo' with the so-called XOR problem of single-layer neural networks, and was 'banished' by the big shots.
However, in the 1980s, multi-layer neural networks and the BP algorithm emerged out of nowhere, injecting new life into connectionism. Now, deep learning is at its peak, and connectionism has made a 'dramatic comeback', becoming the 'top flow' in the AI world.

"Let's look at Solomonoff's 'magic moment' in 1960!
The first magic, minimum description, refers to compressing data in the most concise way. This idea later developed into 'Kolmogorov complexity', that is, K-complexity, becoming the core of large model theory.
The second magic, prior probability: the initial estimate of the possibility of an event occurring without specific information.
These two concepts seem simple, but contain profound insights. They provide a whole new perspective for us to understand information, complexity and learning, directly influencing the later development of artificial intelligence and machine learning"
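The two tricks fuse into what is now called the Solomonoff universal prior. One standard way to write it (a textbook form, not a formula taken from the lecture itself) is

    M(x) = \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-|p|}

where U is a universal prefix Turing machine and |p| is the length of program p: shorter programs, i.e. simpler explanations, contribute exponentially more weight, so simplicity and prior probability become two faces of the same coin.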

In 1961, AI guru Minsky wrote an important article mentioning concepts such as machine theorem proving, neural networks, machine learning, reinforcement learning, etc., which was simply the secret manual of the AI world! He cited 95 references, 4 of which were Solomonoff's, showing his high regard for Solomonoff. Interestingly, it was neural networks that first realized Solomonoff Induction, which is an unexpected twist!

In 1964, Solomonoff published a groundbreaking paper titled "A Formal Theory of Inductive Inference". This paper can be considered the "secret manual" of the AI field, detailing how to describe inductive reasoning using mathematical language. Simply put, it's about learning patterns from data to predict the future! This paper is Solomonoff's "masterpiece" on inductive reasoning, establishing his status in the machine learning field.
The second part of Solomonoff's paper gives examples of applying the formal theory of inductive inference to different problems. One of these examples is grammar discovery, that is, how to learn the grammatical rules of a language from observed language data. This example, in today's view, is the problem of language learning, i.e., how machines learn language like humans do. Solomonoff also discussed a deeper question in the paper: Is language equivalent to thought? This question still doesn't have a clear answer today, but Solomonoff's research provided us with a new perspective to think about this question.

Solomonoff developed a strong interest in how scientists discover things and tried to find a universal method of scientific discovery. This interest led him to start researching inductive reasoning and eventually propose the concept of algorithmic probability.
In his academic career, Solomonoff applied inductive reasoning to fields such as language learning, achieving important results.

Soviet mathematician Andrey Kolmogorov is known as the "universal mathematician". In computer science he made two major contributions:
Kolmogorov-Arnold superposition theorem (the "K-A-N" result): related to Hilbert's famous 13th problem, it concerns the representation and approximation of functions.
K-complexity: This is a method of measuring information complexity. It defines the complexity of an object as the length of the shortest program that can generate that object.
In addition, Kolmogorov had unique insights into cybernetics and information theory. He believed that cybernetics lacked inherent unity, but expressed agreement with information theory. This view is consistent with those of Shannon, McCarthy, and others.

Kolmogorov thought that information theory was like a hodgepodge, with three different approaches:
Counting School: Like rolling dice, looking at how many times a certain number appears.
Building Blocks School: Focusing on the number of building blocks and how to combine them.
Programming School: Viewing information as a program, with shorter programs being simpler.
K-complexity is the representative work of the "Programming School". Simply put, it measures how complex something is by how short a program is needed to describe it.
Interestingly, K-complexity and Solomonoff induction are actually talking about the same thing. Solomonoff induction believes that simpler things are more likely to occur.
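K-complexity itself is uncomputable, but a crude, computable stand-in makes the "Programming School" intuition concrete: an off-the-shelf compressor gives an upper bound on the length of the shortest description. A minimal sketch (using Python's standard zlib; the exact numbers are only illustrative):

    import zlib, random

    def compressed_size(s: bytes) -> int:
        # Length of the zlib-compressed form: a computable upper bound
        # on the (uncomputable) Kolmogorov complexity of s.
        return len(zlib.compress(s, 9))

    regular = b"ab" * 5000                     # highly regular: generated by a tiny "program"
    random.seed(0)
    noise = bytes(random.getrandbits(8) for _ in range(10000))  # pseudo-random bytes

    print(len(regular), compressed_size(regular))  # shrinks to a tiny fraction of its length
    print(len(noise), compressed_size(noise))      # stays close to its original length

The regular string compresses to almost nothing while the noise barely compresses, which is exactly the sense in which "simpler things" carry less irreducible information.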

Chaitin was a prodigy, publishing his first paper in IEEE Transactions on Electronic Computers at the age of 18. At 19, he independently rediscovered the ideas of Solomonoff and Kolmogorov in a paper published in JACM.
Starting from Berry's paradox, Chaitin believed that naming an integer is equivalent to writing a program that can output this integer. Most integers can only be named by directly printing themselves, with no more concise representation method. These integers are viewed as "random" under the framework of Kolmogorov complexity because their complexity is comparable to their length. Chaitin's view is consistent with Kolmogorov's idea, both emphasizing that most objects (or integers) are incompressible, i.e., their complexity is comparable to their length. This means they have no simpler representation method and cannot be concisely explained.
This inexplicability or randomness is ubiquitous in nature. For example, most DNA sequences, physical constants, and natural phenomena have no obvious patterns to follow and cannot be explained by simple formulas or theories. On the contrary, explicability (i.e., phenomena that can be described or explained in a concise way) only appears occasionally.
Leonid Levin proved two theorems in a two-page paper published in 1972:
Theorem 1: NP-completeness, i.e., the Cook-Levin theorem, which made an important contribution to the development of computational complexity theory.
Theorem 2: A generalization of Kolmogorov complexity.

Charles Bennett proposed the concept of logical depth, which considers the running time of the shortest program needed to generate an object. The parameters of large language models can be seen as the amount of information stored internally in the model. Therefore, it is reasonable to compare model parameters to K-complexity. It is also reasonable to compare the inference time of large language models to logical depth.
Ming Li is a distinguished professor at the University of Waterloo who has made outstanding contributions in the fields of information theory and bioinformatics. He extended K-complexity from a single sequence to two sequences, which can measure not only the information within a single sequence but also the information between two sequences. This is of great significance for universal large models to define universal tasks and complete various tasks through unsupervised learning. His book "An Introduction to Kolmogorov Complexity and Its Applications", co-authored with Paul Vitanyi, is considered a classic in the field and has had a profound impact on the development of information science.
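The two-sequence extension has a well-known computable approximation, the normalized compression distance studied in the Li-Vitanyi tradition. The sketch below substitutes zlib for the uncomputable K-complexity, so treat the exact values as illustrative only:

    import zlib, random

    def c(b: bytes) -> int:
        return len(zlib.compress(b, 9))

    def ncd(x: bytes, y: bytes) -> float:
        # Normalized compression distance: a computable approximation of the
        # information distance between two sequences, with zlib standing in
        # for Kolmogorov complexity.
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    a = b"the cat sat on the mat " * 200
    b2 = b"the cat sat on a mat " * 200
    random.seed(1)
    z = bytes(random.getrandbits(8) for _ in range(4600))

    print(ncd(a, b2))  # small: the two texts share most of their information
    print(ncd(a, z))   # close to 1: almost no shared information

Near-duplicate sequences land near 0 and unrelated ones near 1, mirroring the idea that shared deep information is what makes two objects mutually convertible.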
Marcus Hutter is a computer scientist with a background in physics. He proposed the AIXI universal artificial intelligence framework and believes that language modeling is essentially compression. He applied Solomonoff induction to explain agents and reinforcement learning, believing that the learning process is a compression process, and is dedicated to researching universal artificial intelligence.
In his Berkeley lecture, Ilya, the former soul figure of OpenAI, revealed the connection between supervised learning and unsupervised or self-supervised learning. Ilya claimed that he independently came up with the idea in 2016 that all supervised learning can be reduced to self-supervised learning, tracing back to compression theory based on K-complexity. Ilya firmly believes that simple autoregressive GPT models can demonstrate super intelligence on super large data.
Let's review the timeline of model development: the Transformer architecture was proposed in June 2017, and the BERT model in October 2018. OpenAI's GPT series started in June 2018, successively launching GPT, GPT-2, and GPT-3, and now GPT-4, which has become the industry mainstream.
To summarize, the first step of Solomonoff induction is to collect observational data. The second step is to form hypotheses to explain the data: hypotheses can be a Turing machine or a data-driven large model. The third step is experimental verification. If the data falsifies, return to step 2 to form new hypotheses.
Large models follow this Solomonoff-induction recipe, both in how they are trained and in how they are applied at inference time.
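As a caricature of that three-step loop, here is a toy sketch in which the "hypotheses" are simply candidate repetition periods scored by a crude two-part description length (pattern length plus a penalty per exception). It is an illustrative stand-in: genuinely enumerating programs in Solomonoff's sense is uncomputable.

    # Toy Solomonoff-style loop: observe data, prefer the shortest-description
    # hypothesis that explains it, predict the next symbol, revise on new data.

    def description_length(data: str, period: int) -> int:
        pattern = data[:period]
        mismatches = sum(1 for i, ch in enumerate(data) if ch != pattern[i % period])
        return period + 2 * mismatches   # shorter pattern, fewer exceptions = simpler hypothesis

    def best_period(data: str) -> int:
        return min(range(1, len(data) + 1), key=lambda p: description_length(data, p))

    def predict_next(data: str) -> str:
        p = best_period(data)            # step 2: pick the simplest surviving hypothesis
        return data[len(data) % p]       # step 3: use it to predict

    for observed in ("abcab", "abcabc", "abcabcabca"):   # step 1: growing observations
        print(observed, "->", predict_next(observed))

Any new observation that breaks the current pattern simply triggers a re-fit over the enlarged data, which is the "return to step 2" of the recipe.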

Looking back at the entire history, perhaps it's not that theory lagged behind practice, but that it was too far ahead.

I am Xiao Fan, Nick's digital secretary. Thank you for following Nick's journey to explore the theoretical origins of large models and the historical changes in AI. We'll meet again.

 

立委NLP频道

Not to be missed: Qimiaoyuan's digital host Xiao Fan presents Nick's latest lecture, tracing the theoretical foundations and origins of large models (Chinese version with English subtitles)

Video link: https://v.douyin.com/i6Mq91wS/

 

 

我是AI小范,尼克的秘书,今天播报解说尼克最新的讲演《所罗门诺夫:大语言模型的先知》。

尼克就不用介绍了,除了他的企业家、投资人、学者和哲学家的许多角色,他最知名的著述是其畅销书《人工智能简史》,风靡一时,洛阳纸贵,拿奖拿到手软,成为中国AI出版界的传奇。我们都为得到他的亲笔签名版而炫耀。

下面是对他讲演的一个深入浅出的解说。

让我们来认识一下这位留着圣诞老人般白胡子的数学怪才——Ray Solomonoff!1926年出生,2009年离世,这位在芝加哥大学混学位的数学物理双料高手,可不是普通的学霸。他是独立研究的先驱,用数学公式预测未来,比算命先生还厉害!

欢迎来到科学界的'老顽童'大战!左边这位是维纳,控制论的'教父'。1948年,他和香农同时发表重磅论文,但观点大不同!维纳说:'控制才是王道',而其他人却迷恋上了'信息'这个小妖精。香农和麦卡锡倒是志同道合,都不看好维纳的控制论。麦卡锡更是玩了个文字游戏,把'Automata'变成了'AI',开启了人工智能新纪元!

现在我们来看看AI界的'前传'!在达特茅斯会议这场AI盛宴前,麦卡锡大佬就在偷偷摸摸写'剧本'啦!他的文章《The inversion of functions defined by Turing machines》可不是在讲怎么把图灵机倒过来用。这篇'天书'其实在讨论如何设计一台超级解题机器。麦卡锡想象中的这台神机,能解决所有明确定义的智力问题。这不就是AI的雏形吗?"

达特茅斯会议上,麦卡锡和所罗门诺夫这两位'数学侠'展开了一场激烈的'思想格斗'!话题?就是麦卡锡那篇'天书'。他们俩一拍即合,发现了个惊天秘密:图灵机求逆问题其实就是学习问题!这个发现把AI和机器学习紧紧绑在了一起!从此,AI不再是单纯的计算,而是朝着'学习'迈出了一大步。这一刻,AI的未来被彻底改写了!

"让我们来看看两位'疯狂科学家'的脑洞大开时刻!首先是法国数学家博雷尔,他做了个逻辑实验,想象一群猴子在打字机上随机敲击,最终能打出莎士比亚全集!这不就是无限猴子定理吗?
另一边,阿根廷文豪博尔赫斯在小说里构思了个'完美图书馆',里面包含了所有可能的书籍组合。
这两个想法简直就是AI和大数据的先知啊!博雷尔和博尔赫斯,一个用数学,一个用文学,都在畅想信息的序列可能性。"
"达特茅斯会议上,所罗门诺夫像个魔术师,从帽子里拿出了一份神秘打字稿《归纳推理机器》。这一手可把大家迷住了!原本痴迷神经网络的科学家们纷纷'叛变',投入了符号主义的怀抱。但是,瞧瞧这戏剧性的转折!多年后,真正实现所罗门诺夫归纳的,竟然是被'抛弃'的神经网络!这简直就像是科技界的童话故事——灰姑娘终于穿上了水晶鞋,成为了AI舞会的主角!
所罗门诺夫的想法,就像是种下的一颗种子,最终在意想不到的地方开花结果。"

"让我们来看看AI界的'过山车'历史!连接主义,这个曾经的'弃婴',现在可是AI界的'明星'啦!
想象一下,这是一场漫长的接力赛。起跑时,是受神经元启发的感知器,像个初生牛犊不怕虎。但很快就遇到了所谓单层神经的XOR的'滑铁卢',被大佬们打入冷宫'。
但是,上世纪80年代,多层神经和BP算法横空出世,给连接主义注入了新的生命力。如今,深度学习更是如日中天,连接主义'咸鱼翻身',成为了AI界的'顶流'。

"让我们来看看1960年所罗门诺夫的'魔法时刻'!
第一个魔法,最小描述,这是指用最简洁的方式压缩数据。这一思想后来发展成为了'柯尔莫戈罗夫复杂度',即,K氏复杂度,成为大模型理论的核心。
第二个魔法,先验概率:在没有特定信息的情况下,对事件发生可能性的初始估计。
这两个概念看似简单,却蕴含深刻的洞见。它们为我们理解信息、复杂性和学习提供了全新的视角,直接影响了后来人工智能和机器学习的发展方向"

1961年,人工智能大神Minsky写了一篇重要文章,提到了机器定理证明、神经网络、机器学习、强化学习等概念,简直是AI界的武林秘籍!他引用了95篇参考文献,其中4篇是所罗门诺夫的,可见他对所罗门诺夫的重视。有趣的是,最终是神经网络率先实现了所罗门诺夫Induction,这真是个令人意外的转折!

1964年,所罗门诺夫发表了一篇惊世骇俗的论文,名为《归纳推理的形式理论》。这篇论文堪称 AI 领域的“武林秘籍”,详细介绍了如何用数学语言来描述归纳推理。简单来说,就是从数据中学习规律,预测未来!这篇论文可是 所罗门诺夫归纳推理的“扛鼎之作”,奠定了他在机器学习领域的江湖地位。
Solomonoff 的这篇论文第二部分给出了归纳推理的形式理论在不同问题上的应用示例。其中一个例子就是语法发现,即如何从观测到的语言数据中学习语言的语法规则。这个例子在今天看来,就是语言学习的问题,即机器如何像人类一样学习语言。所罗门诺夫 在论文中还探讨了一个更深层次的问题:语言是否等于思维?这个问题至今仍然没有明确的答案,但所罗门诺夫的研究为我们思考这个问题提供了一个新的视角。

所罗门诺夫对科学家如何发现事物产生浓厚的兴趣,并试图找到一种通用的科学发现方法。这个兴趣促使他开始研究归纳推理,并最终提出了算法概率的概念。
在学术生涯中,所罗门诺夫将归纳推理应用于语言学习等领域,取得了重要的成果。

苏联数学家安德烈·柯尔莫哥洛夫被誉为“万能数学家”。在计算机科学领域,他主要有两大贡献:
Kolmogorov K氏叠加定理 (K-A-N): 该定理与著名的希尔伯特第13问题有关,涉及到函数的表示与逼近。
K氏复杂度: 这是一种衡量信息复杂性的方法。它将一个对象的复杂度定义为生成该对象的最短程序的长度。
此外,柯尔莫哥洛夫对控制论和信息论也有独特的见解。他认为控制论缺乏内在的统一性,但对信息论表示认同。这种观点与香农、麦卡锡等人的看法一致。

柯尔莫哥洛夫认为,信息论就像个大杂烩,有三种不同的做法:
数数派: 就像掷骰子,看某个点数出现的次数。
搭积木派: 关注积木块的数量和怎么组合。
写程序派: 把信息看成程序,越短的程序越简单。
K氏复杂性就是“写程序派”的代表作。简单来说,它就是衡量一个东西有多复杂,需要多短的程序才能把它描述出来。
有趣的是,K氏复杂性和所罗门诺夫归纳其实说的是一回事。所罗门诺夫归纳认为,简单的东西更容易出现。

蔡廷年少有为,18岁就在IEEE Transactions on Electronic Computers上发表了第一篇论文。19岁时,他在JACM上发表的论文独立地重新发现了所罗门诺夫和柯尔莫哥洛夫的思想。
蔡廷从贝里悖论出发,他认为,命名一个整数就相当于写出一个能输出这个整数的程序。大多数整数只能通过直接打印自身的方式来命名,没有更简洁的表示方法。这些整数在柯尔莫哥洛夫复杂性的框架下被视为“随机的”,因为它们的复杂性与它们的长度相当。蔡廷的观点与柯尔莫哥洛夫的思想一致,都强调了大多数对象(或整数)是不可压缩的,即它们的复杂性与它们的长度相当。这意味着它们没有更简单的表示方法,无法简洁的解释。
这种不可解释性或随机性在自然界中普遍存在。例如,大多数DNA序列、物理常数和自然现象没有明显的规律可循,无法用简单的公式或理论来解释。相反,可解释性(即可以用简洁的方式描述或解释的现象)只是偶然出现的。
列昂尼德·列文在1972年发表的一篇两页纸的论文中证明了两个定理:
定理1: 即NP-完全性,即Cook-Levin定理,为计算复杂性理论的发展做出了重要贡献。
定理2: 是柯尔莫哥洛夫复杂性的一个推广。

Charles Bennett提出了逻辑深度的概念,它考虑了生成一个对象所需的最短程序的运行时间。大语言模型的参数可以看作是模型内部存储的信息量。因此,将模型参数比作柯氏复杂度是合理的。大语言模型的推理时间比作逻辑深度也是合理的。
李明是滑铁卢大学的杰出教授,在信息论和生物信息学领域做出了卓越贡献。他将K氏复杂性从单个序列扩展到两个序列,不仅可以测量单个序列内的信息,还可以测量两个序列之间的信息,这对通用大模型定义万能任务及其非监督学习完成各种任务意义重大。他与Paul Vitanyi合著的《An Introduction to Kolmogorov Complexity and Its Applications》被认为是该领域的经典著作,对信息科学的发展产生了深远影响。
Marcus Hutter是一位物理学家出身的计算机科学家,他提出了AIXI通用人工智能框架,并认为语言建模本质上就是压缩。他将所罗门诺夫归纳用于解释智能体和强化学习,认为学习过程就是压缩过程,并致力于研究通用人工智能。
Open AI 前灵魂人物伊利亚在伯克利演讲中,揭示监督学习与非监督或曰自监督学习的联系。伊利亚声称他在2016年独立想到了所有监督学习可以被归约为自监督学习的观点,并追溯到K氏复杂度为基础的压缩理论。伊利亚笃信简单的自回归GPT模型可以在超大数据展现超级智能。
回顾一下模型发展的时间线:深度神经Transformer架构于2017年6月提出,BERT模型于2018年10月提出。OpenAI的GPT系列模型从2018年6月开始,陆续推出了GPT、GPT2和GPT3,现在到了GPT4,成为业界主流。
总结一下,所罗门诺夫归纳第一步是收集观察数据。第二步形成假设解释数据: 假设可以是一个图灵机或一个数据驱动的大模型。第三步进行实验验证。如果数据证伪,则返回步骤2形成新的假设。
大模型遵循的是所罗门诺夫归纳的路线训练模型及其推理应用。

回顾整个历史,也许不是理论落后于实践,而是太超前了。

我是小范,尼克的数字秘书,谢谢您一路跟随尼克探寻大模型理论渊源以及AI历史风云的过程。咱们后会有期。

 

立委NLP频道

Worth keeping as a record.

A compilation of talks and interviews on large models and AIGC, 2023-2024

2023-03-05:源码资本AIGC“尖峰系列”丨李维博士:人类语言“通天塔”建成,ChatGPT的辉煌与挑战

2023/05/09: 腾讯科技,对话产业专家:“百模大战”愈演愈烈,地球上算力不够支撑10亿人使用

2023/07/16: 轩辕之学讲座,特邀讲座《大模型的历史、现状和挑战》

2023/06/28: 中国证券报报道 360 GPT 联盟特邀演讲:李维:避免大模型行业同质化竞争 深耕领域阶段有很多“细活”可做

2023/07/21: ArchSummit 全球架构师峰会主题演讲《从架构师视角谈如何迎接/拥抱大模型及其新生态》(https://blog.csdn.net/EDDYCJY/article/details/131929057)

2023/07/28: Chinajoy,中国音像与数字出版协会和上海汉威信恒展览有限公司主办,特邀演讲《AIGC时代,数字人引领内容创作新范式》

2023/08/06: 亚马逊云科技和真格基金主办的闭门会“机遇与挑战——生成式AI的发展趋势和落地应用”,主题演讲《AI大模型落地的前景和痛点》

2023/08/23: 江苏师范大学主办 “大语言模型离人类语言有多远”学术研讨会特邀演讲《大模型的本性及其落地痛点》

2023/10/20: 第十一届科博会,特邀主题演讲《AI和大模型趋势》

2024年3月18:英伟达 GTC2024全球开发者大会 与英伟达中国团队合作,讲演《重塑数字人互动潜能》

2024/04/22: 第十四届北京国际电影节光影未来,演讲《捕捉未来的魔术师》

2023/11/11: 香蕉论坛,“百模大战”对智能座舱发展意义与实践

2023/12/05: 汽车商业评论,《李维:大模型技术的过去、现在与未来》

2024/04/22:九派财经专访出门问问副总裁李维:两年内大模型的商业化爆发一定会发生

2024/05/24: 九派财经专访出门问问副总裁李维:找产品方向是更大痛点,跑通场景是第一位

《飞哥说AI》(Feige Talks AI) in-depth series:

2023/12/09: 飞哥说AI,李维 高佳:OpenAI狂飙生态? 领域壁垒仍是大山

2023/12/09: 飞哥说AI,李志飞 高佳 李维:关于 Google Gemini 的八点启示

2024/02/19: 飞哥说AI,李志飞 李维 高佳:为什么说 Sora 是世界的模拟器?

2024/02/29: 飞哥说AI,李志飞 李维 高佳:万字长文解构中国如何复刻 Sora:模型架构、参数规模、数据规模、训练成本

2024/03/05: 飞哥说AI,李志飞 李维:Claude 3 追平或超越 GPT-4,能归纳15万单词

2024/04/01: 飞哥说AI,李维 王艺 高佳:Suno 将如何进化?

2024/04/12: 飞哥说AI,高佳 李维 王艺「产模结合」也许是AI公司更好的出路?

2024/06/04: 飞哥说AI,李维 高佳:KAN会引起大模型的范式转变吗?

 

立委NLP频道

"On Two Kinds of Complexity": a short science video

I am Xiaoya, the digital host of the AIGC channel. Today's two-minute piece from Liwei is titled "Two Kinds of Complexity".
We need to distinguish two kinds of "complexity". One is Kolmogorov complexity (K-complexity), the length of the shortest instructions needed to describe an object. The other is model complexity, the number of parameters or the flexibility (the size) of a machine learning model itself.
When we say the most concise representation of data is its K-complexity, we mean the first kind. K-complexity measures the intrinsic information content of the data (akin to what used to be called deep structure or semantic representation), not its surface form. A seemingly random string, if it was in fact generated by a simple program, has low K-complexity. In other words, K-complexity reflects the regularity and compressibility of the data.
When a machine learning model has too many parameters and too much flexibility, it may overfit the noise in the training data, and its ability to generalize to new data degrades. We then say the model is too "complex" and needs its scale reduced, or techniques such as regularization applied, to limit its complexity.
A good unsupervised learning model should learn a representation close to the K-complexity of the data (the ideal of lossless compression). That is, the model should capture the data's intrinsic regularities and represent the data as concisely as possible, rather than simply memorizing its surface form.
Why does this matter?
Put differently, why is the relentless pursuit of the ideal of lossless compression the key to intelligence?
Lossless compression of information (note: not lossless compression of the traditional surface modality, as in lossless audio, which compresses only the form and differs from the lossless compression of information content discussed here) has K-complexity as its ceiling. In other words, the K-complexity of any data object is, in effect, its deep semantics. At the level of deep semantics, the shared meaning of different data can be naturally aligned, which helps realize conversion between different data. This is the information-theoretic explanation of why a large model, as a compressor, can complete any end-to-end AI task (i.e., approximate any function).
Conversion between forms as an intelligent task, whether machine translation, text-to-image, image-to-image, text-to-music, text-to-video, image-to-video, image captioning, and so on, carries a presupposition: that input and output have similar meaning, that is, a K-complexity that can be shared or aligned.
Completely unrelated forms cannot be converted or mapped into each other, and will not share the same K-complexity. An electrocardiogram and a symphony, for instance, are incompatible: to each other, their respective K-complexity amounts to random noise, with no shared information to ground a mapping. When we say a large model is like a magician that can turn any form into any other form, there is an implicit premise: the forms being converted share an alignable, meaning-compatible deep structure, and they are mutually relatable in semantic space. This presupposition usually goes unstated because it is self-evident, and in real AI applications the task a human wants the model to accomplish naturally respects it. Otherwise, an instruction to convert arbitrarily would be an incomprehensible goal; if humans themselves cannot say what the goal is, it is not a proper "task", and no gold standard for achieving it could be given.
Once a general-purpose large model can compress both the input pattern and the output pattern well, only a thin veil separates their correspondence in semantic space. The success of large models in multilingual machine translation is the most typical manifestation of this account.
When compressing a dataset C, the target is of course the K-complexity of C, and the complexity of the model F must be just right: F can be neither too large nor too small. Too large, say a model larger than the dataset, and it overfits and cannot approach the K-complexity. Too small, and the model cannot "digest" the data, so good compression is impossible either. The scaling law is an empirical rule, but theory dictates that there is a fitting boundary between big models and big data; the concrete ratio is decided by experiments on actual performance.
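The "neither too large nor too small" point can be seen in miniature with polynomial fitting, where model complexity is just the number of coefficients. A hedged sketch (NumPy, synthetic data, numbers illustrative only):

    import numpy as np
    from numpy.polynomial import Polynomial

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 30)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)          # regular signal plus noise
    x_new = np.linspace(0, 1, 200)
    y_new = np.sin(2 * np.pi * x_new) + rng.normal(0, 0.2, x_new.size)  # fresh data from the same source

    for degree in (1, 4, 15):
        model = Polynomial.fit(x, y, degree)        # "model size" = degree + 1 coefficients
        train_err = float(np.mean((model(x) - y) ** 2))
        test_err = float(np.mean((model(x_new) - y_new) ** 2))
        print(degree, round(train_err, 3), round(test_err, 3))

The degree-1 model underfits, the moderate one tracks the underlying regularity, and the high-degree one drives training error toward zero while typically generalizing worse: the small-scale analogue of a model outgrowing its data instead of compressing toward its regularity.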
I am Xiaoya. In two minutes each time, I share Liwei's pointed thinking on large AI models. Thanks for listening; see you next time.

 


 

"The Mathematical Principles of Supervised Learning": a short science video


https://v.douyin.com/ijouUSBq/

I am Xiaoya of the AI large-model channel. Today I present Liwei's short commentary "The Mathematical Principles of Supervised Learning".
Ilya, formerly the soul of OpenAI, once gave a talk at Berkeley. It is a lecture destined for the history books, explaining the principle behind GPT's unsupervised-learning breakthrough. The opening part covers the mathematical principles of supervised learning, and that easier piece of knowledge is where we start today.
The point is that big data plus a big model means a strong model. The aesthetics of brute force: the beauty lies precisely in bigness. As an aside, this seems to echo the Tang-dynasty taste: plump is beautiful? Heh heh...
The mathematical guarantee of supervised learning means: with enough data and enough parameters, accuracy will be high enough. Wary of overfitting, Ilya specifically stressed the industry common sense that the data scale must exceed the model scale. When a model overfits, the training error is small but the test error will not come down. Keeping data much larger than the model is a necessary condition for the training error to reflect the model's real capability (i.e., the test error). Otherwise the model need not perform genuine abstraction or "compression" and need not look for regularities; it simply memorizes everything. A rote-memorizing model, as we know, lacks generalization: it can score high on the training set but goes blind at test time, and its quality cannot be guaranteed. That is why the governing formula necessarily contains model size as a variable; in fact it is a decisive factor. Growth in data volume drives the deviation down exponentially, guaranteeing a continued rise in model quality.
Ilya says supervised learning is unproblematic; it comes with a theoretical guarantee. The theory is the Hoeffding inequality of statistical learning, which says that as the number of training samples grows, the probability that the test deviation exceeds a given threshold drops exponentially. Its main implication: when the training data is large enough, the training error low enough, and the training data far larger than the model, the model's generalization and accuracy are guaranteed. That is the theoretical basis on which supervised learning works.
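For a finite hypothesis class H and N training samples, the textbook form of that guarantee (a standard statement, not necessarily the exact slide from the lecture) reads

    \Pr\big[\exists\, h \in \mathcal{H} : |\mathrm{err}_{\mathrm{test}}(h) - \mathrm{err}_{\mathrm{train}}(h)| > \varepsilon\big] \;\le\; 2\,|\mathcal{H}|\, e^{-2\varepsilon^2 N}

so as long as N grows much faster than log|H| (data much larger than model), the chance of a large train-test gap decays exponentially.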
We have actually known this for a long time. First, at the macro and theoretical level, the Universal Approximation Theorem long ago established that deep neural networks can approximate arbitrary functions; this is really the wellspring of the scaling law. Second, in the history of contemporary AI, the deep learning revolution of a dozen years ago began proving that, given enough labeled data, a neural network can "turn an old hen into a duck", or perform any other transformation.
But concretely, why do large models keep stressing "large" (a large model means many parameters), yet now the message seems to be that they cannot be too large? Because when the data is extremely large, a model that is not large enough cannot digest it. Traditional machine learning in the pre-LLM era had exactly this problem: the algorithms were, on the whole, not complex enough and the models too small, so no matter how much data you fed them it did not help; their stomachs were small, they could not take it in, and quality easily got stuck at some point. Deep neural networks solved this in principle: you can always decide how large and how deep a network must be to best fit your data. That is one side of why large models must be large. But they also must not be too large; they must keep their distance from the data scale. In other words, big data must be bigger than the big model; only then is the scaling fit effective.
When ChatGPT came out, the model's parameters and its data were of the same order of magnitude, which in hindsight was a kind of overfitting. A major reason GPT-4 surpasses ChatGPT-3.5 may simply be that this ratio was adjusted.
I am Xiaoya. What I shared above is Liwei's pointed, few-minute reflection on large models; I hope it inspires you. See you next time.


"On End-to-End and Large Models": a short science video

I am Xiaoya of the tech channel. Today I present Liwei's short commentary "On End-to-End".

End-to-end rose to fame with the deep learning revolution. That was over a decade ago: large models had not yet been born, but deep neural networks had already made their revolution by solving end-to-end supervised learning. That wave of supervised learning swept image recognition, speech, and machine translation, but hit a bottleneck on most NLP tasks.
That is, deep learning can be viewed as a black box, and any AI task can be defined by specifying the input and output at the two ends of the box. "Solving supervised learning" means that, given enough aligned input-output data, so-called labeled data, a deep neural network can handle it. This differs from the pre-deep-learning era, when traditional machine learning algorithms could not, even with massive aligned data, because they would learn up to a point and get stuck there, lacking the digestive capacity of deep networks' big stomachs. Behind deep learning's ability stands the famous Universal Approximation Theorem: with enough data and enough model parameters, the accuracy of a multi-layer neural network can in theory keep improving.
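The Universal Approximation Theorem mentioned here has a standard statement (quoted in its common textbook form, not from the original commentary): for any continuous target f on a compact set K and any tolerance, a single hidden layer with a non-polynomial activation σ suffices,

    \forall \varepsilon > 0 \;\; \exists\, g(x) = \sum_{i=1}^{m} \alpha_i \, \sigma(w_i^{\top} x + b_i) \;\; \text{such that} \;\; \sup_{x \in K} |f(x) - g(x)| < \varepsilon,

with the width m allowed to grow as needed. That is the formal sense in which "enough parameters" buys arbitrarily small error.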

Before large language models (LLMs) appeared, however, only supervised learning could handle AI tasks defined end to end; unsupervised learning, lacking labeled data, still could not. Most NLP tasks, such as information extraction and question answering, fell into this category. This is the so-called labeled-data knowledge bottleneck of NLP.

The arrival of Transformer-based large models, and especially GPT's autoregressive learning by next token prediction, broke through this knowledge bottleneck. From then on, unsupervised learning could also handle AI tasks defined end to end, sweeping the NLP tasks one by one: machine translation, summarization, question answering, chat, writing assistance, and more. In recent months, with multimodal data added to large models, the same architecture and principle have largely conquered text-to-image, image-to-image, text-to-music, image captioning, text-to-video, image-to-video, and so on.

Basically, for a unified multimodal large model, given enough raw multimodal data, any modality can in principle be converted into any other modality. End-to-end has thus become the most typical way of defining the various AI tasks that large models complete on big data.

How was this achieved? The theory behind it is Kolmogorov complexity, also known as Solomonoff induction. Ilya's Berkeley lecture is, from beginning to end, making exactly this point: unsupervised learning, like supervised learning, can handle any end-to-end intelligent task. Given enough raw corpora (which need no annotation), GPT can do the alignment by itself, with at most a thin layer of few-shot fine-tuning on top to trigger that alignment.

To sum up: end-to-end is the universal paradigm for defining AI tasks. Deep neural networks, supported by enough data, can handle end-to-end tasks. Specifically, supervised learning needs enough labeled data, and unsupervised learning needs enough raw data. Large models are the latest mainstream of unsupervised learning conquering AI, because raw data is what can truly scale, so unsupervised (in fact "self-supervised") learning has naturally taken the leading role.

I am Xiaoya. What I shared above is Liwei's two-minute pointed reflection; I hope it inspires you. See you next time.

I am Xiaoya of the tech channel. Today I present Liwei's short commentary "On End-to-End, Part 2".

Continuing the last installment, let's go deeper into the relationship between end-to-end and large models.

The claim that large models are the most reliable road to artificial general intelligence (AGI) carries a presupposition: that AGI can be completely defined end to end. If someone could show that certain aspects of human intelligence, say inspiration, love, or self-awareness, cannot be defined with end-to-end data, then we could at least conclude that the AGI reached via large models is incomplete, with some fundamental deficit relative to human intelligence.

But mounting such a challenge is not easy either. You may argue that some intelligence x (inspiration, consciousness, love) cannot be represented by end-to-end data, but then you must give x a clear formal definition by some other means, or you are merely playing with concepts. Your intelligence x must be falsifiable, not just asserted offhand to be the exclusive property of humans, beyond the reach of machine models. Has anyone ever clearly defined a non-end-to-end x?

Perhaps I am simply ill-informed, but I have not seen one. There is no shortage of talk about human intelligence, but apart from definitions via end-to-end data, which falsifiable definitions of intelligence are there?

That is end-to-end viewed from the task angle: as noted above, it has in effect become the universal paradigm for defining intelligent tasks.

Viewed from the angle of a system's internal construction and how it realizes an intelligent task, an end-to-end model stands in contrast to an explicitly modular pipeline model. This is the end-to-end system most people have in mind; its near synonym is "black box".

A pipeline system differs from an end-to-end system in that the modules inside a pipeline are connected by clearly definable intermediate results that serve as interfaces, so internal modules can be tested transparently. In a large model, or any multi-layer neural system, the internal interfaces are hard to define clearly. The feed-forward stack inside a large model looks like an internal pipeline too, but the interfaces between layers are hard to pin down; at best we have a rough idea. In principle, for instance, we know that later layers carry more abstract meaning or structural representations than earlier ones. This usually does not prevent us from reaching the goal of an end-to-end defined task, but interpretability and targeted debugging are indeed weaker than in a transparent pipeline of symbolic logic.

Whether a large model, traditional machine learning, or an antique symbolic-logic system, all can be compared head to head on a task defined by a shared end-to-end dataset: split the dataset in two, one part as the training or development set, the other as a blind test set. History's verdict is that large models, which can exploit raw big data far beyond the training set, win overwhelmingly. The secret behind this deserves a dedicated article; science communication has so far done far too little to reveal the inner workings of large models.

An old friend asks: here is a question. Why classify self-learning as unsupervised learning? Isn't NTP obviously supervised learning, using the next token as the answer each time to reduce the loss? It is just that this supervision data is special: it is left behind naturally in a human activity and needs no extra annotation.

That is an excellent question. In fact there are two definitions of unsupervised learning. One draws the line between supervised and unsupervised by whether manually labeled data is used. The other defines it by the learning objective: learning with an absolute gold standard is supervised; otherwise it is unsupervised, a typical task of the latter being clustering. The biggest difference between clustering and classification is that the former is unsupervised, with no predetermined outcome, while the latter starts from given labels and then classifies.

This topic is worth digging into. In terms of learning mechanism, "self-supervised learning" (that is, self-learning) is essentially also a form of supervised learning. The success of self-learning large models was in fact sealed the day the deep learning revolution conquered supervised learning: deep neural networks were heaven's pick, a destiny guaranteed by the Universal Approximation Theorem. Understanding this, though, requires extending the notion of a gold standard. Self-learning is in effect a combination of supervised and unsupervised learning.

First, in self-learning the data serves as its own standard. It belongs to regression-style training, with only positive examples and no negative ones, which is a hallmark of unsupervised learning. Self-learning is a regression-style predictive model, consistent with typical clustering in this respect: before learning, no fixed gold standard exists.

Second, on how self-learning exploits and maximizes supervised learning's capacity for unbounded approximation, that is, how it follows the scaling law underwritten by the Universal Approximation Theorem: at the implementation level, front-line researchers invented a trick called "masking", covering up part of the tokens and letting the system guess, rather like a fill-in-the-blank or word-chain game. This converts label-free self-learning into the paradigm of supervised learning with a target.

The famous autoregressive, word-chain-style training of next token prediction (NTP) is the most successful instance of masked learning, and it secured GPT's position as the mainstream champion. The secret inside is the theory of Kolmogorov complexity, which can be seen as a refined version of the Universal Approximation Theorem and provides the theoretical basis for general-purpose, unsupervised large models to handle end-to-end tasks.
It does not matter if the account above is hard to follow; it was never an easy theory. But the consequence is plain: whenever humans speak or write, they are labeling data for large models. Self-learning broke through the knowledge bottleneck of manual annotation, replacing organized human labor with naturally produced raw data. This advantage cannot be overstated: the ultimate "free lunch", the ceiling of the "free ride", the bridge from unsupervised to supervised learning. Its precondition is the generalization of the model (so-called "pre-training") and the zero-shot-ification of targeted tasks through fine-tuning. The point about few-shot fine-tuning enabling zero-shot use of large models has been raised many times before and explained by many, so I will not repeat it here.
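The masking/next-token trick above is easy to see in code. A minimal character-level sketch (real systems use subword tokens and far longer contexts; the function name and context size here are made up for illustration):

    # How raw, unlabeled text becomes supervised (context -> next token) pairs:
    # at every position, the human-written continuation itself is the label.

    def next_token_pairs(text: str, context_size: int = 8):
        pairs = []
        for i in range(1, len(text)):
            context = text[max(0, i - context_size):i]
            label = text[i]                   # the "free" label left behind by the writer
            pairs.append((context, label))
        return pairs

    for context, label in next_token_pairs("大模型就是压缩")[:4]:
        print(repr(context), "->", repr(label))

Every position in naturally occurring text yields one training pair, which is exactly the sense in which humans "label data for free" simply by writing.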

In short: and just like that, the miracle of cross-modal large models was born. The feed for cross-modal self-learning is no longer just humans speaking and writing: every naturally expressed intelligent activity, painting, lyric writing, composing, directing, editing, and more, produces data that feeds self-learning. Beyond that, things have now progressed to non-human output: data regenerated by models can also be fed back into self-learning for self-improvement.

I am Xiaoya. What I shared above is Liwei's few-minute pointed reflection; I hope it inspires you. See you next time.

 

 


Suno prompt engineering tips

 

chinese traditional folk

chinese traditional folk, slow, Ethereal, angelic voice, zheng

Chinese Folk. Percussive. Beat Drop. slow. guzheng

chinese folk. Inspired Pop Ballad. Clear Voice, Moderate tempo

chinese traditional folk, slow, Ethereal, angelic voice, zheng

 

 

 

 

Chinese Folk. Dark Pop. Percussive. Beat Drop

Chinese Folk. Pop Rock. Percussive. Beat Drop. guzheng

chinese Folk. Cinematic Rock. Epic

 

 

Mandarin (国语) playlists:

https://suno.com/playlist/945eb006-705b-4a74-a5b4-5d8e4afc4fad