Navigating the Probability Universe with GPT

Every sentence has a unique address in a probability universe, a number line from 0 to 1. GPT maps texts to these addresses, compressing them into compact codes. How does this cosmic navigation work, and why is it a breakthrough for compression?

Mapping Sentences to Intervals

Each sequence corresponds to a unique interval in [0, 1), with its length equal to the sequence’s probability. For “cat eats fish” (P(“cat”)=0.5, P(“eats” | “cat”)=0.7, P(“fish” | “cat eats”)=0.4), the interval is [0, 0.14), with length 0.5 * 0.7 * 0.4 = 0.14 (it starts at 0 because each token here happens to come first in the model’s cumulative ordering). Arithmetic coding narrows this interval step by step and outputs a binary number inside it. Decompression retraces the same path with the same model, reconstructing the text exactly. Why are these intervals unique?
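The narrowing step above can be sketched in a few lines. This is a minimal illustration, not a full codec: the `(cum_low, prob)` pairs are assumptions for this example, where `cum_low` is the cumulative probability of all tokens ranked before the chosen one (taken as 0 here, matching the [0, 0.14) interval in the text).

```python
def narrow_interval(steps):
    """Narrow [0, 1) once per token; return the final (low, high)."""
    low, width = 0.0, 1.0
    for cum_low, prob in steps:
        low += width * cum_low   # shift to the chosen token's sub-interval
        width *= prob            # shrink the interval by the token's probability
    return low, low + width

# P("cat")=0.5, P("eats"|"cat")=0.7, P("fish"|"cat eats")=0.4,
# each assumed first in its cumulative ordering (cum_low = 0).
steps = [(0.0, 0.5), (0.0, 0.7), (0.0, 0.4)]
low, high = narrow_interval(steps)
print(low, high)  # interval of length 0.5 * 0.7 * 0.4 = 0.14
```

Any binary fraction inside the final interval identifies the sequence; the decoder, running the same model, recovers each token by checking which sub-interval that number falls into.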

The Power of Information Theory

The interval’s length equals the sequence’s probability, and encoding the sequence takes about -log₂(probability) bits, so high-probability sequences need fewer bits (-log₂(0.14) ≈ 2.84 bits). This approaches Shannon’s entropy limit: the sharper GPT’s predictions, the fewer bits semantic data requires. Why does predictability reduce bit requirements?
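The bit count above follows directly from the formula. A quick check, with 0.001 as an assumed probability for a far less predictable sequence, for contrast:

```python
import math

def ideal_bits(p):
    """Shannon's ideal code length, in bits, for an event of probability p."""
    return -math.log2(p)

p_sequence = 0.5 * 0.7 * 0.4             # joint probability of "cat eats fish"
print(round(ideal_bits(p_sequence), 2))  # ~2.84 bits, as in the text

# A much less predictable sequence (assumed p = 0.001) needs far more bits:
print(round(ideal_bits(0.001), 2))       # ~9.97 bits
```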

Why It’s Revolutionary

Unlike traditional methods such as Huffman coding, which must spend at least one whole bit per symbol, GPT-based arithmetic coding handles continuous streams and leverages semantic patterns, making it ideal for text. What data types might benefit most, and how could this evolve with better models?
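The gap with Huffman coding is easy to quantify. A sketch, using an assumed probability of 0.99 for a token the model predicts with near certainty:

```python
import math

# Huffman coding assigns each symbol a whole-bit codeword, so even a
# near-certain symbol costs at least 1 bit. Arithmetic coding spends
# -log2(p) bits, which can be a small fraction of a bit.
p = 0.99                            # assumed probability of a near-certain token
arithmetic_bits = -math.log2(p)     # ~0.0145 bits
huffman_bits = 1                    # Huffman's floor: one whole bit
print(f"arithmetic: {arithmetic_bits:.4f} bits vs Huffman: {huffman_bits} bit")
```

Over a long stream of highly predictable tokens, these fractional savings compound, which is why a strong predictive model translates directly into compression gains.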

Original post: https://liweinlp.com/13275


Posted by

立委

Dr. 立委, senior consultant on multimodal large-model applications. Former VP of Engineering of the large-model team at 出门问问, focusing on large models and their AIGC applications. Previously Chief Scientist at Netbase for 10 years, where he directed the development of language understanding and application systems for 18 languages: robust, running at line speed, scaling up to social-media big data, and semantically grounded in sentiment-mining products, making the company a front-runner in industrial NLP deployment in the US. Before that, VP of R&D at Cymfony for eight years, where his team won first place in the first question-answering evaluation (TREC-8 QA Track) and served as PI for 17 SBIR information-extraction projects.
