Is GPT Compression Lossless or Lossy? The Truth Revealed

The claim that “compression is intelligence” sparks debate: does GPT compress data perfectly, or does it lose something along the way? Some argue it’s lossy, like a compressed JPEG, while others insist it’s lossless, restoring every bit. The answer hinges on a key distinction: GPT’s training versus its use as a compressor. Let’s unravel this mystery.

The Heart of Compression: Kolmogorov Complexity

Kolmogorov complexity defines the essence of a piece of data as the length of the shortest program that can generate it, an uncomputable ideal. GPT’s next-token prediction approximates this, acting like a “prophet” forecasting sequences from its world model. This predictive power derives from compression: a model that has compressed the world well predicts it well. But how does predicting the next word relate to shrinking data size?
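
In symbols, the link looks like this (standard notation, not from the original post: U is a universal machine, p a candidate program, P_model the probability GPT assigns):

```latex
% Kolmogorov complexity: length of the shortest program that outputs x.
K(x) = \min \{\, |p| : U(p) = x \,\}

% Code length achieved by an arithmetic coder driven by a predictive model:
|\mathrm{code}(x)| \approx -\log_2 P_{\mathrm{model}}(x)
                  = \sum_{t} -\log_2 P_{\mathrm{model}}(x_t \mid x_{<t})
```

The better each next token is predicted, the larger each conditional probability and the shorter the total code: prediction quality is compression ratio.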

Lossless Compression in Action

Using GPT to compress a target data sequence is lossless, meaning the original can be perfectly restored. Experiments like ts_zip (Fabrice Bellard) and Li Ming & Nick’s 2022-2023 work show GPT paired with arithmetic coding outperforming gzip, sometimes by 10x, a decisive margin in high-transmission-cost scenarios like interstellar communication. Here’s why it’s lossless:

  • Mechanism: GPT supplies next-token probabilities (e.g., P(“will” | “Artificial intelligence”) = 0.8), which arithmetic coding uses to encode the input sequence into a single binary number; decompression runs the same model to reverse the process, guaranteeing bit-level accuracy (see the sketch after this list).
  • Evidence: even rare, low-probability tokens are encoded, just at a higher cost of about −log₂ p bits each, so no information is ever discarded.
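
To make the round trip concrete, here is a minimal Python sketch of the mechanism. The bigram table MODEL, the start symbol <s>, and the float-based interval coder are illustrative stand-ins of my own, not the setup used by ts_zip; a production coder would query an actual LLM and use integer arithmetic to avoid precision loss on long inputs.

```python
import math

# Hypothetical toy next-token model standing in for GPT: P(next | previous).
# Each row sums to 1. A real compressor would query the LLM for these numbers.
MODEL = {
    "<s>":          {"artificial": 0.5, "the": 0.5},
    "artificial":   {"intelligence": 0.9, "life": 0.1},
    "intelligence": {"will": 0.8, "is": 0.2},
    "will":         {"compress": 0.6, "predict": 0.4},
}

def interval(prev, token):
    """Cumulative sub-interval [c, c+p) assigned to `token` after `prev`."""
    c = 0.0
    for t, p in sorted(MODEL[prev].items()):
        if t == token:
            return c, p
        c += p
    raise KeyError(token)

def encode(tokens):
    """Narrow [0, 1) once per token; return one number inside the result."""
    low, width, prev = 0.0, 1.0, "<s>"
    for tok in tokens:
        c, p = interval(prev, tok)
        low += c * width    # shift to this token's slice of the interval
        width *= p          # probable tokens shrink the interval less
        prev = tok
    return low + width / 2, width   # any point in the interval decodes back

def decode(value, n):
    """Replay the same model to recover exactly n tokens, bit for bit."""
    out, prev = [], "<s>"
    for _ in range(n):
        c = 0.0
        for t, p in sorted(MODEL[prev].items()):
            if c <= value < c + p:
                out.append(t)
                value = (value - c) / p   # rescale and continue
                prev = t
                break
            c += p
    return out

msg = ["artificial", "intelligence", "will", "compress"]
code, width = encode(msg)
assert decode(code, len(msg)) == msg    # perfect restoration
print(f"{-math.log2(width):.2f} bits")  # ≈ 2.21 bits for four tokens
```

The final interval’s width is the product of the per-token probabilities, so the message costs about −log₂(width) bits in total; swap in a low-probability token and the interval narrows sharply, which is exactly the “more bits for rare tokens” behavior described above.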

Why might some confuse this with lossy compression?

Training vs. Compression

The confusion arises from GPT’s training, where it abstracts vast data into a simplified world model—a lossy process, like summarizing a library. But compression using this model encodes specific data losslessly. How does this distinction clarify the debate?

Practical Implications

This approach excels for language-like data (e.g., texts, logs) but cannot help with random noise, where Kolmogorov complexity equals the data’s own length and nothing can be squeezed out. Scenarios such as space missions and long-term data archives, where bandwidth is costly and fidelity is mandatory, could leverage this.
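
As a back-of-the-envelope check on that limit, this sketch uses illustrative per-token probabilities of my own, not measured ones:

```python
import math

def ideal_bits(probs):
    """Shannon code length: sum of -log2 P(token) over a sequence."""
    return sum(-math.log2(p) for p in probs)

# A strong LM assigns high probabilities to ordinary text, but can do no
# better than uniform 1/256 on random bytes (illustrative numbers).
english = [0.8, 0.6, 0.9, 0.7]  # predictable tokens: fractions of a bit each
noise   = [1 / 256] * 4         # random bytes: a full 8 bits each
print(ideal_bits(english))      # ≈ 1.73 bits for four tokens
print(ideal_bits(noise))        # exactly 32 bits, i.e. no savings at all
```

When every symbol is equally surprising, −log₂ p is maximal for each one, and the “compressed” output is as long as the input.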

Original post: https://liweinlp.com/13272


Posted by

立委

Dr. Wei Li (立委), senior consultant on multimodal large-model applications. Former VP of Engineering for the large-model team at 出门问问 (Mobvoi), focusing on LLMs and their AIGC applications. Before that, Chief Scientist at Netbase for 10 years, where he directed the development of understanding and application systems for 18 languages, robust and running at line speed, scaled up to social-media big data, with semantics grounded in sentiment-mining products, making the company a front-runner in industrial NLP deployment in the US. Earlier, VP of R&D at Cymfony for eight years, where he won first place in the first question-answering evaluation (TREC-8 QA Track) and served as PI for 17 SBIR information-extraction projects.
