The claim that “compression is intelligence” sparks debate: does GPT compress data perfectly, or does it lose something along the way? Some argue it’s lossy, like a compressed JPEG, while others insist it’s lossless, restoring every bit. The answer hinges on a key distinction: GPT’s training versus its use as a compressor. Let’s unravel this mystery.
The Heart of Compression: Kolmogorov Complexity
Kolmogorov complexity defines the essence of a piece of data as the length of the shortest program that can generate it, an uncomputable ideal. GPT’s next-token prediction approximates this, acting like a “prophet” that forecasts sequences from its world model. This predictive power derives from compression. How does predicting the next word relate to shrinking data size?
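The link is information-theoretic: under an entropy coder, a token the model predicts with probability p can be written in about -log2(p) bits, so better predictions mean shorter codes. A minimal sketch, with made-up probabilities standing in for GPT’s outputs:

```python
import math

# Hypothetical probabilities a language model might assign to three
# successive tokens given their contexts (values made up for illustration).
token_probs = [0.80, 0.55, 0.02]

# Under an entropy coder (e.g., arithmetic coding), a token predicted with
# probability p costs about -log2(p) bits.
bits = [-math.log2(p) for p in token_probs]
print([round(b, 2) for b in bits])        # ~[0.32, 0.86, 5.64]
print(round(sum(bits), 2), "bits total")  # the better the model predicts, the shorter the code
```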
Lossless Compression in Action
Using GPT to compress a target string of data is lossless, meaning the original can be restored bit for bit. Experiments such as ts_zip (Fabrice Bellard) and the 2022-2023 work of Li Ming & Nick show GPT combined with arithmetic coding outperforming gzip, sometimes by 10x, a margin that matters most in high-transmission-cost scenarios like interstellar communication. Here’s why it’s lossless:
- Mechanism: GPT supplies next-token probabilities (e.g., P(“will” | “Artificial intelligence”) = 0.8), which arithmetic coding uses to encode the input sequence into a single binary number. Decompression runs the same model to reverse the process, ensuring bit-level accuracy (see the sketch after this list).
- Evidence: Even low-probability tokens are still encoded, just with more bits, so no information is discarded.
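To make the mechanism concrete, here is a minimal sketch of interval-based arithmetic coding driven by a toy, hard-coded “model” standing in for GPT. It uses exact fractions instead of the bit-level renormalization a production coder such as ts_zip would use, and the vocabulary, probabilities, and helper names are illustrative, not taken from the original post.

```python
from fractions import Fraction

VOCAB = ["the", "cat", "sat", "<eos>"]

def model(prefix):
    """Toy stand-in for GPT: return P(next token | prefix). Values are made up."""
    if not prefix:
        return {"the": Fraction(7, 10), "cat": Fraction(1, 10),
                "sat": Fraction(1, 10), "<eos>": Fraction(1, 10)}
    return {"the": Fraction(1, 10), "cat": Fraction(4, 10),
            "sat": Fraction(4, 10), "<eos>": Fraction(1, 10)}

def intervals(dist):
    """Carve [0, 1) into one sub-interval per token, proportional to its probability."""
    out, lo = {}, Fraction(0)
    for tok in VOCAB:
        out[tok] = (lo, lo + dist[tok])
        lo += dist[tok]
    return out

def encode(tokens):
    lo, hi = Fraction(0), Fraction(1)
    for i, tok in enumerate(tokens):
        a, b = intervals(model(tokens[:i]))[tok]
        lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
    return (lo + hi) / 2  # any number inside the final interval identifies the sequence

def decode(code, length):
    tokens, lo, hi = [], Fraction(0), Fraction(1)
    for _ in range(length):
        for tok, (a, b) in intervals(model(tokens)).items():
            t_lo, t_hi = lo + (hi - lo) * a, lo + (hi - lo) * b
            if t_lo <= code < t_hi:
                tokens.append(tok)
                lo, hi = t_lo, t_hi
                break
    return tokens

msg = ["the", "cat", "sat", "<eos>"]
code = encode(msg)
assert decode(code, len(msg)) == msg  # bit-exact round trip: nothing is lost
```

High-probability tokens narrow the interval only slightly (few bits), low-probability tokens narrow it sharply (more bits), and because encoder and decoder consult the same model, the reconstruction is exact.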
Why might some confuse this with lossy compression?
Training vs. Compression
The confusion arises from GPT’s training, where it abstracts vast data into a simplified world model—a lossy process, like summarizing a library. But compression using this model encodes specific data losslessly. How does this distinction clarify the debate?
Practical Implications
This approach excels on language-like data (e.g., texts, logs) but fails on random noise, whose Kolmogorov complexity is essentially its own length. Scenarios such as space missions and long-term data archives, where transmission or storage cost dwarfs compute cost, could leverage it.
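The random-noise limit is easy to see even with a classical compressor; here is a quick sketch using zlib in place of a GPT-based coder (the exact byte counts will vary from run to run):

```python
import os, zlib

structured = b"Artificial intelligence will transform communication. " * 200
noise = os.urandom(len(structured))

for name, data in [("structured text", structured), ("random noise", noise)]:
    packed = zlib.compress(data, 9)
    print(f"{name}: {len(data)} -> {len(packed)} bytes")
# The repetitive text shrinks dramatically; the random bytes barely shrink at all,
# because their shortest description is essentially the data itself.
```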
Original post: https://liweinlp.com/13272