New KV cache compaction technique cuts LLM memory use 50x without accuracy loss

Enterprise AI applications that handle large documents or long-horizon tasks face a severe memory bottleneck: an LLM's KV cache grows linearly with context length, so at long contexts serving cost is dominated by cache memory rather than model weights.
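
To see why, a back-of-envelope calculation helps. The sketch below sizes an uncompressed fp16 KV cache; all model dimensions (layer count, KV heads, head dimension) are illustrative assumptions in the ballpark of a 7B-parameter transformer, not figures from the article.

```python
# Back-of-envelope KV cache sizing. All model dimensions are
# illustrative assumptions (roughly 7B-class), not article figures.

def kv_cache_bytes(
    seq_len: int,
    num_layers: int = 32,     # assumed transformer depth
    num_kv_heads: int = 32,   # assumed KV heads (no grouped-query attention)
    head_dim: int = 128,      # assumed per-head dimension
    batch_size: int = 1,
    bytes_per_elem: int = 2,  # fp16/bf16
) -> int:
    # Factor of 2: one key tensor and one value tensor per layer.
    return (2 * num_layers * num_kv_heads * head_dim
            * seq_len * batch_size * bytes_per_elem)

for tokens in (4_096, 32_768, 128_000):
    gib = kv_cache_bytes(tokens) / 2**30
    print(f"{tokens:>7} tokens -> {gib:5.1f} GiB of KV cache")
```

Under these assumptions a single 128k-token request needs roughly 62 GiB of cache, several times the ~13 GiB the fp16 weights themselves occupy; a 50x compaction, as the headline claims, would bring that down to about 1.2 GiB.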