New KV cache compaction technique cuts LLM memory 50x without accuracy loss
Enterprise AI applications that handle large documents or long-horizon tasks face a severe memory bottleneck. As the context grows longer, so does the KV cache, the area where the model's working memory is stored. Because the cache grows linearly with every token in the context, serving long contexts quickly becomes a question of GPU memory rather than compute, as the back-of-the-envelope sketch below illustrates.
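To make the bottleneck concrete, here is a minimal, illustrative calculation of KV cache size as a function of context length. The model dimensions (32 layers, 32 KV heads, head dimension 128, fp16 storage, roughly a 7B-class dense transformer) are assumptions for the sake of the example, not figures from the technique described here.

```python
def kv_cache_bytes(seq_len: int,
                   num_layers: int = 32,      # assumed 7B-class model dims
                   num_kv_heads: int = 32,    # no grouped-query sharing assumed
                   head_dim: int = 128,
                   bytes_per_elem: int = 2):  # fp16 / bf16
    # One K and one V vector per token, per head, per layer.
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

for n in (4_096, 32_768, 131_072):
    print(f"{n:>7} tokens -> {kv_cache_bytes(n) / 2**30:5.1f} GiB per sequence")
# Output:
#    4096 tokens ->   2.0 GiB per sequence
#   32768 tokens ->  16.0 GiB per sequence
#  131072 tokens ->  64.0 GiB per sequence
```

Under these assumptions a single 128K-token sequence already consumes more memory than the model weights themselves, and the cost multiplies with every concurrent user, which is why a 50x reduction in cache size translates directly into longer contexts or higher batch sizes on the same hardware.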