MIT Researchers Break the Long-Context Barrier with Recursive Self-Querying Framework

A recursive framework treats massive documents as variables, using targeted queries and sub-LLMs to preserve accuracy beyond 10 million tokens.

Tech Insights Reporter · Jan 22, 2026 · 5 min read · Cambridge, MA

Cambridge, Massachusetts, January 22, 2026 - A team at MIT CSAIL has demonstrated a breakthrough method for processing contexts exceeding 10 million tokens without sacrificing performance.

Rather than extending context windows or relying on lossy summarization, the recursive framework treats massive documents as persistent variables. The LLM generates precise code to query specific sections, then spawns lightweight sub-LLMs to verify extracted information. This approach maintains near-perfect retrieval accuracy even on book-length or multi-book inputs.
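The report cited in the coverage does not spell out implementation details, but the pattern it describes can be sketched in a short script. In the hypothetical Python sketch below, `Document`, `call_llm`, and `answer_over_long_document` are invented names, and the step in which the model "generates precise code" is simplified to the model requesting character ranges; it is meant only to illustrate the query-then-verify loop, not the MIT team's actual framework.

```python
# Hedged sketch of the recursive self-querying pattern described above.
# All names and prompts are hypothetical; the actual CSAIL framework and
# its API are not published in this article.

from dataclasses import dataclass


@dataclass
class Document:
    """The multi-million-token input, held as a persistent variable
    rather than packed into the model's context window."""
    text: str

    def slice(self, start: int, end: int) -> str:
        # Targeted access: only the requested span ever reaches a model.
        return self.text[start:end]


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (assumption: you supply one)."""
    raise NotImplementedError("wire up your model provider here")


def answer_over_long_document(question: str, doc: Document) -> str:
    # 1. The main LLM never sees the whole document. It is told only the
    #    document's size and asked for a query plan: which ranges to inspect.
    plan = call_llm(
        f"A document of {len(doc.text)} characters is stored in a variable. "
        f"To answer the question '{question}', list character ranges "
        f"(start,end) worth inspecting, one per line."
    )

    findings = []
    for line in plan.splitlines():
        try:
            start, end = (int(x) for x in line.strip("() ").split(","))
        except ValueError:
            continue  # skip lines that are not well-formed ranges
        excerpt = doc.slice(start, end)

        # 2. A lightweight sub-LLM verifies each extracted span, so a bad
        #    retrieval does not silently contaminate the final answer.
        verdict = call_llm(
            f"Excerpt:\n{excerpt}\n\nIs this excerpt relevant to answering "
            f"'{question}'? If yes, quote the relevant sentence; if no, say NO."
        )
        if verdict.strip().upper() != "NO":
            findings.append(verdict)

    # 3. Only the verified findings, a tiny fraction of the original input,
    #    are composed into the final answer.
    return call_llm(
        f"Question: {question}\nVerified excerpts:\n" + "\n".join(findings)
        + "\nAnswer the question using only these excerpts."
    )
```

In this toy version the recursion is one level deep; the approach described in the report would let each sub-query itself spawn further targeted queries over its own slice of the document, which is what keeps accuracy stable as inputs grow past 10 million tokens.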

Potential applications span legal discovery, scientific literature review, and genomic analysis — domains where complete context fidelity is critical.

Credit: MIT CSAIL research team. Primary source: Technical report and VentureBeat coverage.