LOCA-bench: Benchmarking Language Agents Under Controllable and Extreme Context Growth
- URL: http://arxiv.org/abs/2602.07962v1
- Date: Sun, 08 Feb 2026 13:20:39 GMT
- Title: LOCA-bench: Benchmarking Language Agents Under Controllable and Extreme Context Growth
- Authors: Weihao Zeng, Yuzhen Huang, Junxian He
- Abstract summary: Large language models (LLMs) are increasingly capable of carrying out long-running, real-world tasks. As the amount of context grows, their reliability often deteriorates, a phenomenon known as "context rot".
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) are increasingly capable of carrying out long-running, real-world tasks. However, as the amount of context grows, their reliability often deteriorates, a phenomenon known as "context rot". Existing long-context benchmarks primarily focus on single-step settings that evaluate a model's ability to retrieve information from a long snippet. In realistic scenarios, however, LLMs often need to act as agents that explore environments, follow instructions and plans, extract useful information, and predict correct actions under a dynamically growing context. To assess language agents in such settings, we introduce LOCA-bench (a benchmark for LOng-Context Agents). Given a task prompt, LOCA-bench leverages automated and scalable control of environment states to regulate the agent's context length. This design enables LOCA-bench to extend the context length potentially to infinity in a controlled way while keeping the underlying task semantics fixed. LOCA-bench evaluates language agents as a combination of models and scaffolds, including various context management strategies. While agent performance generally degrades as the environment states grow more complex, advanced context management techniques can substantially improve the overall success rate. We open-source LOCA-bench to provide a platform for evaluating models and scaffolds in long-context, agentic scenarios: https://github.com/hkust-nlp/LOCA-bench
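The controllable-growth mechanism the abstract describes can be made concrete with a small sketch: the environment's state count is a free parameter, so the serialized observation (and hence the agent's context) grows arbitrarily while the underlying task stays fixed. The code below is a hypothetical reconstruction, not LOCA-bench's actual harness or API; `make_environment`, `run_episode`, and the record format are all illustrative.

```python
import random

def make_environment(num_states: int, seed: int = 0) -> tuple[list[str], str]:
    """Build num_states filler records plus one target; return (records, answer)."""
    rng = random.Random(seed)
    records = [f"record-{i}: value={rng.randint(0, 9999)}" for i in range(num_states)]
    answer = "7421"
    # The task semantics never change: exactly one target record exists.
    records.insert(rng.randrange(len(records) + 1), f"record-target: value={answer}")
    return records, answer

def run_episode(agent_fn, num_states: int) -> bool:
    """Context length scales with num_states; the task itself is constant."""
    records, answer = make_environment(num_states)
    prompt = "Task: report the value of record-target.\n" + "\n".join(records)
    return agent_fn(prompt).strip() == answer
```

Sweeping `num_states` from, say, 10^2 to 10^6 traces success rate against context length along a single controlled axis, which is the kind of curve a benchmark of this design is meant to produce.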
Related papers
- AgentLongBench: A Controllable Long Benchmark For Long-Contexts Agents via Environment Rollouts
We introduce AgentLongBench, which evaluates agents through simulated environment rollouts based on Lateral Thinking Puzzles. This framework generates rigorous interaction trajectories across knowledge-intensive and knowledge-free scenarios.
arXiv Detail & Related papers (2026-01-28T16:05:44Z)
- ABC-Bench: Benchmarking Agentic Backend Coding in Real-World Development
We introduce ABC-Bench, a benchmark to evaluate agentic backend coding within a realistic, executable workflow. We curated 224 practical tasks spanning 8 languages and 19 frameworks from open-source repositories. Our evaluation reveals that even state-of-the-art models struggle to deliver reliable performance on these holistic tasks.
arXiv Detail & Related papers (2026-01-16T08:23:52Z)
- InfiAgent: An Infinite-Horizon Framework for General-Purpose Autonomous Agents
InfiAgent keeps the agent's reasoning context strictly bounded regardless of task duration. InfiAgent with a 20B open-source model is competitive with larger proprietary systems.
arXiv Detail & Related papers (2026-01-06T17:35:57Z)
- CuES: A Curiosity-driven and Environment-grounded Synthesis Framework for Agentic RL
Large language model-based agents are increasingly deployed in complex, tool-augmented environments. Existing approaches typically assume predefined task collections, an assumption that fails in novel environments. We propose CuES, a Curiosity-driven and Environment-grounded Synthesis framework that autonomously generates diverse, executable, and meaningful tasks.
arXiv Detail & Related papers (2025-12-01T06:11:37Z)
- Scaling Long-Horizon LLM Agent via Context-Folding
We introduce Context-Folding, a framework that empowers agents to actively manage their working context. An agent can procedurally branch into a sub-trajectory to handle a subtask and then fold it upon completion, collapsing the intermediate steps while retaining a concise summary of the outcome (a minimal sketch follows below).
arXiv Detail & Related papers (2025-10-13T22:00:58Z)
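A minimal sketch of the fold operation described above: a subtask runs in its own branch, and only a short summary of the branch survives in the main context. This is a hypothetical reconstruction of the idea, not the paper's implementation; `step_fn` and `summarize` stand in for the underlying model calls.

```python
from typing import Callable

def fold_subtask(
    main_context: list[str],
    subtask: str,
    step_fn: Callable[[list[str]], str],   # produces the next step given a context
    summarize: Callable[[list[str]], str], # compresses a finished branch
    max_steps: int = 8,
) -> list[str]:
    # Branch: the subtask works on a copy, so intermediate steps never
    # accumulate in the main context.
    branch = main_context + [f"SUBTASK: {subtask}"]
    for _ in range(max_steps):
        step = step_fn(branch)
        branch.append(step)
        if step.startswith("DONE"):
            break
    # Fold: collapse the branch into one summary line and keep only that.
    summary = summarize(branch[len(main_context):])
    return main_context + [f"FOLDED({subtask}): {summary}"]
```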
- Scaling LLM Multi-turn RL with End-to-end Summarization-based Context Management
We introduce summarization-based context management into training. We instantiate this framework with SUmmarization-augmented Policy Optimization (SUPO). Our results establish summarization-based context management as a principled and scalable approach for training RL agents beyond a fixed context length limit (a rollout-side sketch follows below).
arXiv Detail & Related papers (2025-10-08T07:29:22Z)
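The context-management loop that summarization-based training builds on can be sketched as follows. This shows only the rollout side, using a crude word-count proxy for tokens; SUPO itself additionally optimizes the policy end-to-end through these summarization steps, which the sketch does not attempt. `llm`, `env`, and `summarize` are stand-ins, not the paper's API.

```python
def rollout(llm, env, summarize, budget_tokens: int = 4000, max_turns: int = 100):
    """Run a multi-turn episode past a fixed context limit by summarizing."""
    transcript = [env.reset()]
    for _ in range(max_turns):
        if sum(len(t.split()) for t in transcript) > budget_tokens:
            # Keep the most recent turns verbatim; compress everything older.
            head, tail = transcript[:-4], transcript[-4:]
            transcript = [f"SUMMARY: {summarize(head)}"] + tail
        action = llm(transcript)
        obs, done = env.step(action)
        transcript += [action, obs]
        if done:
            break
    return transcript
```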
- Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
We introduce ACE (Agentic Context Engineering), a framework that treats contexts as evolving playbooks. Across agent and domain-specific benchmarks, ACE consistently outperforms strong baselines. ACE can adapt effectively without labeled supervision, instead leveraging natural execution feedback.
arXiv Detail & Related papers (2025-10-06T09:30:18Z)
- ContextNav: Towards Agentic Multimodal In-Context Learning
ContextNav is an agentic framework that integrates the scalability of automated retrieval with the quality and adaptiveness of human-like curation. It builds a resource-aware multimodal embedding pipeline, maintains a retrievable vector database, and applies agentic retrieval and structural alignment to construct noise-resilient contexts (a retrieval sketch follows below). Experimental results demonstrate that ContextNav achieves state-of-the-art performance across various datasets.
arXiv Detail & Related papers (2025-10-06T07:49:52Z)
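Of the pipeline described above, only the retrieval core (embed candidates, keep them in a vector store, pull nearest neighbors to build the in-context set) is easy to sketch; the agentic curation and structural-alignment stages are not shown. A hypothetical, unimodal sketch, with `embed` standing in for any embedding model:

```python
import numpy as np

class VectorStore:
    def __init__(self, embed):
        self.embed, self.items, self.vecs = embed, [], []

    def add(self, item: str) -> None:
        v = np.asarray(self.embed(item), dtype=float)
        self.vecs.append(v / np.linalg.norm(v))  # unit-normalize once, at insert
        self.items.append(item)

    def retrieve(self, query: str, k: int = 4) -> list[str]:
        q = np.asarray(self.embed(query), dtype=float)
        q /= np.linalg.norm(q)
        sims = np.array([v @ q for v in self.vecs])  # cosine similarity
        return [self.items[i] for i in np.argsort(-sims)[:k]]
```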
- A Controllable Examination for Long-Context Language Models
This study introduces LongBioBench, a benchmark for evaluating long-context language models. We show that most models still exhibit deficiencies in semantic understanding and elementary reasoning over retrieved results. Our further analysis examines design choices employed by existing synthetic benchmarks, such as contextual non-coherence.
arXiv Detail & Related papers (2025-06-03T14:23:06Z)
- NeedleBench: Evaluating LLM Retrieval and Reasoning Across Varying Information Densities
NeedleBench is a framework for assessing retrieval and reasoning performance in long-context tasks. It embeds key data points at varying depths to rigorously test model capabilities (a placement sketch follows below). Our experiments reveal that reasoning models like DeepSeek-R1 and OpenAI's o3 struggle with continuous retrieval and reasoning in information-dense scenarios.
arXiv Detail & Related papers (2024-07-16T17:59:06Z)
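The depth-controlled needle placement described above reduces to a small indexing routine. This is a hypothetical sketch, not NeedleBench's code; `insert_needles` and its signature are illustrative.

```python
def insert_needles(haystack: list[str], needles: list[str], depths: list[float]) -> list[str]:
    """Place needles[i] at relative depth depths[i] (0.0 = start, 1.0 = end)."""
    assert len(needles) == len(depths)
    out = list(haystack)
    # Insert deepest-first: each shallower insertion shifts deeper needles by
    # at most one slot, so every needle stays at ~its target fraction.
    for needle, depth in sorted(zip(needles, depths), key=lambda p: -p[1]):
        out.insert(round(depth * len(out)), needle)
    return out

# Example: three needles at 10%, 50%, and 90% depth of a long filler text.
# docs = insert_needles(filler_sentences, facts, [0.1, 0.5, 0.9])
```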
- Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More?
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences arising from its use.