AceSearcher: Bootstrapping Reasoning and Search for LLMs via Reinforced Self-Play
- URL: http://arxiv.org/abs/2509.24193v1
- Date: Mon, 29 Sep 2025 02:14:30 GMT
- Title: AceSearcher: Bootstrapping Reasoning and Search for LLMs via Reinforced Self-Play
- Authors: Ran Xu, Yuchen Zhuang, Zihan Dong, Jonathan Wang, Yue Yu, Joyce C. Ho, Linjun Zhang, Haoyu Wang, Wenqi Shi, Carl Yang
- Abstract summary: AceSearcher trains a single large language model (LLM) to alternate between two roles: a decomposer that breaks down complex queries and a solver that integrates retrieved contexts for answer generation.
Experiments on three reasoning-intensive tasks across 10 datasets show that AceSearcher outperforms state-of-the-art baselines, achieving an average exact match improvement of 7.6%.
- Score: 45.02121903138421
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Search-augmented LLMs often struggle with complex reasoning tasks due to ineffective multi-hop retrieval and limited reasoning ability. We propose AceSearcher, a cooperative self-play framework that trains a single large language model (LLM) to alternate between two roles: a decomposer that breaks down complex queries and a solver that integrates retrieved contexts for answer generation. AceSearcher couples supervised fine-tuning on a diverse mixture of search, reasoning, and decomposition tasks with reinforcement fine-tuning optimized for final answer accuracy, eliminating the need for intermediate annotations. Extensive experiments on three reasoning-intensive tasks across 10 datasets show that AceSearcher outperforms state-of-the-art baselines, achieving an average exact match improvement of 7.6%. Remarkably, on document-level finance reasoning tasks, AceSearcher-32B matches the performance of the DeepSeek-V3 model using less than 5% of its parameters. Even at smaller scales (1.5B and 8B), AceSearcher often surpasses existing search-augmented LLMs with up to 9x more parameters, highlighting its exceptional efficiency and effectiveness in tackling complex reasoning tasks. Our code will be published at https://github.com/ritaranx/AceSearcher and https://huggingface.co/AceSearcher.
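As a rough illustration of the decomposer/solver alternation described in the abstract, here is a minimal Python sketch. All names here (`toy_model`, `toy_retrieve`, the prompt formats, and the example question) are hypothetical stand-ins, not the paper's actual implementation; the key idea emulated is that a single model plays both roles, distinguished only by role-specific prompts.

```python
# Sketch of a decomposer/solver loop: one "model" callable serves both roles.

def decompose(model, question):
    """Decomposer role: break a complex query into sub-questions."""
    return model(f"Decompose into sub-questions: {question}")

def solve(model, retrieve, question, sub_questions):
    """Solver role: answer each sub-question over retrieved context,
    then compose a final answer to the original question."""
    notes = []
    for sq in sub_questions:
        context = retrieve(sq)  # one retrieval hop per sub-question
        notes.append(model(f"Answer '{sq}' given: {context}"))
    return model(f"Combine {notes} to answer: {question}")

# Toy stand-ins (illustrative only): a rule-based "LLM" and keyword retriever.
def toy_model(prompt):
    if prompt.startswith("Decompose"):
        return ["Who founded Acme?", "When was its founder born?"]
    if "Who founded Acme?" in prompt:
        return "Jane Doe"
    if "When was its founder born?" in prompt:
        return "1970"
    return "Jane Doe, born 1970"

def toy_retrieve(query):
    corpus = {"founded": "Acme was founded by Jane Doe.",
              "born": "Jane Doe was born in 1970."}
    return next((v for k, v in corpus.items() if k in query.lower()), "")

question = "Who founded Acme and when were they born?"
subs = decompose(toy_model, question)
answer = solve(toy_model, toy_retrieve, question, subs)
# answer -> "Jane Doe, born 1970"
```

In the paper's setup, the final-answer reward from the solver's output is what trains both roles jointly, which is why no intermediate decomposition annotations are needed; this sketch only shows the inference-time alternation.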
Related papers
- Search More, Think Less: Rethinking Long-Horizon Agentic Search for Efficiency and Generalization [64.61432234404276]
Search More, Think Less (SMTL) is a framework for long-horizon agentic search that targets both efficiency and generalization.
We train an end-to-end agent using supervised fine-tuning and reinforcement learning, achieving strong and often state-of-the-art performance across benchmarks.
arXiv Detail & Related papers (2026-02-26T06:46:41Z)
- Beyond the limitation of a single query: Train your LLM for query expansion with Reinforcement Learning [23.104182075898297]
Reasoning-augmented search agents, such as Search-R1, are trained to reason, search, and generate the final answer iteratively.
We train an LLM-based search agent with the native capability of query expansion through reinforcement learning.
With the assistance of the squeezer model, we discover that even a small-scale 3B LLM can demonstrate a strong capability of query expansion.
arXiv Detail & Related papers (2025-10-11T04:23:30Z)
- Reasoning-enhanced Query Understanding through Decomposition and Interpretation [130.19204432111277]
ReDI is a Reasoning-enhanced approach for query understanding through Decomposition and Interpretation.
We compiled a large-scale dataset of real-world complex queries from a major search engine.
Experiments on BRIGHT and BEIR demonstrate that ReDI consistently surpasses strong baselines in both sparse and dense retrieval paradigms.
arXiv Detail & Related papers (2025-09-08T10:58:42Z)
- ParallelSearch: Train your LLMs to Decompose Query and Search Sub-queries in Parallel with Reinforcement Learning [20.11646932754985]
Reasoning-augmented search agents such as Search-R1 demonstrate remarkable capabilities in multi-step information retrieval from external knowledge sources.
Existing approaches suffer from a fundamental architectural limitation: they process search queries strictly sequentially, even when handling inherently parallelizable and logically independent comparisons.
We propose ParallelSearch, a novel reinforcement learning framework that empowers large language models to recognize parallelizable query structures and execute multiple search operations concurrently.
arXiv Detail & Related papers (2025-08-12T19:38:21Z) - BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent [74.10138164281618]
BrowseComp-Plus is a benchmark derived from BrowseComp, employing a fixed, carefully curated corpus.
This benchmark allows comprehensive evaluation and disentangled analysis of deep research agents and retrieval methods.
arXiv Detail & Related papers (2025-08-08T17:55:11Z) - R-Search: Empowering LLM Reasoning with Search via Multi-Reward Reinforcement Learning [0.8388591755871735]
R-Search is a reinforcement learning framework for reasoning-search integration.
It guides large language models to autonomously execute multi-step reasoning with deep search interaction.
R-Search learns optimal reasoning-search interaction trajectories via multi-reward signals.
arXiv Detail & Related papers (2025-03-12T16:26:39Z)
- Autonomous Tree-search Ability of Large Language Models [58.68735916408101]
Search-R1 is an extension of reinforcement learning for reasoning frameworks.
It generates search queries during step-by-step reasoning with real-time retrieval.
It improves performance by 41% (Qwen2.5-7B) and 20% (Qwen2.5-3B) over various RAG baselines.
Large Language Models have demonstrated remarkable reasoning capabilities with advanced prompting techniques.
Recent works propose to utilize external programs to define search logic, such that LLMs can perform passive tree search to solve more challenging reasoning tasks.
We propose a new concept, the autonomous tree-search ability of LLMs: the ability to automatically generate a response containing search trajectories toward the correct answer.
arXiv Detail & Related papers (2023-10-14T14:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.