DeepSearchQA: Bridging the Comprehensiveness Gap for Deep Research Agents
- URL: http://arxiv.org/abs/2601.20975v1
- Date: Wed, 28 Jan 2026 19:20:47 GMT
- Title: DeepSearchQA: Bridging the Comprehensiveness Gap for Deep Research Agents
- Authors: Nikita Gupta, Riju Chatterjee, Lukas Haas, Connie Tao, Andrew Wang, Chang Liu, Hidekazu Oiwa, Elena Gribovskaya, Jan Ackermann, John Blitzer, Sasha Goldshtein, Dipanjan Das
- Abstract summary: DeepSearchQA is a 900-prompt benchmark for evaluating agents on difficult multi-step information-seeking tasks. The dataset is designed to evaluate an agent's ability to execute complex search plans to generate exhaustive answer lists.
- Score: 10.197402632091551
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce DeepSearchQA, a 900-prompt benchmark for evaluating agents on difficult multi-step information-seeking tasks across 17 different fields. Unlike traditional benchmarks that target single-answer retrieval or broad-spectrum factuality, DeepSearchQA features a dataset of challenging, handcrafted tasks designed to evaluate an agent's ability to execute complex search plans to generate exhaustive answer lists. This shift in design explicitly tests three critical yet under-evaluated capabilities: 1) systematic collation of fragmented information from disparate sources, 2) de-duplication and entity resolution to ensure precision, and 3) the ability to reason about stopping criteria within an open-ended search space. Each task is structured as a causal chain, where discovering the information for one step depends on the successful completion of the previous one, stressing long-horizon planning and context retention. All tasks are grounded in the open web with objectively verifiable answer sets. Our comprehensive evaluation of state-of-the-art agent architectures reveals significant performance limitations: even the most advanced models struggle to balance high recall with precision. We observe distinct failure modes ranging from premature stopping (under-retrieval) to hedging behaviors, where agents cast an overly wide net of low-confidence answers to artificially boost recall. These findings highlight critical headroom in current agent designs and position DeepSearchQA as an essential diagnostic tool for driving future research toward more robust deep-research capabilities.
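To make the grading target concrete, below is a minimal sketch of how an exhaustive answer list could be scored for precision and recall against a verifiable gold set, with naive de-duplication. The function names, normalization scheme, and exact-match rule are assumptions for illustration, not DeepSearchQA's actual grader.

```python
# Illustrative list-answer grading with de-duplication. This is a sketch:
# DeepSearchQA's real matching and entity-resolution rules are not
# specified here and are assumed for the example.

def normalize(answer: str) -> str:
    """Crude entity normalization: lowercase and collapse whitespace."""
    return " ".join(answer.lower().split())

def score_answer_list(predicted: list[str], gold: set[str]) -> dict:
    """Score a predicted answer list against a verifiable gold set.

    Predictions are de-duplicated first, so hedging (casting a wide net
    of low-confidence duplicates) cannot inflate recall, while premature
    stopping shows up directly as low recall.
    """
    gold_norm = {normalize(g) for g in gold}
    pred_norm = {normalize(p) for p in predicted}  # set() de-duplicates

    true_positives = pred_norm & gold_norm
    precision = len(true_positives) / len(pred_norm) if pred_norm else 0.0
    recall = len(true_positives) / len(gold_norm) if gold_norm else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: an over-retrieving agent boosts recall but pays in precision.
gold = {"Alpha Corp", "Beta Inc", "Gamma LLC"}
hedged = ["alpha corp", "Beta Inc", "Delta Co", "Epsilon AG", "beta inc"]
print(score_answer_list(hedged, gold))  # recall 0.67, precision 0.50
```

Under this kind of metric, the two failure modes the paper reports pull the score in opposite directions, which is what makes balancing them hard.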
Related papers
- BeyondSWE: Can Current Code Agent Survive Beyond Single-Repo Bug Fixing? [61.247730037229815]
We introduce BeyondSWE, a comprehensive benchmark that broadens existing evaluations along two axes: resolution scope and knowledge scope. To investigate the role of external knowledge, we develop SearchSWE, a framework that integrates deep search with coding abilities. This work offers both a realistic, challenging evaluation benchmark and a flexible framework to advance research toward more capable code agents.
arXiv Detail & Related papers (2026-03-03T17:52:01Z) - GISA: A Benchmark for General Information-Seeking Assistant [102.30831921333755]
GISA is a benchmark for General Information-Seeking Assistants comprising 373 human-crafted queries. It integrates both deep reasoning and broad information aggregation within unified tasks, and includes a live subset with periodically updated answers to resist memorization. Experiments on mainstream LLMs and commercial search products reveal that even the best-performing model achieves an exact-match score of only 19.30%.
arXiv Detail & Related papers (2026-02-09T11:44:15Z) - Yunque DeepResearch Technical Report [12.184074646161223]
Yunque DeepResearch is a hierarchical, modular, and robust framework for deep research. It achieves state-of-the-art performance across a range of agentic deep research benchmarks. We open-source the framework, reproducible implementations, and application cases to empower the community.
arXiv Detail & Related papers (2026-01-27T13:10:00Z) - SAGE: Steerable Agentic Data Generation for Deep Search with Execution Feedback [68.60326181052658]
We propose an agentic pipeline that automatically generates high-quality, difficulty-controlled deep search question-answer pairs. Our pipeline, SAGE, consists of a data generator, which proposes QA pairs, and a search agent, which attempts to solve the generated question. Our intrinsic evaluation shows SAGE generates questions that require diverse reasoning strategies, while significantly increasing the correctness and difficulty of the generated data.
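As a rough sketch of the generator-solver loop this summary describes (all names and the feedback rule below are assumptions; this is not SAGE's published implementation), execution feedback can steer difficulty like so:

```python
# Illustrative generator/solver loop with execution feedback for
# difficulty-controlled QA generation. Names and the acceptance rule
# are assumptions, not SAGE's released interface.
import random

def generate_qa(difficulty: int) -> tuple[str, str]:
    """Stand-in data generator: proposes a (question, answer) pair."""
    q = f"A question requiring about {difficulty} dependent search steps?"
    return q, "gold answer"

def search_agent_solves(question: str) -> bool:
    """Stand-in search agent: attempts the question; True on success."""
    return random.random() < 0.5  # placeholder for a real agent rollout

def difficulty_controlled_generation(target: int = 10,
                                     max_difficulty: int = 5) -> list[dict]:
    """Generate QA pairs, using solver feedback to control difficulty."""
    dataset, difficulty = [], 1
    while len(dataset) < target:
        question, answer = generate_qa(difficulty)
        if search_agent_solves(question):
            # Solver succeeded: question is too easy, so push the
            # generator toward harder multi-hop compositions.
            difficulty = min(difficulty + 1, max_difficulty)
        else:
            # Solver failed: keep it as a hard example. (A real pipeline
            # would also verify the answer's correctness at this point.)
            dataset.append({"question": question, "answer": answer,
                            "difficulty": difficulty})
    return dataset
```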
arXiv Detail & Related papers (2026-01-26T06:37:56Z) - DeepResearchEval: An Automated Framework for Deep Research Task Construction and Agentic Evaluation [56.886936435727854]
DeepResearchEval is an automated framework for deep research task construction and agentic evaluation. For task construction, we propose a persona-driven pipeline that generates realistic, complex research tasks anchored in diverse user profiles. For evaluation, we propose an agentic pipeline with two components: an Adaptive Point-wise Quality Evaluation, which dynamically derives task-specific evaluation dimensions, criteria, and weights conditioned on each generated task, and an Active Fact-Checking component, which autonomously extracts and verifies report statements via web search, even when citations are missing.
arXiv Detail & Related papers (2026-01-14T18:38:31Z) - SmartSearch: Process Reward-Guided Query Refinement for Search Agents [63.46067892354375]
Large language model (LLM)-based search agents have proven promising for addressing knowledge-intensive problems. Existing works largely focus on optimizing the reasoning paradigms of search agents, yet the quality of intermediate search queries during reasoning remains overlooked. We introduce SmartSearch, a framework built upon two key mechanisms to mitigate this issue.
arXiv Detail & Related papers (2026-01-08T12:39:05Z) - Deep Research: A Systematic Survey [118.82795024422722]
Deep Research (DR) aims to combine the reasoning capabilities of large language models with external tools, such as search engines. This survey presents a comprehensive and systematic overview of deep research systems.
arXiv Detail & Related papers (2025-11-24T15:28:28Z) - DeepWideSearch: Benchmarking Depth and Width in Agentic Information Seeking [42.413184411326164]
DeepWideSearch is the first benchmark designed to evaluate agents' ability to integrate depth and width in information seeking. In DeepWideSearch, agents must process a large volume of data points, each requiring deep reasoning over multi-hop retrieval paths. Experiments demonstrate that even state-of-the-art agents achieve only a 2.39% average success rate.
arXiv Detail & Related papers (2025-10-23T03:28:45Z) - Demystifying deep search: a holistic evaluation with hint-free multi-hop questions and factorised metrics [89.1999907891494]
We present WebDetective, a benchmark of hint-free multi-hop questions paired with a controlled Wikipedia sandbox. Our evaluation of 25 state-of-the-art models reveals systematic weaknesses across all architectures. We develop an agentic workflow, EvidenceLoop, that explicitly targets the challenges our benchmark identifies.
arXiv Detail & Related papers (2025-10-01T07:59:03Z) - Mind2Web 2: Evaluating Agentic Search with Agent-as-a-Judge [34.672897171399775]
Agentic search systems autonomously browse the web, synthesize information, and return comprehensive citation-backed answers. Mind2Web 2 is a benchmark of 130 realistic, high-quality, and long-horizon tasks constructed with over 1000 hours of human labor. Our method constructs task-specific judge agents based on a tree-structured design to automatically assess both answer correctness and source attribution.
arXiv Detail & Related papers (2025-06-26T17:32:50Z)
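As a loose sketch of how a tree-structured judge might decompose grading (purely illustrative; the node types, aggregation rules, entity, and URLs below are made up, not Mind2Web 2's released judge agents):

```python
# Illustrative tree-structured judge: a task-specific rubric is a tree
# whose leaves check individual claims and whose internal nodes aggregate
# child verdicts. Structure and aggregation are assumptions, not
# Mind2Web 2's released judge agents.
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class JudgeNode:
    """One node of a task-specific grading tree."""
    name: str
    check: Optional[Callable[[str], bool]] = None   # leaf: verifies a claim
    children: list[JudgeNode] = field(default_factory=list)
    require_all: bool = True                        # AND; False means OR

    def evaluate(self, answer: str) -> bool:
        if self.check is not None:                  # leaf node
            return self.check(answer)
        verdicts = [child.evaluate(answer) for child in self.children]
        return all(verdicts) if self.require_all else any(verdicts)

# Hypothetical rubric: the answer must name the right entity AND cite
# at least one of two acceptable sources.
rubric = JudgeNode("root", children=[
    JudgeNode("correct_entity", check=lambda a: "Acme Corp" in a),
    JudgeNode("source_attribution", require_all=False, children=[
        JudgeNode("source_a", check=lambda a: "example.com/a" in a),
        JudgeNode("source_b", check=lambda a: "example.com/b" in a),
    ]),
])

print(rubric.evaluate("Acme Corp, cited from https://example.com/a"))  # True
```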