AgentHallu: Benchmarking Automated Hallucination Attribution of LLM-based Agents
- URL: http://arxiv.org/abs/2601.06818v1
- Date: Sun, 11 Jan 2026 09:04:26 GMT
- Title: AgentHallu: Benchmarking Automated Hallucination Attribution of LLM-based Agents
- Authors: Xuannan Liu, Xiao Yang, Zekun Li, Peipei Li, Ran He
- Abstract summary: Unlike hallucination detection in single-turn responses, diagnosing hallucinations in multi-step agent workflows requires identifying which step causes the initial divergence.
We propose a new research task, automated hallucination attribution of LLM-based agents, aiming to identify the step responsible for the hallucination and explain why.
We introduce AgentHallu, a comprehensive benchmark with 693 high-quality trajectories spanning 7 agent frameworks and 5 domains.
The best-performing model achieves only 41.1% step localization accuracy, with tool-use hallucinations the most challenging at just 11.6%.
- Score: 30.66751974860931
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As LLM-based agents operate over sequential multi-step reasoning, hallucinations arising at intermediate steps risk propagating along the trajectory, thus degrading overall reliability. Unlike hallucination detection in single-turn responses, diagnosing hallucinations in multi-step workflows requires identifying which step causes the initial divergence. To fill this gap, we propose a new research task, automated hallucination attribution of LLM-based agents, aiming to identify the step responsible for the hallucination and explain why. To support this task, we introduce AgentHallu, a comprehensive benchmark with: (1) 693 high-quality trajectories spanning 7 agent frameworks and 5 domains, (2) a hallucination taxonomy organized into 5 categories (Planning, Retrieval, Reasoning, Human-Interaction, and Tool-Use) and 14 sub-categories, and (3) multi-level annotations curated by humans, covering binary labels, hallucination-responsible steps, and causal explanations. We evaluate 13 leading models, and results show the task is challenging even for top-tier models (such as GPT-5 and Gemini-2.5-Pro). The best-performing model achieves only 41.1% step localization accuracy, with tool-use hallucinations the most challenging at just 11.6%. We believe AgentHallu will catalyze future research into developing robust, transparent, and reliable agentic systems.
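To make the multi-level annotation schema and the headline metric concrete, here is a minimal sketch. The record layout, field names, and the exact-match reading of step localization accuracy are assumptions for illustration, not the benchmark's released format.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical record types for an annotated agent trajectory; the
# field names are illustrative, not AgentHallu's actual data format.
@dataclass
class Step:
    index: int    # position in the trajectory
    role: str     # e.g. "planning", "retrieval", "reasoning", "tool_use"
    content: str  # the agent's output at this step

@dataclass
class Trajectory:
    steps: List[Step]
    hallucinated: bool                 # binary label
    faulty_step: Optional[int] = None  # index of the responsible step, if any
    explanation: str = ""              # human-written causal explanation

def step_localization_accuracy(preds: List[Optional[int]],
                               golds: List[Trajectory]) -> float:
    """Fraction of hallucinated trajectories whose responsible step is
    predicted exactly (one plausible reading of the reported metric)."""
    hits = total = 0
    for pred, traj in zip(preds, golds):
        if not traj.hallucinated:
            continue
        total += 1
        hits += int(pred == traj.faulty_step)
    return hits / total if total else 0.0
```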
Related papers
- LLM-based Agents Suffer from Hallucinations: A Survey of Taxonomy, Methods, and Directions [80.12078194093013]
We present the first comprehensive survey of hallucinations in LLM-based agents.
We propose a new taxonomy that identifies different types of agent hallucinations occurring at different stages.
We conduct an in-depth examination of eighteen triggering causes underlying the emergence of agent hallucinations.
arXiv Detail & Related papers (2025-09-23T13:24:48Z)
- MIRAGE-Bench: LLM Agent is Hallucinating and Where to Find Them [52.764019220214344]
Hallucinations pose critical risks for large language model (LLM)-based agents.
We present MIRAGE-Bench, the first unified benchmark for eliciting and evaluating hallucinations in interactive environments.
arXiv Detail & Related papers (2025-07-28T17:38:29Z)
- Towards Mitigation of Hallucination for LLM-empowered Agents: Progressive Generalization Bound Exploration and Watchdog Monitor [18.9616029343245]
Hallucinations generated by large language models (LLMs) undermine the credibility of intelligent agents.
HalMit is a novel black-box watchdog framework that models the generalization bound of LLM-empowered agents.
arXiv Detail & Related papers (2025-07-21T09:08:58Z)
- HEAL: An Empirical Study on Hallucinations in Embodied Agents Driven by Large Language Models [27.72821031361892]
We present the first systematic study of hallucinations in large language models performing long-horizon tasks under scene-task inconsistencies.
Our goal is to understand to what extent hallucinations occur, what types of inconsistencies trigger them, and how current models respond.
arXiv Detail & Related papers (2025-06-18T02:13:41Z)
- HalluLens: LLM Hallucination Benchmark [49.170128733508335]
Large language models (LLMs) often generate responses that deviate from user input or training data, a phenomenon known as "hallucination".
This paper introduces a comprehensive hallucination benchmark, incorporating both new extrinsic and existing intrinsic evaluation tasks.
arXiv Detail & Related papers (2025-04-24T13:40:27Z)
- Why Do Multi-Agent LLM Systems Fail? [87.90075668488434]
We introduce MAST-Data, a comprehensive dataset of 1600+ annotated traces collected across 7 popular MAS frameworks.
We build the first Multi-Agent System Failure taxonomy (MAST).
We leverage MAST and MAST-Data to analyze failure patterns across models (GPT-4, Claude 3, Qwen2.5, CodeLlama) and tasks (coding, math, general agent).
arXiv Detail & Related papers (2025-03-17T19:04:38Z)
- SelfCheckAgent: Zero-Resource Hallucination Detection in Generative Large Language Models [0.16385815610837165]
SelfCheckAgent is a novel framework integrating three different agents.
These agents provide a robust multi-dimensional approach to hallucination detection.
The framework also incorporates a triangulation strategy that reinforces the strengths of the SelfCheckAgent.
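Purely as an illustration of combining several detector agents, a triangulation step might look like the sketch below. The callable interface and the score-averaging rule are assumptions, not the paper's actual strategy.

```python
from statistics import mean
from typing import Callable, List

# Sketch of triangulating verdicts from multiple detector agents; each
# agent is assumed to return a hallucination score in [0, 1]. The
# averaging rule is an illustrative guess, not SelfCheckAgent's method.
def triangulate(response: str,
                agents: List[Callable[[str], float]],
                threshold: float = 0.5) -> bool:
    scores = [agent(response) for agent in agents]
    return mean(scores) >= threshold  # flag if the consensus score is high
```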
arXiv Detail & Related papers (2025-02-03T20:42:32Z)
- FG-PRM: Fine-grained Hallucination Detection and Mitigation in Language Model Mathematical Reasoning [18.927164579769066]
Existing approaches primarily detect the presence of hallucinations but lack a nuanced understanding of their types and manifestations.
We introduce a comprehensive taxonomy that categorizes the common hallucinations in mathematical reasoning tasks into six types.
We then propose FG-PRM, an augmented model designed to detect and mitigate hallucinations in a fine-grained, step-level manner.
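A minimal sketch of what step-level, type-aware flagging can look like, assuming a per-type scorer; the scorer interface and flagging rule are placeholders, not FG-PRM's actual model or taxonomy labels.

```python
from typing import Callable, Dict, List, Tuple

# Sketch of fine-grained, step-level detection: score every reasoning
# step against each hallucination type and flag the first type whose
# score crosses a threshold. Scorers are hypothetical stand-ins.
def flag_steps(steps: List[str],
               scorers: Dict[str, Callable[[str], float]],  # type name -> scorer
               threshold: float = 0.5) -> List[Tuple[int, str]]:
    flagged = []
    for i, step in enumerate(steps):
        for htype, scorer in scorers.items():
            if scorer(step) >= threshold:
                flagged.append((i, htype))  # (step index, hallucination type)
                break  # one type per step suffices for attribution
    return flagged
```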
arXiv Detail & Related papers (2024-10-08T19:25:26Z)
- Textualized Agent-Style Reasoning for Complex Tasks by Multiple Round LLM Generation [49.27250832754313]
We present AgentCOT, an LLM-based autonomous agent framework.
At each step, AgentCOT selects an action and executes it to yield an intermediate result with supporting evidence.
We introduce two new strategies to enhance the performance of AgentCOT.
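As a rough illustration of the select-act-collect loop the summary describes, the sketch below steps an agent until it emits a final answer. `select_action` and `execute` are hypothetical stand-ins, not AgentCOT's actual interfaces.

```python
# Sketch of the per-step loop: select an action, execute it, and retain
# the intermediate result with its supporting evidence. The callables
# are assumptions made for illustration.
def run_agent(question: str, select_action, execute, max_steps: int = 10):
    history = []  # (action, result, evidence) triples accumulated so far
    for _ in range(max_steps):
        action = select_action(question, history)
        result, evidence = execute(action)
        history.append((action, result, evidence))
        if action == "finish":
            return result, history
    return None, history  # no final answer within the step budget
```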
arXiv Detail & Related papers (2024-09-19T02:20:06Z)
- Small Agent Can Also Rock! Empowering Small Language Models as Hallucination Detector [114.88975874411142]
Hallucination detection is a challenging task for large language models (LLMs).
We propose an autonomous LLM-based agent framework, called HaluAgent.
In HaluAgent, we integrate the LLM with a multi-functional toolbox and design a fine-grained three-stage detection framework.
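The summary does not name the three stages; purely as an illustration, a staged pipeline of this shape might segment the text, verify each segment with tools, and reconcile the verdicts. All stage names and interfaces below are guesses, not HaluAgent's published design.

```python
# Hypothetical three-stage detection pipeline; segment/verify/reflect
# are illustrative stand-ins inferred from the one-line summary.
def detect(text: str, segment, verify, reflect) -> bool:
    claims = segment(text)                  # stage 1: split text into checkable units
    verdicts = [verify(c) for c in claims]  # stage 2: check each unit with tools
    return reflect(claims, verdicts)        # stage 3: reconcile into a final label
```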
arXiv Detail & Related papers (2024-06-17T07:30:05Z)
- Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models [56.00992369295851]
Open-sourced Large Language Models (LLMs) have achieved great success in various NLP tasks; however, they are still far inferior to API-based models when acting as agents.
This paper delivers three key observations: (1) the current agent training corpus entangles format following with agent reasoning, which shifts significantly from the distribution of the models' pre-training data; (2) LLMs exhibit different learning speeds on the capabilities required by agent tasks; and (3) current approaches introduce hallucinations as a side effect of improving agent abilities.
We propose Agent-FLAN to effectively Fine-tune LANguage models for Agents.
arXiv Detail & Related papers (2024-03-19T16:26:10Z)