Value of Information: A Framework for Human-Agent Communication
- URL: http://arxiv.org/abs/2601.06407v1
- Date: Sat, 10 Jan 2026 03:07:41 GMT
- Title: Value of Information: A Framework for Human-Agent Communication
- Authors: Yijiang River Dong, Tiancheng Hu, Zheng Hui, Caiqi Zhang, Ivan Vulić, Andreea Bobu, Nigel Collier
- Abstract summary: Large Language Model (LLM) agents face a fundamental dilemma: user requests are underspecified, yet agents must decide whether to act on incomplete information or interrupt users for clarification. We introduce a decision-theoretic framework that resolves this trade-off through the Value of Information (VoI). We show that VoI consistently matches or exceeds the best manually-tuned baselines, achieving up to 1.36 utility points higher in high-cost settings.
- Score: 34.068772934008244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Model (LLM) agents deployed for real-world tasks face a fundamental dilemma: user requests are underspecified, yet agents must decide whether to act on incomplete information or interrupt users for clarification. Existing approaches either rely on brittle confidence thresholds that require task-specific tuning, or fail to account for the varying stakes of different decisions. We introduce a decision-theoretic framework that resolves this trade-off through the Value of Information (VoI), enabling agents to dynamically weigh the expected utility gain from asking questions against the cognitive cost imposed on users. Our inference-time method requires no hyperparameter tuning and adapts seamlessly across contexts, from casual games to medical diagnosis. Experiments across four diverse domains (20 Questions, medical diagnosis, flight booking, and e-commerce) show that VoI consistently matches or exceeds the best manually-tuned baselines, achieving up to 1.36 utility points higher in high-cost settings. This work provides a parameter-free framework for adaptive agent communication that explicitly balances task risk, query ambiguity, and user effort.
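The decision rule described in the abstract can be sketched in a few lines: the agent asks a clarifying question only when the expected utility gain from learning the user's intent exceeds the cognitive cost of the interruption. The following is a minimal illustrative sketch, not the paper's implementation; the function names, intents, actions, and numbers are all hypothetical assumptions.

```python
# Illustrative sketch of a Value-of-Information decision rule (assumed, not
# the paper's API): ask only when the expected utility gain from the user's
# answer exceeds the cost of interrupting them.

def expected_utility(action_values, belief):
    """Expected utility of the best action under the current belief over intents."""
    return max(
        sum(p * action_values[action][intent] for intent, p in belief.items())
        for action in action_values
    )

def value_of_information(action_values, belief, question_cost):
    """VoI = E[utility after the answer] - E[utility of acting now] - cost."""
    # Utility of acting immediately on incomplete information.
    u_now = expected_utility(action_values, belief)
    # If the user answers, the intent is known and the best action is chosen.
    u_after = sum(
        p * max(action_values[a][intent] for a in action_values)
        for intent, p in belief.items()
    )
    return u_after - u_now - question_cost

# Hypothetical flight-booking example: two possible intents, two actions.
action_values = {
    "book_morning": {"wants_morning": 1.0, "wants_evening": 0.0},
    "book_evening": {"wants_morning": 0.0, "wants_evening": 1.0},
}
belief = {"wants_morning": 0.5, "wants_evening": 0.5}

voi = value_of_information(action_values, belief, question_cost=0.2)
print(f"VoI = {voi:.2f}; ask" if voi > 0 else f"VoI = {voi:.2f}; act")
# prints: VoI = 0.30; ask
```

Under this rule the same agent asks when the belief is uncertain and the question is cheap, but acts directly once `question_cost` outweighs the expected gain, which mirrors the cost-adaptive behavior the abstract claims.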
Related papers
- The PROPER Approach to Proactivity: Benchmarking and Advancing Knowledge Gap Navigation [17.97529450470058]
Most language-based assistants follow a reactive ask-and-respond paradigm, requiring users to explicitly state their needs. We introduce ProPer, a novel two-agent architecture consisting of a Dimension Generating Agent (DGA) and a Response Generating Agent (RGA). The RGA balances explicit and implicit dimensions to tailor personalized responses with timely and proactive interventions. Our results show that ProPer improves quality scores and win rates across all domains, achieving up to 84% gains in single-turn evaluation and consistent dominance in multi-turn interactions.
arXiv Detail & Related papers (2026-01-14T23:13:01Z) - MAC: A Multi-Agent Framework for Interactive User Clarification in Multi-turn Conversations [46.70182219204539]
We propose an interactive multi-agent framework specifically optimized to resolve user ambiguities by strategically managing clarification dialogues. Empirical evaluations on MultiWOZ 2.4 demonstrate that enabling clarification at both levels increases the task success rate by 7.8 points (54.5 to 62.3) and reduces the average number of dialogue turns (6.53 to 4.86) by eliciting all required user information up front and minimizing repetition.
arXiv Detail & Related papers (2025-12-15T10:02:50Z) - Learning Steerable Clarification Policies with Collaborative Self-play [67.67872810596839]
To handle ambiguous queries, AI assistants need a policy for managing their uncertainty. We propose to train steerable policies for managing this uncertainty using self-play. We show this leads to a steerable policy that changes its behavior predictably, conditioned on the provided costs.
arXiv Detail & Related papers (2025-12-03T18:49:54Z) - Uncertainty-Aware GUI Agent: Adaptive Perception through Component Recommendation and Human-in-the-Loop Refinement [11.63498742723335]
We present RecAgent, an uncertainty-aware agent that addresses these issues through adaptive perception. To reduce perceptual uncertainty, RecAgent employs a component recommendation mechanism that identifies and focuses on the most relevant UI elements. For decision uncertainty, it uses an interactive module to request user feedback in ambiguous situations, enabling intent-aware decisions.
arXiv Detail & Related papers (2025-08-06T02:38:02Z) - Program Synthesis Dialog Agents for Interactive Decision-Making [16.916736716463284]
We propose BeNYfits, a new benchmark for determining user eligibility for social benefits opportunities through interactive decision-making. Our experiments show that GPT-4o scores only 35.7 F1 using a ReAct-style chain-of-thought. Our agent, ProADA, improves the F1 score to 55.6 while maintaining nearly the same number of dialog turns.
arXiv Detail & Related papers (2025-02-26T22:53:01Z) - Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions. Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes. We study the ability of LLM agents to handle ambiguous instructions in interactive code generation settings by evaluating the performance of proprietary and open-weight models.
arXiv Detail & Related papers (2025-02-18T17:12:26Z) - Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z) - INSCIT: Information-Seeking Conversations with Mixed-Initiative Interactions [47.90088587508672]
InSCIt is a dataset for Information-Seeking Conversations with mixed-initiative Interactions.
It contains 4.7K user-agent turns from 805 human-human conversations.
We report results of two systems based on state-of-the-art models of conversational knowledge identification and open-domain question answering.
arXiv Detail & Related papers (2022-07-02T06:18:12Z) - Formalizing the Problem of Side Effect Regularization [81.97441214404247]
We propose a formal criterion for side effect regularization via the assistance game framework.
In these games, the agent solves a partially observable Markov decision process.
We show that this POMDP is solved by trading off the proxy reward with the agent's ability to achieve a range of future tasks.
arXiv Detail & Related papers (2022-06-23T16:36:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.