Intelligent Reasoning Cues: A Framework and Case Study of the Roles of AI Information in Complex Decisions
- URL: http://arxiv.org/abs/2602.00259v1
- Date: Fri, 30 Jan 2026 19:22:23 GMT
- Title: Intelligent Reasoning Cues: A Framework and Case Study of the Roles of AI Information in Complex Decisions
- Authors: Venkatesh Sivaraman, Eric P. Mason, Mengfan Ellen Li, Jessica Tong, Andrew J. King, Jeremy M. Kahn, Adam Perer
- Abstract summary: We study the role of eight types of reasoning cues in a high-stakes clinical decision. We find that reasoning cues have distinct patterns of influence that can directly inform design. Our results suggest that reasoning cues should prioritize tasks with high variability and discretion, adapt to ensure compatibility with evolving decision needs, and provide complementary, rigorous insights on complex cases.
- Score: 10.853817540556348
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence (AI)-based decision support systems can be highly accurate yet still fail to support users or improve decisions. Existing theories of AI-assisted decision-making focus on calibrating reliance on AI advice, leaving it unclear how different system designs might influence the reasoning processes underneath. We address this gap by reconsidering AI interfaces as collections of intelligent reasoning cues: discrete pieces of AI information that can individually influence decision-making. We then explore the roles of eight types of reasoning cues in a high-stakes clinical decision (treating patients with sepsis in intensive care). Through contextual inquiries with six teams and a think-aloud study with 25 physicians, we find that reasoning cues have distinct patterns of influence that can directly inform design. Our results also suggest that reasoning cues should prioritize tasks with high variability and discretion, adapt to ensure compatibility with evolving decision needs, and provide complementary, rigorous insights on complex cases.
Related papers
- ClearFairy: Capturing Creative Workflows through Decision Structuring, In-Situ Questioning, and Rationale Inference [59.65947911667229]
We present the CLEAR framework, which structures reasoning into cognitive decision steps: linked units of actions, artifacts, and self-explanations. We introduce ClearFairy, a think-aloud AI assistant for UI design that detects weak explanations, asks lightweight clarifying questions, and infers missing rationales to ease the knowledge-sharing burden.
arXiv Detail & Related papers (2025-09-18T02:11:34Z) - When Models Know More Than They Can Explain: Quantifying Knowledge Transfer in Human-AI Collaboration [79.69935257008467]
We introduce Knowledge Integration and Transfer Evaluation (KITE), a conceptual and experimental framework for Human-AI knowledge transfer capabilities. We conduct the first large-scale human study (N=118) explicitly designed to measure it. In our two-phase setup, humans first ideate with an AI on problem-solving strategies, then independently implement solutions, isolating model explanations' influence on human understanding.
arXiv Detail & Related papers (2025-06-05T20:48:16Z) - A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support [2.020765276735129]
The study aims to identify the most effective and useful explanations that enhance the diagnostic process. Medical doctors filled out a survey to assess different types of explanations.
arXiv Detail & Related papers (2025-05-15T11:42:24Z) - Supporting Data-Frame Dynamics in AI-assisted Decision Making [6.4219774981192455]
High-stakes decision-making requires continuous interplay between evolving evidence and shifting hypotheses. We introduce a mixed-initiative framework for AI-assisted decision making that is grounded in the data-frame theory of sensemaking and the evaluative AI paradigm.
arXiv Detail & Related papers (2025-04-22T13:36:06Z) - The Value of Information in Human-AI Decision-making [20.669176502049066]
We contribute a decision-theoretic framework for characterizing the value of information. By defining complementary information, our approach identifies opportunities for agents to better exploit available information in AI-assisted decisions. We present a novel explanation technique that adapts SHAP explanations to highlight human-complementing information.
arXiv Detail & Related papers (2025-02-10T04:50:42Z) - Exploring the Requirements of Clinicians for Explainable AI Decision Support Systems in Intensive Care [1.950650243134358]
Thematic analysis revealed three core themes: (T1) ICU decision-making relies on a wide range of factors, (T2) the complexity of patient state is challenging for shared decision-making, and (T3) requirements and capabilities of AI decision support systems.
We include design recommendations from clinical input, providing insights to inform future AI systems for intensive care.
arXiv Detail & Related papers (2024-11-18T17:53:07Z) - How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Beyond Recommender: An Exploratory Study of the Effects of Different AI Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants with adaptive functional roles according to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z) - From DDMs to DNNs: Using process data and models of decision-making to improve human-AI interactions [1.024113475677323]
We argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time. First, we introduce a well-established computational framework that assumes decisions emerge from the noisy accumulation of evidence. Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making.
arXiv Detail & Related papers (2023-08-29T11:27:22Z) - AI Reliance and Decision Quality: Fundamentals, Interdependence, and the Effects of Interventions [6.356355538824237]
We argue that reliance and decision quality are often inappropriately conflated in the current literature on AI-assisted decision-making. Our research highlights the importance of distinguishing between reliance behavior and decision quality in AI-assisted decision-making.
arXiv Detail & Related papers (2023-04-18T08:08:05Z) - Artificial Artificial Intelligence: Measuring Influence of AI 'Assessments' on Moral Decision-Making [48.66982301902923]
We examined the effect of feedback from false AI on moral decision-making about donor kidney allocation.
We found some evidence that judgments about whether a patient should receive a kidney can be influenced by feedback on participants' own decision-making that they perceived to come from an AI.
arXiv Detail & Related papers (2020-01-13T14:15:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.