AI as Teammate or Tool? A Review of Human-AI Interaction in Decision Support
- URL: http://arxiv.org/abs/2602.15865v1
- Date: Mon, 26 Jan 2026 19:18:50 GMT
- Title: AI as Teammate or Tool? A Review of Human-AI Interaction in Decision Support
- Authors: Most. Sharmin Sultana Samu, Nafisa Khan, Kazi Toufique Elahi, Tasnuva Binte Rahman, Md. Rakibul Islam, Farig Sadeque
- Abstract summary: Current AI systems remain largely passive due to an overreliance on explainability-centric designs. Transitioning AI to an active teammate requires adaptive, context-aware interactions.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The integration of Artificial Intelligence (AI) necessitates determining whether systems function as tools or collaborative teammates. In this study, by synthesizing Human-AI Interaction (HAI) literature, we analyze this distinction across four dimensions: interaction design, trust calibration, collaborative frameworks, and healthcare applications. Our analysis reveals that static interfaces and miscalibrated trust limit AI efficacy. Performance hinges on aligning transparency with cognitive workflows, yet a fluency trap often inflates trust without improving decision-making. Consequently, an overemphasis on explainability leaves systems largely passive. Our findings show that current AI systems remain largely passive due to an overreliance on explainability-centric designs and that transitioning AI to an active teammate requires adaptive, context-aware interactions that support shared mental models and the dynamic negotiation of authority between humans and AI.
Related papers
- From Augmentation to Symbiosis: A Review of Human-AI Collaboration Frameworks, Performance, and Perils [0.8629912408966147]
Human-Centered AI's "supertool" concept and Symbiotic Intelligence's mutual-adaptation model are studied. We conclude with a unifying framework, combining extended-self and dual-process theories, arguing that durable gains arise when AI functions as an internalized cognitive component.
arXiv Detail & Related papers (2025-11-07T19:11:33Z) - Explanation-Driven Interventions for Artificial Intelligence Model Customization: Empowering End-Users to Tailor Black-Box AI in Rhinocytology [3.4705962607086973]
This article presents a novel End-User Development (EUD) approach for black-box AI models. The proposed method advances Human-Centered AI (HCAI), promoting a symbiotic relationship between humans and adaptive, user-tailored AI systems.
arXiv Detail & Related papers (2025-04-07T08:44:48Z) - Engaging with AI: How Interface Design Shapes Human-AI Collaboration in High-Stakes Decision-Making [8.948482790298645]
We examine how various decision-support mechanisms impact user engagement, trust, and human-AI collaborative task performance. Our findings reveal that mechanisms like AI confidence levels, text explanations, and performance visualizations enhanced human-AI collaborative task performance.
arXiv Detail & Related papers (2025-01-28T02:03:00Z) - The AI Interface: Designing for the Ideal Machine-Human Experience (Editorial) [1.8074330674710588]
This editorial introduces a Special Issue that explores the psychology of AI experience design. Papers in this collection highlight the complexities of trust, transparency, and emotional sensitivity in human-AI interaction. Drawing on findings from eight diverse studies, this editorial underscores the need for AI interfaces to balance efficiency with empathy.
arXiv Detail & Related papers (2024-11-29T15:17:32Z) - Aligning Generalisation Between Humans and Machines [74.120848518198]
AI technology can support humans in scientific discovery and forming decisions, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human-AI teams increasingly shows the need for AI alignment. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z) - How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review [6.013543974938446]
Leveraging Artificial Intelligence in decision support systems has disproportionately focused on technological advancements.
A human-centered perspective attempts to alleviate this concern by designing AI solutions for seamless integration with existing processes.
arXiv Detail & Related papers (2023-10-30T17:46:38Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human Centered Artificial Intelligence.
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z) - A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.