Adaptive XAI in High Stakes Environments: Modeling Swift Trust with Multimodal Feedback in Human AI Teams
- URL: http://arxiv.org/abs/2507.21158v1
- Date: Fri, 25 Jul 2025 01:39:55 GMT
- Title: Adaptive XAI in High Stakes Environments: Modeling Swift Trust with Multimodal Feedback in Human AI Teams
- Authors: Nishani Fernando, Bahareh Nakisa, Adnan Ahmad, Mohammad Naim Rastgoo
- Abstract summary: We propose a conceptual framework for adaptive XAI that operates non-intrusively by responding to users' real-time cognitive and emotional states. At its core is a multi-objective, personalized trust estimation model that maps workload, stress, and emotion to dynamic trust estimates.
- Score: 2.9629704451989802
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Effective human-AI teaming heavily depends on swift trust, particularly in high-stakes scenarios such as emergency response, where timely and accurate decision-making is critical. In these time-sensitive and cognitively demanding settings, adaptive explainability is essential for fostering trust between human operators and AI systems. However, existing explainable AI (XAI) approaches typically offer uniform explanations and rely heavily on explicit feedback mechanisms, which are often impractical in such high-pressure scenarios. To address this gap, we propose a conceptual framework for adaptive XAI that operates non-intrusively by responding to users' real-time cognitive and emotional states through implicit feedback, thereby enhancing swift trust in high-stakes environments. The proposed adaptive explainability trust framework (AXTF) leverages physiological and behavioral signals, such as EEG, ECG, and eye tracking, to infer user states and support explanation adaptation. At its core is a multi-objective, personalized trust estimation model that maps workload, stress, and emotion to dynamic trust estimates. These estimates guide the modulation of explanation features enabling responsive and personalized support that promotes swift trust in human-AI collaboration. This conceptual framework establishes a foundation for developing adaptive, non-intrusive XAI systems tailored to the rigorous demands of high-pressure, time-sensitive environments.
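The abstract describes a pipeline in which inferred workload, stress, and emotion are mapped to a dynamic trust estimate, which in turn modulates explanation features. The paper gives no concrete model, so the following is only a minimal illustrative sketch: the weighted linear mapping, the normalization of signals to [0, 1], and the three explanation levels are all assumptions, not the authors' method.

```python
from dataclasses import dataclass

@dataclass
class UserState:
    """User state inferred implicitly (e.g., from EEG, ECG, eye tracking).

    All values are assumed normalized to [0, 1].
    """
    workload: float
    stress: float
    emotion: float  # valence: 0 = negative, 1 = positive

def estimate_trust(state: UserState, weights=(0.4, 0.4, 0.2)) -> float:
    """Map workload, stress, and emotion to a trust estimate in [0, 1].

    High workload and stress lower the estimate; positive emotion raises it.
    The weights are illustrative placeholders, not values from the paper.
    """
    w_load, w_stress, w_emo = weights
    score = (w_load * (1.0 - state.workload)
             + w_stress * (1.0 - state.stress)
             + w_emo * state.emotion)
    return max(0.0, min(1.0, score))

def adapt_explanation(trust: float) -> str:
    """Modulate explanation detail from the trust estimate (assumed policy)."""
    if trust < 0.3:
        return "detailed"   # low trust: richer justification
    if trust < 0.7:
        return "moderate"
    return "minimal"        # high trust: terse, low-intrusion explanation

calm = UserState(workload=0.2, stress=0.1, emotion=0.8)
overloaded = UserState(workload=0.9, stress=0.8, emotion=0.2)
print(estimate_trust(calm), adapt_explanation(estimate_trust(calm)))
print(estimate_trust(overloaded), adapt_explanation(estimate_trust(overloaded)))
```

In the framework as described, the inputs to `UserState` would come from a physiological inference stage rather than being set by hand, and the estimator would be personalized per user rather than using fixed weights.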
Related papers
- VEXA: Evidence-Grounded and Persona-Adaptive Explanations for Scam Risk Sensemaking [9.22587207148122]
Online scams across email, short message services, and social media increasingly challenge everyday risk assessment. We propose VEXA, an evidence-grounded and persona-adaptive framework for generating learner-facing scam explanations.
arXiv Detail & Related papers (2026-02-04T21:16:24Z) - From Passive Metric to Active Signal: The Evolving Role of Uncertainty Quantification in Large Language Models [77.04403907729738]
This survey charts the evolution of uncertainty from a passive diagnostic metric to an active control signal guiding real-time model behavior. We demonstrate how uncertainty is leveraged as an active control signal across three frontiers. This survey argues that mastering the new trend of uncertainty is essential for building the next generation of scalable, reliable, and trustworthy AI.
arXiv Detail & Related papers (2026-01-22T06:21:31Z) - TCEval: Using Thermal Comfort to Assess Cognitive and Perceptual Abilities of AI [0.5366500153474746]
Thermal comfort serves as an ideal paradigm for evaluating real-world cognitive capabilities of AI systems. We propose TCEval, the first evaluation framework that assesses three core cognitive capacities of AI.
arXiv Detail & Related papers (2025-12-29T05:41:25Z) - Planning Ahead with RSA: Efficient Signalling in Dynamic Environments by Projecting User Awareness across Future Timesteps [19.242065209157854]
We introduce a theoretical framework for adaptive signalling using the Rational Speech Act (RSA) modelling framework. We show that this effectiveness depends crucially on combining multi-step planning with a realistic model of user awareness.
arXiv Detail & Related papers (2025-10-27T13:54:54Z) - DeceptionBench: A Comprehensive Benchmark for AI Deception Behaviors in Real-world Scenarios [57.327907850766785]
Characterization of deception across realistic real-world scenarios remains underexplored. We establish DeceptionBench, the first benchmark that systematically evaluates how deceptive tendencies manifest across different domains. On the intrinsic dimension, we explore whether models exhibit self-interested egoistic tendencies or sycophantic behaviors that prioritize user appeasement. We incorporate sustained multi-turn interaction loops to construct a more realistic simulation of real-world feedback dynamics.
arXiv Detail & Related papers (2025-10-17T10:14:26Z) - Moral Anchor System: A Predictive Framework for AI Value Alignment and Drift Prevention [0.0]
A key risk is value drift, where AI systems deviate from aligned values due to evolving contexts, learning dynamics, or unintended optimizations. We propose the Moral Anchor System (MAS), a novel framework to detect, predict, and mitigate value drift in AI agents.
arXiv Detail & Related papers (2025-10-05T07:24:23Z) - Adaptive and Resource-efficient Agentic AI Systems for Mobile and Embedded Devices: A Survey [11.537225726120495]
Foundation models have reshaped AI by unifying fragmented architectures into scalable backbones with multimodal reasoning and contextual adaptation. With FMs as their cognitive core, agents transcend rule-based behaviors to achieve autonomy, generalization, and self-reflection. This survey provides the first systematic characterization of adaptive, resource-efficient agentic AI systems.
arXiv Detail & Related papers (2025-09-30T02:37:52Z) - STARec: An Efficient Agent Framework for Recommender Systems via Autonomous Deliberate Reasoning [54.28691219536054]
We introduce STARec, a slow-thinking augmented agent framework that endows recommender systems with autonomous deliberative reasoning capabilities. We develop anchored reinforcement training, a two-stage paradigm combining structured knowledge distillation from advanced reasoning models with preference-aligned reward shaping. Experiments on MovieLens 1M and Amazon CDs benchmarks demonstrate that STARec achieves substantial performance gains compared with state-of-the-art baselines.
arXiv Detail & Related papers (2025-08-26T08:47:58Z) - MetAdv: A Unified and Interactive Adversarial Testing Platform for Autonomous Driving [85.04826012938642]
MetAdv is a novel adversarial testing platform that enables realistic, dynamic, and interactive evaluation. It supports flexible 3D vehicle modeling and seamless transitions between simulated and physical environments. It enables real-time capture of physiological signals and behavioral feedback from drivers.
arXiv Detail & Related papers (2025-08-04T03:07:54Z) - GGBond: Growing Graph-Based AI-Agent Society for Socially-Aware Recommender Simulation [2.7083394633019973]
We propose a high-fidelity social simulation platform to realistically simulate user behavior evolution under recommendation interventions. The system comprises a population of Sim-User Agents equipped with a five-layer cognitive architecture that encapsulates key psychological mechanisms. In particular, we introduce the Intimacy--Curiosity--Reciprocity--Risk (ICR2) motivational engine grounded in psychological and sociological theories.
arXiv Detail & Related papers (2025-05-27T13:09:21Z) - Confidence-Regulated Generative Diffusion Models for Reliable AI Agent Migration in Vehicular Metaverses [55.70043755630583]
Vehicular AI agents are endowed with environment perception, decision-making, and action execution capabilities. We propose a reliable vehicular AI agent migration framework, achieving reliable dynamic migration and efficient resource scheduling. We develop a Confidence-regulated Generative Diffusion Model (CGDM) to efficiently generate AI agent migration decisions.
arXiv Detail & Related papers (2025-05-19T05:04:48Z) - Navigating the State of Cognitive Flow: Context-Aware AI Interventions for Effective Reasoning Support [6.758533259752144]
Flow theory describes an optimal cognitive state where individuals experience deep focus and intrinsic motivation. In AI-augmented reasoning, interventions that disrupt the state of cognitive flow can hinder rather than enhance decision-making. This paper proposes a context-aware cognitive augmentation framework that adapts interventions based on type, timing, and scale.
arXiv Detail & Related papers (2025-04-22T16:35:39Z) - A biologically Inspired Trust Model for Open Multi-Agent Systems that is Resilient to Rapid Performance Fluctuations [0.0]
Existing trust models face challenges related to agent mobility, changing behaviors, and the cold start problem. We introduce a biologically inspired trust model in which trustees assess their own capabilities and store trust data locally. This design improves mobility support, reduces communication overhead, resists disinformation, and preserves privacy.
arXiv Detail & Related papers (2025-04-17T08:21:54Z) - A Multi-Layered Research Framework for Human-Centered AI: Defining the Path to Explainability and Trust [2.4578723416255754]
Human-Centered AI (HCAI) emphasizes alignment with human values, while Explainable AI (XAI) enhances transparency by making AI decisions more understandable. This paper presents a novel three-layered framework that bridges HCAI and XAI to establish a structured explainability paradigm. Our findings advance Human-Centered Explainable AI (HCXAI), fostering AI systems that are transparent, adaptable, and ethically aligned.
arXiv Detail & Related papers (2025-04-14T01:29:30Z) - Teleology-Driven Affective Computing: A Causal Framework for Sustained Well-Being [0.1636303041090359]
We propose a teleology-driven affective computing framework that unifies major emotion theories. We advocate for creating a "dataverse" of personal affective events. We introduce a meta-reinforcement learning paradigm to train agents in simulated environments.
arXiv Detail & Related papers (2025-02-24T14:07:53Z) - Visual Agents as Fast and Slow Thinkers [88.1404921693082]
We introduce FaST, which incorporates the Fast and Slow Thinking mechanism into visual agents. FaST employs a switch adapter to dynamically select between System 1/2 modes. It tackles uncertain and unseen objects by adjusting model confidence and integrating new contextual data.
arXiv Detail & Related papers (2024-08-16T17:44:02Z) - Multi-Agent Dynamic Relational Reasoning for Social Robot Navigation [50.01551945190676]
Social robot navigation can be helpful in various contexts of daily life but requires safe human-robot interactions and efficient trajectory planning.
We propose a systematic relational reasoning approach with explicit inference of the underlying dynamically evolving relational structures.
We demonstrate its effectiveness for multi-agent trajectory prediction and social robot navigation.
arXiv Detail & Related papers (2024-01-22T18:58:22Z) - Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z) - Joint Sensing, Communication, and AI: A Trifecta for Resilient THz User Experiences [118.91584633024907]
A novel joint sensing, communication, and artificial intelligence (AI) framework is proposed so as to optimize extended reality (XR) experiences over terahertz (THz) wireless systems.
arXiv Detail & Related papers (2023-04-29T00:39:50Z) - Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.