Understanding Mode Switching in Human-AI Collaboration: Behavioral Insights and Predictive Modeling
- URL: http://arxiv.org/abs/2509.20666v1
- Date: Thu, 25 Sep 2025 01:58:46 GMT
- Title: Understanding Mode Switching in Human-AI Collaboration: Behavioral Insights and Predictive Modeling
- Authors: Avinash Ajit Nargund, Arthur Caetano, Kevin Yang, Rose Yiwei Liu, Philip Tezaur, Kriteen Shrestha, Qisen Pan, Tobias Höllerer, Misha Sra
- Abstract summary: We investigate how users dynamically switch between higher and lower levels of control during a sequential decision-making task. We collect over 400 mode-switching decisions from eight participants, along with gaze, emotional state, and subtask difficulty data. We train a lightweight model that predicts control level switches.
- Score: 16.20562194559668
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human-AI collaboration is typically offered at one of two user control levels: guidance, where the AI provides suggestions and the human makes the final decision, and delegation, where the AI acts autonomously within user-defined constraints. Systems that integrate both modes, common in robotic surgery or driving assistance, often overlook shifts in user preferences within a task in response to factors like evolving trust, decision complexity, and perceived control. In this work, we investigate how users dynamically switch between higher and lower levels of control during a sequential decision-making task. Using a hand-and-brain chess setup, participants either selected a piece and the AI decided how it moved (brain mode), or the AI selected a piece and the participant decided how it moved (hand mode). We collected over 400 mode-switching decisions from eight participants, along with gaze, emotional state, and subtask difficulty data. Statistical analysis revealed significant differences in gaze patterns and subtask complexity prior to a switch, and in the quality of the subsequent move. Based on these results, we engineered behavioral and task-specific features to train a lightweight model that predicted control level switches ($F1 = 0.65$). The model's performance suggests that real-time behavioral signals can serve as a complementary input alongside the system-driven mode-switching mechanisms in current use. We complement our quantitative results with qualitative factors that influence switching, including perceived AI ability, decision complexity, and level of control, identified through post-game interview analysis. The combined behavioral and modeling insights can help inform the design of shared autonomy systems that need dynamic, subtask-level control switches aligned with user intent and evolving task demands.
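As a concrete illustration of the modeling step described above, here is a minimal sketch of a lightweight switch-prediction classifier; the feature names, synthetic data, and choice of logistic regression are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of a lightweight switch-prediction model in the spirit of
# the abstract. Feature names, synthetic data, and the choice of logistic
# regression are illustrative assumptions, not the authors' pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400  # stand-in for the ~400 mode-switching decisions

# One row per move: engineered behavioral and task features preceding it.
X = np.column_stack([
    rng.normal(size=n),  # e.g., gaze dispersion over the board (assumed)
    rng.normal(size=n),  # e.g., fixation time on candidate pieces (assumed)
    rng.normal(size=n),  # e.g., estimated subtask difficulty (assumed)
    rng.normal(size=n),  # e.g., self-reported emotional state (assumed)
])
y = rng.integers(0, 2, size=n)  # 1 = participant switched control level

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# A simple linear classifier; the paper reports F1 = 0.65 for its model.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"F1 = {f1_score(y_te, clf.predict(X_te)):.2f}")
```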
Related papers
- AI as Teammate or Tool? A Review of Human-AI Interaction in Decision Support [0.514825619161626]
Current AI systems remain largely passive due to an overreliance on explainability-centric designs. Transitioning AI to an active teammate requires adaptive, context-aware interactions.
arXiv Detail & Related papers (2026-01-26T19:18:50Z)
- Human Cognitive Biases in Explanation-Based Interaction: The Case of Within and Between Session Order Effect [46.80756527630539]
Explanatory Interactive Learning (XIL) is a powerful interactive learning framework designed to enable users to customize and correct AI models by interacting with their explanations. Recent studies have raised concerns that explanatory interaction may trigger order effects, a well-known cognitive bias in which the sequence of presented items influences users' trust and, critically, the quality of their feedback. To clarify the interplay between order effects and explanatory interaction, we ran two larger-scale user studies designed to mimic common XIL tasks.
arXiv Detail & Related papers (2025-12-04T12:59:54Z)
- When Models Know More Than They Can Explain: Quantifying Knowledge Transfer in Human-AI Collaboration [79.69935257008467]
We introduce Knowledge Integration and Transfer Evaluation (KITE), a conceptual and experimental framework for evaluating Human-AI knowledge transfer capabilities. We conduct the first large-scale human study (N=118) explicitly designed to measure it. In our two-phase setup, humans first ideate with an AI on problem-solving strategies, then independently implement solutions, isolating model explanations' influence on human understanding.
arXiv Detail & Related papers (2025-06-05T20:48:16Z)
- Fine-Grained Appropriate Reliance: Human-AI Collaboration with a Multi-Step Transparent Decision Workflow for Complex Task Decomposition [14.413413322901409]
We propose to investigate the impact of a novel Multi-Step Transparent (MST) decision workflow on user reliance behaviors. Our findings demonstrate that human-AI collaboration with an MST decision workflow can outperform one-step collaboration in specific contexts. Our work highlights that there is no one-size-fits-all decision workflow for obtaining optimal human-AI collaboration.
arXiv Detail & Related papers (2025-01-19T01:03:09Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and its explanations through human interaction.
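As a rough illustration of this kind of robustness check (not the paper's adversarial-explanation method), the sketch below perturbs a toy policy's input state and tests whether its chosen action flips:

```python
# Illustrative robustness probe (not the paper's AE methodology): perturb a
# toy policy's state input against the chosen action's logit, FGSM-style,
# and check whether the action flips. Network and epsilons are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))

state = torch.randn(1, 8, requires_grad=True)
logits = policy(state)
action = logits.argmax(dim=1).item()

# Gradient of the chosen action's logit w.r.t. the input state.
logits[0, action].backward()

with torch.no_grad():
    for eps in (0.01, 0.05, 0.1):
        # Step against the chosen action to test how easily it flips.
        perturbed = state - eps * state.grad.sign()
        flipped = policy(perturbed).argmax(dim=1).item() != action
        print(f"eps={eps}: action flipped = {flipped}")
```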
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Human Delegation Behavior in Human-AI Collaboration: The Effect of Contextual Information [7.475784495279183]
One promising approach to leveraging existing complementary capabilities is allowing humans to delegate individual instances of decision tasks to AI. We conduct a behavioral study to explore the effects of providing contextual information to support this delegation decision. Our findings reveal that access to contextual information significantly improves human-AI team performance in delegation settings.
arXiv Detail & Related papers (2024-01-09T18:59:47Z)
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
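A minimal sketch of the auxiliary-supervision idea, with assumed architecture and task names rather than the paper's exact design:

```python
# Minimal sketch of auxiliary supervision (architecture and task names are
# assumptions, not the paper's exact design): a shared encoder feeds a main
# policy head plus auxiliary heads that predict other agents' behavior.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyWithAuxTasks(nn.Module):
    def __init__(self, obs_dim=32, n_actions=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.policy_head = nn.Linear(128, n_actions)  # main navigation output
        self.intent_head = nn.Linear(128, 3)          # aux: other agents' intent
        self.traj_head = nn.Linear(128, 10)           # aux: short-horizon motion

    def forward(self, obs):
        h = self.encoder(obs)
        return self.policy_head(h), self.intent_head(h), self.traj_head(h)

model = PolicyWithAuxTasks()
obs = torch.randn(16, 32)
act_logits, intent_logits, traj = model(obs)

# The auxiliary losses add supervision signals on top of the main objective.
loss = (F.cross_entropy(act_logits, torch.randint(0, 5, (16,)))
        + 0.5 * F.cross_entropy(intent_logits, torch.randint(0, 3, (16,)))
        + 0.5 * F.mse_loss(traj, torch.randn(16, 10)))
loss.backward()
```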
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make the final decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Human operator cognitive availability aware Mixed-Initiative control [1.155258942346793]
This paper presents a Cognitive Availability Aware Mixed-Initiative Controller for remotely operated mobile robots.
The controller enables dynamic switching between different levels of autonomy (LOA), initiated by either the AI or the human operator.
The controller is evaluated in a disaster response experiment, in which human operators have to conduct an exploration task with a remote robot.
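A hypothetical sketch of such dynamic LOA switching, with assumed thresholds and availability metric:

```python
# Hypothetical sketch of mixed-initiative LOA switching: an availability
# estimate drives the AI's suggested level of autonomy, and a human request
# always overrides it. Thresholds and the metric are assumptions.
from enum import Enum
from typing import Optional

class LOA(Enum):
    TELEOPERATION = 0  # human drives the robot directly
    SHARED = 1         # human steers with AI assistance
    AUTONOMY = 2       # robot navigates within operator constraints

def ai_suggested_loa(availability: float) -> LOA:
    """Map a cognitive-availability estimate in [0, 1] to an LOA."""
    if availability > 0.7:
        return LOA.TELEOPERATION  # operator has spare capacity
    if availability > 0.3:
        return LOA.SHARED
    return LOA.AUTONOMY           # operator is overloaded; robot takes over

def next_loa(availability: float, human_request: Optional[LOA]) -> LOA:
    # Mixed initiative: either side can trigger a switch; the human wins.
    return human_request if human_request is not None else ai_suggested_loa(availability)

print(next_loa(0.2, None))        # -> LOA.AUTONOMY
print(next_loa(0.2, LOA.SHARED))  # operator overrides -> LOA.SHARED
```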
arXiv Detail & Related papers (2021-08-26T16:21:56Z)
- Data-driven Koopman Operators for Model-based Shared Control of Human-Machine Systems [66.65503164312705]
We present a data-driven shared control algorithm that can be used to improve a human operator's control of complex machines.
Both the dynamics and information about the user's interaction are learned from observation through the use of a Koopman operator.
We find that model-based shared control significantly improves task and control metrics when compared to a natural learning, or user-only, control paradigm.
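A minimal sketch of estimating a Koopman operator from observed transitions via least squares, using toy dynamics in place of real interaction data:

```python
# Illustrative sketch of fitting a Koopman operator from snapshot pairs by
# least squares (EDMD with identity observables). The toy linear dynamics
# stand in for observed human-machine data; they are not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [-0.05, 0.95]])  # unknown dynamics to recover

# Snapshot pairs (x_t, x_{t+1}) gathered from observation.
X = rng.normal(size=(2, 200))
Y = A_true @ X + 0.01 * rng.normal(size=X.shape)

# Least-squares Koopman estimate: K = Y X^+ (Moore-Penrose pseudo-inverse).
K = Y @ np.linalg.pinv(X)
print(np.round(K, 3))  # should be close to A_true

# A shared controller can use K to predict the one-step effect of inputs.
x_next_pred = K @ X[:, :1]
```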
arXiv Detail & Related papers (2020-06-12T14:14:07Z)