Confounding-Robust Policy Improvement with Human-AI Teams
- URL: http://arxiv.org/abs/2310.08824v1
- Date: Fri, 13 Oct 2023 02:39:52 GMT
- Title: Confounding-Robust Policy Improvement with Human-AI Teams
- Authors: Ruijiang Gao, Mingzhang Yin
- Abstract summary: We propose a novel solution to address unobserved confounding in human-AI collaboration by employing the marginal sensitivity model (MSM).
Our approach combines domain expertise with AI-driven statistical modeling to account for potential confounders that may otherwise remain hidden.
- Score: 9.823906892919746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human-AI collaboration has the potential to transform various domains by
leveraging the complementary strengths of human experts and Artificial
Intelligence (AI) systems. However, unobserved confounding can undermine the
effectiveness of this collaboration, leading to biased and unreliable outcomes.
In this paper, we propose a novel solution to address unobserved confounding in
human-AI collaboration by employing the marginal sensitivity model (MSM). Our
approach combines domain expertise with AI-driven statistical modeling to
account for potential confounders that may otherwise remain hidden. We present
a deferral collaboration framework for incorporating the MSM into policy
learning from observational data, enabling the system to control for the
influence of unobserved confounding factors. In addition, we propose a
personalized deferral collaboration system to leverage the diverse expertise of
different human decision-makers. By adjusting for potential biases, our
proposed solution enhances the robustness and reliability of collaborative
outcomes. The empirical and theoretical analyses demonstrate the efficacy of
our approach in mitigating unobserved confounding and improving the overall
performance of human-AI collaborations.
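For context, the marginal sensitivity model (Tan, 2006) assumes the odds of treatment under the true, confounder-aware propensity differ from the nominal odds by at most a factor Lambda, which turns each inverse-propensity weight into an interval. Below is a minimal sketch of the resulting worst-case (lower-bound) policy value in the style of Kallus and Zhou's confounding-robust policy improvement; the function names and the self-normalized objective are illustrative assumptions, not this paper's exact estimator.
```python
import numpy as np

def msm_weight_bounds(e_obs, lam):
    """Interval [a, b] for the true inverse-propensity weight 1 / P(T | X, U)
    under the MSM with sensitivity parameter lam >= 1: the true treatment odds
    may differ from the nominal odds e_obs / (1 - e_obs) by at most a factor lam."""
    a = 1.0 + (1.0 / lam) * (1.0 / e_obs - 1.0)
    b = 1.0 + lam * (1.0 / e_obs - 1.0)
    return a, b

def worst_case_policy_value(y, e_obs, matches, lam):
    """Lower bound on the self-normalized IPW value of a policy, minimized over
    all weights consistent with the MSM interval (hypothetical helper, following
    the generic Kallus-Zhou construction rather than this paper's estimator).

    y: observed outcomes (higher is better)
    e_obs: nominal propensity of the action actually taken
    matches: boolean mask, True where the policy agrees with the observed action
    """
    y, e_obs = np.asarray(y)[matches], np.asarray(e_obs)[matches]
    if y.size == 0:
        return float("nan")
    a, b = msm_weight_bounds(e_obs, lam)
    order = np.argsort(y)  # the adversary puts large weights on low outcomes
    y, a, b = y[order], a[order], b[order]
    # The minimizer of sum(w * y) / sum(w) over box constraints has a threshold
    # form in sorted-y order: weight b below the threshold, weight a above it,
    # so scanning all split points k finds the exact worst case.
    best = np.inf
    for k in range(y.size + 1):
        w = np.concatenate([b[:k], a[k:]])
        best = min(best, float(np.dot(w, y) / np.sum(w)))
    return best
```
At Lambda = 1 the interval collapses and the bound recovers the standard self-normalized IPW estimate; in a deferral setting, one natural rule is to route a case to the human whenever the algorithm's robust lower bound falls below the estimated value of the human's decision.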
Related papers
- Collaborative Gym: A Framework for Enabling and Evaluating Human-Agent Collaboration [51.452664740963066]
Collaborative Gym is a framework enabling asynchronous, tripartite interaction among agents, humans, and task environments.
We instantiate Co-Gym with three representative tasks in both simulated and real-world conditions.
Our findings reveal that collaborative agents consistently outperform their fully autonomous counterparts in task performance.
arXiv Detail & Related papers (2024-12-20T09:21:15Z)
- How Performance Pressure Influences AI-Assisted Decision Making [57.53469908423318]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior.
Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Mutual Theory of Mind in Human-AI Collaboration: An Empirical Study with LLM-driven AI Agents in a Real-time Shared Workspace Task [56.92961847155029]
Theory of Mind (ToM), a crucial capability for understanding others, significantly impacts human collaboration and communication.
Mutual Theory of Mind (MToM) arises when AI agents with ToM capability collaborate with humans.
We find that the agent's ToM capability does not significantly impact team performance but enhances human understanding of the agent.
arXiv Detail & Related papers (2024-09-13T13:19:48Z)
- Evaluating Human-AI Collaboration: A Review and Methodological Framework [4.41358655687435]
The use of artificial intelligence (AI) alongside individuals in working environments, known as Human-AI Collaboration (HAIC), has become essential.
However, evaluating HAIC's effectiveness remains challenging due to the complex interaction of the components involved.
This paper provides a detailed analysis of existing HAIC evaluation approaches and develops a fresh paradigm for more effectively evaluating these systems.
arXiv Detail & Related papers (2024-07-09T12:52:22Z)
- Cognitive Insights and Stable Coalition Matching for Fostering Multi-Agent Cooperation [6.536780912510439]
We propose a novel matching coalition mechanism that leverages the strengths of agents with different ToM levels.
Our work demonstrates the potential of leveraging ToM to create more sophisticated and human-like coordination strategies.
arXiv Detail & Related papers (2024-05-28T10:59:33Z)
- Negotiating the Shared Agency between Humans & AI in the Recommender System [1.4249472316161877]
Concerns about user agency have arisen due to algorithms' inherent opacity (information asymmetry) and one-way output (power asymmetry).
We seek to understand how types of agency impact user perception and experience, and bring empirical evidence to refine the guidelines and designs for human-AI interactive systems.
arXiv Detail & Related papers (2024-03-23T19:23:08Z)
- Complementarity in Human-AI Collaboration: Concept, Sources, and Evidence [6.571063542099526]
We develop a concept of complementarity and formalize its theoretical potential.
We identify information and capability asymmetry as the two key sources of complementarity.
Our work provides researchers with a comprehensive theoretical foundation of human-AI complementarity in decision-making.
arXiv Detail & Related papers (2024-03-21T07:27:17Z)
- A Survey on Human-AI Teaming with Large Pre-Trained Models [7.280953657497549]
Human-AI (HAI) Teaming has emerged as a cornerstone for advancing problem-solving and decision-making processes.
The advent of Large Pre-trained Models (LPtM) has significantly transformed this landscape.
These models offer unprecedented capabilities by leveraging vast amounts of data to understand and predict complex patterns.
arXiv Detail & Related papers (2024-03-07T22:37:49Z)
- Large Language Model-based Human-Agent Collaboration for Complex Task Solving [94.3914058341565]
We introduce the problem of Large Language Model (LLM)-based human-agent collaboration for complex task-solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
arXiv Detail & Related papers (2024-02-20T11:03:36Z)
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make the decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- Towards Effective Human-AI Decision-Making: The Role of Human Learning in Appropriate Reliance on AI Advice [3.595471754135419]
We show the relationship between learning and appropriate reliance in an experiment with 100 participants.
This work provides fundamental concepts for analyzing reliance and derives implications for the effective design of human-AI decision-making.
arXiv Detail & Related papers (2023-10-03T14:51:53Z)
- The Impact of Imperfect XAI on Human-AI Decision-Making [8.305869611846775]
We evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task.
Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance.
arXiv Detail & Related papers (2023-07-25T15:19:36Z)
- PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination [52.991211077362586]
We propose a policy ensemble method to increase the diversity of partners in the population.
We then develop a context-aware method enabling the ego agent to analyze and identify the partner's potential policy primitives.
In this way, the ego agent is able to learn more universal cooperative behaviors for collaborating with diverse partners.
arXiv Detail & Related papers (2023-01-16T12:14:58Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making [46.625616262738404]
We use knowledge from the field of cognitive science to account for cognitive biases in the human-AI collaborative decision-making setting.
We focus specifically on anchoring bias, a bias commonly encountered in human-AI collaboration.
arXiv Detail & Related papers (2020-10-15T22:25:41Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.