Confounding-Robust Policy Improvement with Human-AI Teams
- URL: http://arxiv.org/abs/2310.08824v1
- Date: Fri, 13 Oct 2023 02:39:52 GMT
- Title: Confounding-Robust Policy Improvement with Human-AI Teams
- Authors: Ruijiang Gao, Mingzhang Yin
- Abstract summary: We propose a novel solution to address unobserved confounding in human-AI collaboration by employing the marginal sensitivity model (MSM).
Our approach combines domain expertise with AI-driven statistical modeling to account for potential confounders that may otherwise remain hidden.
- Score: 9.823906892919746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human-AI collaboration has the potential to transform various domains by
leveraging the complementary strengths of human experts and Artificial
Intelligence (AI) systems. However, unobserved confounding can undermine the
effectiveness of this collaboration, leading to biased and unreliable outcomes.
In this paper, we propose a novel solution to address unobserved confounding in
human-AI collaboration by employing the marginal sensitivity model (MSM). Our
approach combines domain expertise with AI-driven statistical modeling to
account for potential confounders that may otherwise remain hidden. We present
a deferral collaboration framework for incorporating the MSM into policy
learning from observational data, enabling the system to control for the
influence of unobserved confounding factors. In addition, we propose a
personalized deferral collaboration system to leverage the diverse expertise of
different human decision-makers. By adjusting for potential biases, our
proposed solution enhances the robustness and reliability of collaborative
outcomes. The empirical and theoretical analyses demonstrate the efficacy of
our approach in mitigating unobserved confounding and improving the overall
performance of human-AI collaborations.
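For context, the marginal sensitivity model referenced above is conventionally stated (following Tan, 2006) as an odds-ratio bound linking the nominal propensity e(x) = P(T = 1 | X = x), which is estimable from observational data, to the complete propensity e(x, u) = P(T = 1 | X = x, U = u) that also conditions on the unobserved confounder U. The notation below is the standard convention rather than a quotation from this paper; Λ ≥ 1 is the analyst-chosen sensitivity parameter.

```latex
% Marginal sensitivity model (MSM): the unobserved confounder U may shift
% the treatment odds by at most a factor of \Lambda in either direction.
\Lambda^{-1}
  \;\le\;
  \frac{e(x,u)\,\bigl(1 - e(x)\bigr)}{e(x)\,\bigl(1 - e(x,u)\bigr)}
  \;\le\;
  \Lambda
  \qquad \text{for all } x, u.
```

At Λ = 1 this collapses to the standard unconfoundedness assumption; larger Λ admits a wider set of plausible propensities, and a confounding-robust policy is one whose estimated value holds up in the worst case over that set. To make the deferral idea concrete, here is a minimal sketch, under the assumptions above, of how MSM bounds on the inverse propensity translate into a pessimistic policy-value estimate that can drive a defer-to-human rule. The function names, the non-normalized IPW estimator, and the aggregate human-baseline comparison are illustrative simplifications, not the paper's actual estimator or its personalized deferral objective.

```python
import numpy as np

def msm_weight_bounds(e_hat, lam):
    """Bounds on the inverse propensity 1/e(x, u) implied by the MSM.

    e_hat: nominal propensity of the action actually logged, P(T = t | X).
    lam:   sensitivity parameter Lambda >= 1 (lam = 1 recovers plain IPW).
    """
    odds = e_hat / (1.0 - e_hat)
    w_lo = 1.0 + 1.0 / (lam * odds)  # confounder raised the true odds by Lambda
    w_hi = 1.0 + lam / odds          # confounder lowered the true odds by Lambda
    return w_lo, w_hi

def worst_case_value(y, agree, e_hat, lam):
    """Pessimistic IPW value of a candidate AI policy on logged data.

    y:     observed outcomes (higher is better)
    agree: 1 where the candidate policy matches the logged action, else 0
    Per instance, an adversary picks the weight in [w_lo, w_hi] that
    minimizes the estimate, shrinking gains and inflating losses.
    (A normalized, fractional-program variant would be tighter.)
    """
    w_lo, w_hi = msm_weight_bounds(e_hat, lam)
    w_adv = np.where(y >= 0.0, w_lo, w_hi)
    return float(np.mean(agree * w_adv * y))

# Defer-to-human rule on a hypothetical logged dataset: route the task to
# the human whenever the AI policy's worst-case value falls below an
# (assumed) human baseline value.
rng = np.random.default_rng(0)
n = 1_000
e_hat = rng.uniform(0.2, 0.8, n)   # nominal propensities
y = rng.normal(0.3, 1.0, n)        # observed outcomes
agree = rng.integers(0, 2, n)      # AI policy agrees with logged action?
human_value = 0.10                 # assumed human baseline value
defer = worst_case_value(y, agree, e_hat, lam=1.5) < human_value
print("defer to human:", defer)
```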
Related papers
- Mutual Theory of Mind in Human-AI Collaboration: An Empirical Study with LLM-driven AI Agents in a Real-time Shared Workspace Task [56.92961847155029]
Theory of Mind (ToM), a crucial capability for understanding others, significantly impacts human collaboration and communication.
Mutual Theory of Mind (MToM) arises when AI agents with ToM capability collaborate with humans.
We find that the agent's ToM capability does not significantly impact team performance but enhances human understanding of the agent.
arXiv Detail & Related papers (2024-09-13T13:19:48Z)
- Evaluating Human-AI Collaboration: A Review and Methodological Framework [4.41358655687435]
The use of artificial intelligence (AI) in working environments alongside individuals, known as Human-AI Collaboration (HAIC), has become essential.
However, evaluating HAIC's effectiveness remains challenging due to the complex interaction of the components involved.
This paper provides a detailed analysis of existing HAIC evaluation approaches and develops a fresh paradigm for more effectively evaluating these systems.
arXiv Detail & Related papers (2024-07-09T12:52:22Z)
- Cognitive Insights and Stable Coalition Matching for Fostering Multi-Agent Cooperation [6.536780912510439]
We propose a novel coalition-matching mechanism that leverages the strengths of agents with different ToM levels.
Our work demonstrates the potential of leveraging ToM to create more sophisticated and human-like coordination strategies.
arXiv Detail & Related papers (2024-05-28T10:59:33Z)
- Complementarity in Human-AI Collaboration: Concept, Sources, and Evidence [6.571063542099526]
We develop a concept of complementarity and formalize its theoretical potential.
We identify information and capability asymmetry as the two key sources of complementarity.
Our work provides researchers with a comprehensive theoretical foundation of human-AI complementarity in decision-making.
arXiv Detail & Related papers (2024-03-21T07:27:17Z)
- A Survey on Human-AI Teaming with Large Pre-Trained Models [7.280953657497549]
Human-AI (HAI) Teaming has emerged as a cornerstone for advancing problem-solving and decision-making processes.
The advent of Large Pre-trained Models (LPtM) has significantly transformed this landscape.
These models offer unprecedented capabilities by leveraging vast amounts of data to understand and predict complex patterns.
arXiv Detail & Related papers (2024-03-07T22:37:49Z)
- Large Language Model-based Human-Agent Collaboration for Complex Task Solving [94.3914058341565]
We introduce the problem of Large Language Model (LLM)-based human-agent collaboration for complex task-solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
arXiv Detail & Related papers (2024-02-20T11:03:36Z)
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make the decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- The Impact of Imperfect XAI on Human-AI Decision-Making [8.305869611846775]
We evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task.
Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance.
arXiv Detail & Related papers (2023-07-25T15:19:36Z)
- PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination [52.991211077362586]
We propose a policy ensemble method to increase the diversity of partners in the population.
We then develop a context-aware method enabling the ego agent to analyze and identify the partner's potential policy primitives.
In this way, the ego agent is able to learn more universal cooperative behaviors for collaborating with diverse partners.
arXiv Detail & Related papers (2023-01-16T12:14:58Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making [46.625616262738404]
We use knowledge from the field of cognitive science to account for cognitive biases in the human-AI collaborative decision-making setting.
We focus specifically on anchoring bias, a bias commonly encountered in human-AI collaboration.
arXiv Detail & Related papers (2020-10-15T22:25:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.