Learning Complementary Policies for Human-AI Teams
- URL: http://arxiv.org/abs/2302.02944v1
- Date: Mon, 6 Feb 2023 17:22:18 GMT
- Title: Learning Complementary Policies for Human-AI Teams
- Authors: Ruijiang Gao, Maytal Saar-Tsechansky, Maria De-Arteaga, Ligong Han,
Wei Sun, Min Kyung Lee, Matthew Lease
- Abstract summary: We propose a framework for a novel form of human-AI collaboration
for selecting advantageous courses of action. Our solution exploits human-AI
complementarity to maximize decision rewards.
- Score: 22.13683008398939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human-AI complementarity is important when neither the algorithm nor the
human yields dominant performance across all instances in a given context.
Recent work on human-AI collaboration has considered decisions that
correspond to classification tasks. However, in many important contexts where
humans can benefit from AI complementarity, humans undertake courses of action.
In this paper, we propose a framework for a novel form of human-AI collaboration
for selecting advantageous courses of action, which we refer to as Learning
Complementary Policies for Human-AI teams (\textsc{lcp-hai}). Our solution aims
to exploit human-AI complementarity to maximize decision rewards by
learning both an algorithmic policy that aims to complement humans and a routing
model that defers each decision to either the human or the AI, so as to leverage the
resulting complementarity. We then extend our approach to leverage
opportunities and mitigate risks that arise in important practical contexts:
1) when a team is composed of multiple humans with differential and potentially
complementary abilities, 2) when the observational data include consistent
deterministic actions, and 3) when the covariate distribution of future
decisions differs from that of the historical data. We demonstrate the
effectiveness of the proposed methods using real human responses and
semi-synthetic data, and find that they offer reliable and advantageous
performance across settings, superior to either humans or the algorithm
making decisions on their own. We also find that the proposed extensions
effectively improve the robustness of human-AI collaboration performance
across these challenging settings.
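To make the routing idea concrete, here is a minimal sketch of deferral based on
per-decision-maker reward models. This is an illustrative assumption, not the paper's
\textsc{lcp-hai} implementation: the simulated data, the random-forest reward models,
and the `route` helper are all hypothetical stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulated historical data: covariates X, plus the reward obtained when the
# human acted and when the algorithmic policy acted. (In real logged data only
# one of these is observed per instance; we simulate both for simplicity.)
n, d = 2000, 5
X = rng.normal(size=(n, d))
reward_human = X[:, 0] + rng.normal(scale=0.5, size=n)  # human stronger when x0 is high
reward_ai = X[:, 1] + rng.normal(scale=0.5, size=n)     # AI stronger when x1 is high

# Fit one reward model per decision-maker.
human_model = RandomForestRegressor(random_state=0).fit(X, reward_human)
ai_model = RandomForestRegressor(random_state=0).fit(X, reward_ai)

def route(x):
    """Defer each instance to whichever decision-maker has higher estimated reward."""
    x = np.atleast_2d(x)
    return np.where(human_model.predict(x) >= ai_model.predict(x), "human", "ai")

# Team reward under routing vs. either decision-maker alone.
assignments = route(X)
team_reward = np.where(assignments == "human", reward_human, reward_ai).mean()
print(f"human alone: {reward_human.mean():.3f}  "
      f"AI alone: {reward_ai.mean():.3f}  team: {team_reward:.3f}")
```

On this toy data the routed team should outperform either decision-maker alone,
mirroring the complementarity the paper targets.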
Related papers
- Unexploited Information Value in Human-AI Collaboration [23.353778024330165]
How to improve the performance of a human-AI team is often unclear without knowing what information and strategies each agent employs.
We propose a model based on statistical decision theory to analyze human-AI collaboration.
arXiv Detail & Related papers (2024-11-03T01:34:45Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Towards Optimizing Human-Centric Objectives in AI-Assisted Decision-Making With Offline Reinforcement Learning [10.08973043408929]
We propose offline reinforcement learning (RL) as a general approach for modeling human-AI decision-making.
We show that people interacting with policies optimized for accuracy achieve significantly better accuracy than those interacting with any other type of AI support.
arXiv Detail & Related papers (2024-03-09T13:30:00Z)
- On the Effect of Contextual Information on Human Delegation Behavior in Human-AI Collaboration [3.9253315480927964]
We study the effects of providing contextual information on human decisions to delegate instances to an AI.
We find that providing participants with contextual information significantly improves the human-AI team performance.
This research advances the understanding of human-AI interaction in human delegation and provides actionable insights for designing more effective collaborative systems.
arXiv Detail & Related papers (2024-01-09T18:59:47Z)
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- Towards Effective Human-AI Decision-Making: The Role of Human Learning in Appropriate Reliance on AI Advice [3.595471754135419]
We show the relationship between learning and appropriate reliance in an experiment with 100 participants.
This work provides fundamental concepts for analyzing reliance and derives implications for the effective design of human-AI decision-making.
arXiv Detail & Related papers (2023-10-03T14:51:53Z)
- PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination [52.991211077362586]
We propose a policy ensemble method to increase the diversity of partners in the population.
We then develop a context-aware method enabling the ego agent to analyze and identify the partner's potential policy primitives.
In this way, the ego agent is able to learn more universal cooperative behaviors for collaborating with diverse partners.
arXiv Detail & Related papers (2023-01-16T12:14:58Z)
- Blessing from Human-AI Interaction: Super Reinforcement Learning in Confounded Environments [19.944163846660498]
We introduce the paradigm of super reinforcement learning, which takes advantage of human-AI interaction for data-driven sequential decision making.
In the decision process with unmeasured confounding, the actions taken by past agents can offer valuable insights into undisclosed information.
We develop several super-policy learning algorithms and systematically study their theoretical properties; a toy sketch of the underlying idea follows this entry.
arXiv Detail & Related papers (2022-09-29T16:03:07Z)
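As a rough illustration of the point above, that logged human actions can leak
information about an unmeasured confounder, here is a toy sketch. The dynamics and
names are assumptions for illustration, not the authors' super-policy algorithms.

```python
import numpy as np

# Toy one-step decision problem with an unmeasured confounder u. The human
# observed u when acting, so the logged human action is a noisy proxy for u.
rng = np.random.default_rng(0)
n = 50_000
u = rng.integers(0, 2, size=n)                           # unmeasured confounder
human_action = np.where(rng.random(n) < 0.9, u, 1 - u)   # human follows u 90% of the time

def reward(action, u):
    # Acting in agreement with the hidden state u yields reward 1, else 0.
    return (action == u).astype(float)

# A standard policy cannot condition on u (there are no covariates here), so
# the best it can do is a fixed action: expected reward ~0.5.
standard_policy = np.ones(n, dtype=int)

# A "super" policy additionally conditions on the logged human action, which
# reveals u with 90% accuracy: expected reward ~0.9.
super_policy = human_action

print(f"standard: {reward(standard_policy, u).mean():.3f}")
print(f"super:    {reward(super_policy, u).mean():.3f}")
```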
- Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making [46.625616262738404]
We use knowledge from the field of cognitive science to account for cognitive biases in the human-AI collaborative decision-making setting.
We focus specifically on anchoring bias, a bias commonly encountered in human-AI collaboration.
arXiv Detail & Related papers (2020-10-15T22:25:41Z)
- Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.