Human-AI Collaboration in Decision-Making: Beyond Learning to Defer
- URL: http://arxiv.org/abs/2206.13202v1
- Date: Mon, 27 Jun 2022 11:40:55 GMT
- Title: Human-AI Collaboration in Decision-Making: Beyond Learning to Defer
- Authors: Diogo Leitão, Pedro Saleiro, Mário A.T. Figueiredo, Pedro Bizarro
- Abstract summary: Human-AI collaboration (HAIC) in decision-making aims to create synergistic teaming between humans and AI systems.
Learning to Defer (L2D) has been presented as a promising framework to determine who among humans and AI should take which decisions.
L2D entails several often unfeasible requirements, such as the availability of human predictions for every instance, or ground-truth labels independent of those same decision-makers.
- Score: 4.874780144224057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human-AI collaboration (HAIC) in decision-making aims to create synergistic teaming between human decision-makers and AI systems. Learning to Defer (L2D) has been presented as a promising framework to determine who among humans and AI should take which decisions in order to optimize the performance and fairness of the combined system. Nevertheless, L2D entails several often unfeasible requirements, such as the availability of predictions from humans for every instance or ground-truth labels independent from said decision-makers. Furthermore, neither L2D nor alternative approaches tackle fundamental issues of deploying HAIC in real-world settings, such as capacity management or dealing with dynamic environments. In this paper, we aim to identify and review these and other limitations, pointing to where opportunities for future research in HAIC may lie.
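To make the L2D setup concrete, here is a minimal sketch of the kind of routing rule an L2D system applies at deployment time, assuming a classifier that outputs a confidence score. The data and hand-set threshold are illustrative only; in actual L2D the rejector is learned jointly with the classifier rather than thresholded by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative confidence scores for ten instances; in a real L2D system
# these would come from the trained classifier.
ai_confidence = rng.uniform(0.5, 1.0, size=10)

# Deferral rule: the AI decides when it is confident, otherwise the
# instance is routed to a human decision-maker. A fixed threshold is a
# simplification of the learned rejector.
THRESHOLD = 0.8
for conf in ai_confidence:
    decider = "AI" if conf >= THRESHOLD else "human"
    print(f"confidence={conf:.2f} -> decided by {decider}")
```

Training such a rejector is exactly where the requirements criticized above bite: learning when the human outperforms the AI presupposes human predictions for every training instance.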
Related papers
- Coverage-Constrained Human-AI Cooperation with Multiple Experts [21.247853435529446]
We propose the Coverage-constrained Learning to Defer and Complement with Specific Experts (CL2DC) method.
CL2DC makes final decisions through either AI prediction alone or by deferring to or complementing a specific expert.
It achieves superior performance compared to state-of-the-art HAI-CC methods.
arXiv Detail & Related papers (2024-11-18T19:06:01Z)
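As a rough illustration of the coverage constraint in the CL2DC entry above (not of its actual learning procedure), one can pick a confidence cutoff as a quantile so that the AI alone handles a fixed fraction of instances; all values below are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
ai_confidence = rng.uniform(size=1000)  # stand-in confidence scores

# Coverage constraint: the AI must decide alone on 70% of instances.
target_coverage = 0.7

# Taking the (1 - coverage) quantile as the cutoff leaves the desired
# fraction at or above it; the remaining cases would be deferred to,
# or complemented by, a specific expert.
cutoff = np.quantile(ai_confidence, 1.0 - target_coverage)
handled_by_ai = ai_confidence >= cutoff
print(f"cutoff={cutoff:.3f}, realized coverage={handled_by_ai.mean():.2f}")
```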
- Problem Solving Through Human-AI Preference-Based Cooperation [74.39233146428492]
We propose HAI-Co2, a novel human-AI co-construction framework.
We formalize HAI-Co2 and discuss the difficult open research problems that it faces.
We present a case study of HAI-Co2 and demonstrate its efficacy compared to monolithic generative AI models.
arXiv Detail & Related papers (2024-08-14T11:06:57Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
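A minimal sketch of the dimension-level opinion elicitation described above; the loan-approval scenario, dimension names, and labels are hypothetical, not taken from the paper.

```python
# Hypothetical per-dimension opinions for a single loan-approval case.
human_view = {"income_stability": "approve", "credit_history": "reject",
              "debt_ratio": "approve"}
ai_view = {"income_stability": "approve", "credit_history": "approve",
           "debt_ratio": "reject"}

# Deliberation focuses on the dimensions where the two opinions conflict;
# agreement elsewhere needs no discussion.
conflicts = [dim for dim in human_view if human_view[dim] != ai_view[dim]]
print("dimensions to deliberate:", conflicts)  # credit_history, debt_ratio
```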
- Towards Optimizing Human-Centric Objectives in AI-Assisted Decision-Making With Offline Reinforcement Learning [10.08973043408929]
We propose offline reinforcement learning (RL) as a general approach for modeling human-AI decision-making.
We show that people interacting with policies optimized for accuracy achieve significantly better accuracy than those interacting with any other type of AI support.
arXiv Detail & Related papers (2024-03-09T13:30:00Z)
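The toy sketch below conveys the flavor of learning a support policy from logged interaction data. The states, actions, and rewards are invented, and the one-step averaging is a degenerate (horizon-one) case of offline RL, not the paper's method.

```python
from collections import defaultdict

# Invented logs of (state, action, reward): state 1 means the human agreed
# with the AI on the previous case; the action is whether to show an
# explanation alongside the recommendation.
logged = [(0, "plain", 0.2), (0, "explain", 0.6), (1, "plain", 0.5),
          (1, "explain", 0.9), (0, "explain", 0.7), (1, "plain", 0.4)]

# Horizon-one offline value estimates: average reward per (state, action).
totals, counts = defaultdict(float), defaultdict(int)
for s, a, r in logged:
    totals[(s, a)] += r
    counts[(s, a)] += 1

# Greedy policy derived from the logged data alone, with no new interaction.
for s in (0, 1):
    best = max(("plain", "explain"), key=lambda a: totals[(s, a)] / counts[(s, a)])
    print(f"state {s}: choose '{best}'")
```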
- A Decision Theoretic Framework for Measuring AI Reliance [23.353778024330165]
Humans frequently make decisions with the aid of artificially intelligent (AI) systems.
Researchers have identified ensuring appropriate human reliance on the AI as a critical component of achieving complementary performance.
We propose a formal definition of reliance, based on statistical decision theory, which isolates reliance as the probability that the decision-maker follows the AI's recommendation.
arXiv Detail & Related papers (2024-01-27T09:13:09Z)
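Under that definition, reliance can be estimated from behavioral logs as a simple frequency; the data below is hypothetical.

```python
# Hypothetical logs: the AI's recommendation and the human's final decision.
ai_rec = [1, 0, 1, 1, 0, 1, 0, 1]
final  = [1, 0, 0, 1, 1, 1, 0, 1]

# Empirical reliance: the fraction of cases in which the final decision
# follows the AI's recommendation, estimating the probability in the
# paper's definition.
reliance = sum(a == f for a, f in zip(ai_rec, final)) / len(ai_rec)
print(f"estimated reliance: {reliance:.2f}")  # 6/8 = 0.75
```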
- A2C: A Modular Multi-stage Collaborative Decision Framework for Human-AI Teams [19.91751748232295]
A2C is a multi-stage collaborative decision framework designed to enable robust decision-making within human-AI teams.
It incorporates AI systems trained to recognise uncertainty in their decisions and defer to human experts when needed.
arXiv Detail & Related papers (2024-01-25T02:31:52Z)
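A small sketch of the defer-when-uncertain idea, using predictive entropy as the uncertainty signal; the threshold and the entropy criterion are our illustrative choices, not A2C's exact mechanism.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Stage 1: the AI decides alone when its uncertainty is low.
# Stage 2: uncertain cases are escalated to a human expert.
ENTROPY_THRESHOLD = 0.5
for probs in ([0.95, 0.05], [0.55, 0.45]):
    route = "AI decides" if entropy(probs) < ENTROPY_THRESHOLD else "defer to human"
    print(f"p={probs} entropy={entropy(probs):.2f} -> {route}")
```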
- Decision-Oriented Dialogue for Human-AI Collaboration [62.367222979251444]
We describe a class of tasks called decision-oriented dialogues, in which AI assistants such as large language models (LMs) must collaborate with one or more humans via natural language to help them make complex decisions.
We formalize three domains in which users face everyday decisions: (1) choosing an assignment of reviewers to conference papers, (2) planning a multi-step itinerary in a city, and (3) negotiating travel plans for a group of friends.
For each task, we build a dialogue environment where agents receive a reward based on the quality of the final decision they reach.
arXiv Detail & Related papers (2023-05-31T17:50:02Z)
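That reward structure can be sketched as follows for the reviewer-assignment domain, with made-up affinity scores standing in for the environment's actual scoring.

```python
# Made-up reviewer-paper affinities; the environment scores the final
# assignment the dialogue participants agree on.
affinity = {("r1", "pA"): 0.9, ("r1", "pB"): 0.2,
            ("r2", "pA"): 0.4, ("r2", "pB"): 0.8}

def episode_reward(assignment):
    """Reward is the quality of the final decision, here total affinity."""
    return sum(affinity[pair] for pair in assignment)

final_assignment = [("r1", "pA"), ("r2", "pB")]  # outcome of the dialogue
print(f"episode reward: {episode_reward(final_assignment):.2f}")  # 1.70
```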
- Learning Complementary Policies for Human-AI Teams [22.13683008398939]
We propose a novel human-AI collaboration framework for selecting an advantageous course of action.
Our solution aims to exploit the human-AI complementarity to maximize decision rewards.
arXiv Detail & Related papers (2023-02-06T17:22:18Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
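A confidence score can only calibrate trust, as the entry above studies, if it is itself calibrated, i.e. stated confidence tracks empirical accuracy; the check below uses invented numbers.

```python
import numpy as np

# Invented model outputs: stated confidence and whether the model was right.
confidence = np.array([0.9, 0.8, 0.95, 0.6, 0.7, 0.85])
correct = np.array([1, 1, 1, 0, 1, 0])

# Within a high-confidence bucket, stated confidence should match accuracy;
# a large gap signals miscalibration and thus misleading trust cues.
bucket = confidence >= 0.8
print(f"high-confidence accuracy: {correct[bucket].mean():.2f} vs "
      f"mean stated confidence: {confidence[bucket].mean():.2f}")
```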
This list is automatically generated from the titles and abstracts of the papers on this site.