A2C: A Modular Multi-stage Collaborative Decision Framework for Human-AI
Teams
- URL: http://arxiv.org/abs/2401.14432v1
- Date: Thu, 25 Jan 2024 02:31:52 GMT
- Title: A2C: A Modular Multi-stage Collaborative Decision Framework for Human-AI
Teams
- Authors: Shahroz Tariq, Mohan Baruwal Chhetri, Surya Nepal, Cecile Paris
- Abstract summary: A2C is a multi-stage collaborative decision framework designed to enable robust decision-making within human-AI teams.
It incorporates AI systems trained to recognise uncertainty in their decisions and defer to human experts when needed.
- Score: 19.91751748232295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces A2C, a multi-stage collaborative decision framework
designed to enable robust decision-making within human-AI teams. Drawing
inspiration from concepts such as rejection learning and learning to defer, A2C
incorporates AI systems trained to recognise uncertainty in their decisions and
defer to human experts when needed. Moreover, A2C caters to scenarios where
even human experts encounter limitations, such as in incident detection and
response in cyber Security Operations Centres (SOCs). In such scenarios, A2C
facilitates collaborative explorations, enabling collective resolution of
complex challenges. With support for three distinct decision-making modes in
human-AI teams (Automated, Augmented, and Collaborative), A2C offers a flexible
platform for developing effective strategies for human-AI collaboration. By
harnessing the strengths of both humans and AI, it significantly improves the
efficiency and effectiveness of complex decision-making in dynamic and evolving
environments. To validate A2C's capabilities, we conducted extensive simulation
experiments using benchmark datasets. The results clearly demonstrate that all
three modes of decision-making can be effectively supported by A2C. Most
notably, collaborative exploration by (simulated) human experts and AI achieves
superior performance compared to AI in isolation, underscoring the framework's
potential to enhance decision-making within human-AI teams.
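The three modes map onto a staged routing policy: the AI acts alone when confident, defers to a human expert when uncertain, and falls back to joint exploration when the human is also unsure. The following is a minimal illustrative sketch, not the authors' implementation; the confidence thresholds and the helper interfaces (predict, assess, explore_with) are assumptions.

# Minimal sketch of A2C's three decision-making modes.
# Thresholds and helper interfaces are hypothetical, not from the paper.
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    AUTOMATED = "automated"          # AI decides alone
    AUGMENTED = "augmented"          # AI defers to a human expert
    COLLABORATIVE = "collaborative"  # human and AI explore together


@dataclass
class Decision:
    label: str
    mode: Mode


def a2c_decide(ai_model, human_expert, instance,
               ai_threshold: float = 0.9,
               human_threshold: float = 0.8) -> Decision:
    """Route one instance through the three A2C decision modes."""
    label, confidence = ai_model.predict(instance)

    # Automated: the AI is confident enough to decide on its own.
    if confidence >= ai_threshold:
        return Decision(label, Mode.AUTOMATED)

    # Augmented: the AI recognises its uncertainty and defers to a human.
    human_label, human_confidence = human_expert.assess(instance)
    if human_confidence >= human_threshold:
        return Decision(human_label, Mode.AUGMENTED)

    # Collaborative: neither party is confident, so they jointly
    # explore the case (e.g. a SOC incident) until it is resolved.
    joint_label = human_expert.explore_with(ai_model, instance)
    return Decision(joint_label, Mode.COLLABORATIVE)

In practice the thresholds would be tuned per deployment, and the collaborative branch would wrap whatever joint-exploration protocol the team uses.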
Related papers
- Problem Solving Through Human-AI Preference-Based Cooperation [74.39233146428492]
We propose HAI-Co2, a novel human-AI co-construction framework.
We formalize HAI-Co2 and discuss the difficult open research problems that it faces.
We present a case study of HAI-Co2 and demonstrate its efficacy compared to monolithic generative AI models.
arXiv Detail & Related papers (2024-08-14T11:06:57Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- Beyond Recommender: An Exploratory Study of the Effects of Different AI Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants with adaptive functional roles according to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z)
- Scalable Interactive Machine Learning for Future Command and Control [1.762977457426215]
Future warfare will require Command and Control (C2) personnel to make decisions at shrinking timescales.
The integration of artificial and human intelligence holds the potential to revolutionize the C2 operations process.
This paper identifies several gaps in state-of-the-art science and technology that future work should address to extend these approaches to function in complex C2 contexts.
arXiv Detail & Related papers (2024-02-09T16:11:04Z)
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make the decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- Learning Complementary Policies for Human-AI Teams [22.13683008398939]
We propose a framework for novel human-AI collaboration in selecting an advantageous course of action.
Our solution aims to exploit human-AI complementarity to maximize decision rewards.
arXiv Detail & Related papers (2023-02-06T17:22:18Z)
- Human-AI Collaboration in Decision-Making: Beyond Learning to Defer [4.874780144224057]
Human-AI collaboration (HAIC) in decision-making aims to create synergistic teaming between humans and AI systems.
Learning to Defer (L2D) has been presented as a promising framework to determine who among humans and AI should take which decisions.
L2D entails several often infeasible requirements, such as the availability of human predictions for every instance, or ground-truth labels that are independent of those decision-makers (see the sketch after this entry).
arXiv Detail & Related papers (2022-06-27T11:40:55Z)
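The expert-availability requirement criticised above is visible in the standard cross-entropy surrogate from the L2D literature, which needs the human expert's prediction for every training instance. This is a minimal illustrative sketch in the style of consistent L2D surrogates (e.g. Mozannar and Sontag), not code from any paper listed here; the (K+1)-way logits layout and the function name are assumptions.

import numpy as np


def l2d_surrogate_loss(logits: np.ndarray, y: int, expert_pred: int) -> float:
    """Cross-entropy surrogate for learning to defer (illustrative only).

    logits: shape (K+1,) scores; the last entry is the 'defer' option.
    y: ground-truth label.
    expert_pred: the human expert's prediction for this instance, which
    L2D needs at training time (the requirement the paper criticises).
    """
    p = np.exp(logits - logits.max())  # numerically stable softmax
    p /= p.sum()
    loss = -np.log(p[y] + 1e-12)       # reward predicting the true label
    if expert_pred == y:               # reward deferring only when the
        loss -= np.log(p[-1] + 1e-12)  # expert would have been correct
    return float(loss)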
- Modeling Human-AI Team Decision Making [14.368767225297585]
We present a sequence of intellective issues to a set of human groups aided by imperfect AI agents.
A group's goal was to appraise the relative expertise of the group's members and its available AI agents.
We show the value of socio-cognitive constructs of prospect theory, influence dynamics, and Bayesian learning in predicting the behavior of human-AI groups.
arXiv Detail & Related papers (2022-01-08T04:23:23Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that a confidence score can help calibrate people's trust in an AI model, but that trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)