Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making
- URL: http://arxiv.org/abs/2403.16812v1
- Date: Mon, 25 Mar 2024 14:34:06 GMT
- Title: Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making
- Authors: Shuai Ma, Qiaoyi Chen, Xinru Wang, Chengbo Zheng, Zhenhui Peng, Ming Yin, Xiaojuan Ma
- Abstract summary: In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework that promotes human reflection and discussion on conflicting human-AI opinions in decision-making.
Drawing on theories of human deliberation, the framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
- Score: 47.33241893184721
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole. In such a paradigm, humans rarely engage in analytical thinking and struggle to communicate the nuances of conflicting opinions to the AI when disagreements occur. To tackle this challenge, we propose Human-AI Deliberation, a novel framework that promotes human reflection and discussion on conflicting human-AI opinions in decision-making. Drawing on theories of human deliberation, the framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates. To empower the AI with deliberative capabilities, we designed Deliberative AI, which leverages large language models (LLMs) as a bridge between humans and domain-specific models, enabling flexible conversational interaction and faithful information provision. An exploratory evaluation on a graduate admissions task shows that Deliberative AI outperforms conventional explainable AI (XAI) assistants in improving humans' appropriate reliance and task performance. Based on a mixed-methods analysis of participant behavior, perception, user experience, and open-ended feedback, we draw implications for the design of future AI-assisted decision tools.
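To make the three-stage loop concrete, the following minimal Python sketch shows one way it could be wired together. All names, the rule-based stand-in for the domain-specific model, and the canned mediator reply are hypothetical illustrations, not the authors' implementation.

```python
# A minimal, runnable sketch of the three-stage loop described above:
# dimension-level opinion elicitation, deliberative discussion, and
# decision update. Everything here is an illustrative assumption,
# not the paper's actual system.

from dataclasses import dataclass

@dataclass
class Opinion:
    dimension: str   # e.g. "GPA" or "research experience"
    stance: str      # "favor" or "oppose"
    rationale: str

def domain_model(applicant: dict) -> list[Opinion]:
    """Stand-in for a domain-specific model that scores each dimension."""
    return [
        Opinion(d, "favor" if v >= 0.5 else "oppose", f"model score {v:.2f}")
        for d, v in applicant.items()
    ]

def llm_bridge(human: Opinion, ai: Opinion) -> str:
    """Stand-in for the LLM mediator; a real system would prompt an LLM
    with both rationales so its replies stay faithful to the model."""
    return (f"On '{human.dimension}' you said {human.stance} "
            f"({human.rationale}), but the model says {ai.stance} "
            f"({ai.rationale}). Which evidence weighs more for you, and why?")

def deliberate(applicant: dict, human_opinions: list[Opinion]) -> None:
    ai_opinions = {o.dimension: o for o in domain_model(applicant)}
    for h in human_opinions:          # discuss only conflicting dimensions
        a = ai_opinions[h.dimension]
        if h.stance != a.stance:
            print(llm_bridge(h, a))   # human replies, then updates decision

deliberate(
    {"GPA": 0.8, "research experience": 0.3},
    [Opinion("GPA", "favor", "strong transcript"),
     Opinion("research experience", "favor", "two first-author papers")],
)
```

The design point the abstract stresses is the LLM's bridging role: it converses flexibly with the human while grounding what it says in the domain model's outputs (the "faithful information provision") rather than generating assessments on its own.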
Related papers
- Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have produced technology that can support humans in scientific discovery and decision-making, but that may also disrupt democracies and target individuals.
The responsible use of AI increasingly highlights the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary [19.884253335528317]
Recent advances in AI models have increased the integration of AI-based decision aids into the human decision making process.
To fully unlock the potential of AI-assisted decision making, researchers have computationally modeled how humans incorporate AI recommendations into their final decisions.
Providing AI explanations to human decision makers to help them rely on AI recommendations more appropriately has become a common practice.
arXiv Detail & Related papers (2024-11-02T18:33:28Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Beyond Recommender: An Exploratory Study of the Effects of Different AI Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants whose functional roles adapt to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z)
- Decoding AI's Nudge: A Unified Framework to Predict Human Behavior in AI-assisted Decision Making [24.258056813524167]
We propose a computational framework that can provide an interpretable characterization of the influence of different forms of AI assistance on decision makers.
By conceptualizing AI assistance as a "nudge" in human decision-making processes, our approach centers on modeling how different forms of AI assistance modify the strategies humans use to weigh different information when making decisions.
arXiv Detail & Related papers (2024-01-11T11:22:36Z)
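One way to read the weighting idea in the entry above formally is as an evidence-weighting decision rule whose weights the AI assistance perturbs. The toy sketch below takes that reading; the logistic form and the specific weights are illustrative assumptions, not the paper's actual framework.

```python
# A toy formalization of AI assistance as a "nudge": the assistance
# shifts the weights a decision maker places on different pieces of
# case information, here inside a simple logistic decision rule.
# The weights and functional form are illustrative assumptions.

import math

def decision_prob(features, weights):
    """P(positive decision) under a logistic weighting of the evidence."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

features = [1.2, -0.4, 0.8]    # pieces of case information
unaided  = [0.5, 0.5, 0.5]     # how the human weighs them without AI
nudged   = [1.0, 0.5, 0.3]     # an AI recommendation up-weights feature 0

print(f"unaided decision probability: {decision_prob(features, unaided):.2f}")
print(f"nudged decision probability:  {decision_prob(features, nudged):.2f}")
```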
- Training Towards Critical Use: Learning to Situate AI Predictions Relative to Human Knowledge [22.21959942886099]
We introduce a process-oriented notion of appropriate reliance called critical use that centers the human's ability to situate AI predictions against knowledge that is uniquely available to them but unavailable to the AI model.
We conduct a randomized online experiment in a complex social decision-making setting: child maltreatment screening.
We find that, when given accelerated, low-stakes opportunities to practice AI-assisted decision-making, novices come to exhibit patterns of disagreement with AI that resemble those of experienced workers.
arXiv Detail & Related papers (2023-08-30T01:54:31Z)
- The Impact of Imperfect XAI on Human-AI Decision-Making [8.305869611846775]
We evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task.
Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance.
arXiv Detail & Related papers (2023-07-25T15:19:36Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Modeling Human-AI Team Decision Making [14.368767225297585]
We present a sequence of intellective issues to a set of human groups aided by imperfect AI agents.
Each group's goal was to appraise the relative expertise of its members and of the available AI agents.
We show the value of the socio-cognitive constructs of prospect theory, influence dynamics, and Bayesian learning in predicting the behavior of human-AI groups.
arXiv Detail & Related papers (2022-01-08T04:23:23Z)
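As one concrete reading of the Bayesian-learning construct named above, appraising relative expertise can be cast as Beta-Bernoulli updating of each member's accuracy. The sketch below illustrates that reading under stated assumptions; it is not the paper's model.

```python
# A minimal Bayesian-learning illustration: each member (human or AI
# agent) gets a Beta prior over their accuracy, updated as answers are
# revealed to be right or wrong. The setup is an illustrative
# assumption, not the paper's actual model.

priors = {"human_A": [1, 1], "human_B": [1, 1], "ai_agent": [1, 1]}

def observe(member: str, correct: bool) -> None:
    """Beta(a, b) -> Beta(a + correct, b + incorrect) conjugate update."""
    a, b = priors[member]
    priors[member] = [a + int(correct), b + int(not correct)]

for member, correct in [("ai_agent", True), ("ai_agent", True),
                        ("human_A", False), ("human_B", True)]:
    observe(member, correct)

for member, (a, b) in priors.items():
    print(f"{member}: posterior mean accuracy {a / (a + b):.2f}")
```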
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.