Decoding AI's Nudge: A Unified Framework to Predict Human Behavior in
AI-assisted Decision Making
- URL: http://arxiv.org/abs/2401.05840v1
- Date: Thu, 11 Jan 2024 11:22:36 GMT
- Title: Decoding AI's Nudge: A Unified Framework to Predict Human Behavior in
AI-assisted Decision Making
- Authors: Zhuoyan Li, Zhuoran Lu, Ming Yin
- Abstract summary: We propose a computational framework that can provide an interpretable characterization of the influence of different forms of AI assistance on decision makers.
By conceptualizing AI assistance as the ``nudge'' in human decision making processes, our approach centers around modelling how different forms of AI assistance modify humans' strategy in weighing different information in making their decisions.
- Score: 24.258056813524167
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: With the rapid development of AI-based decision aids, different forms of AI
assistance have been increasingly integrated into the human decision making
processes. To best support humans in decision making, it is essential to
quantitatively understand how diverse forms of AI assistance influence humans'
decision making behavior. To this end, much of the current research focuses on
the end-to-end prediction of human behavior using ``black-box'' models, often
lacking interpretations of the nuanced ways in which AI assistance impacts the
human decision making process. Meanwhile, methods that prioritize the
interpretability of human behavior predictions are often tailored for one
specific form of AI assistance, making adaptations to other forms of assistance
difficult. In this paper, we propose a computational framework that can provide
an interpretable characterization of the influence of different forms of AI
assistance on decision makers in AI-assisted decision making. By
conceptualizing AI assistance as the ``nudge'' in human decision making
processes, our approach centers around modelling how different forms of AI
assistance modify humans' strategy in weighing different information in making
their decisions. Evaluations on behavior data collected from real human
decision makers show that the proposed framework outperforms various baselines
in accurately predicting human behavior in AI-assisted decision making. Based
on the proposed framework, we further provide insights into how individuals
with different cognitive styles are nudged by AI assistance differently.
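To make the ``nudge as strategy reweighting'' idea concrete, below is a minimal sketch assuming a logistic decision model: the human weighs task features with a baseline weight vector, and a form of AI assistance is modeled as an additive shift to those weights. All function names, weights, and numbers here are illustrative assumptions, not the paper's actual parameterization, which is learned from behavior data.

```python
import numpy as np

def human_decision_prob(x, w_base, nudge_delta=None):
    """Probability that the human chooses the positive option.

    x: feature vector describing the decision task
    w_base: the human's baseline strategy (weights on each feature)
    nudge_delta: hypothetical shift in weights induced by one form of
                 AI assistance (in the paper, inferred from behavior data)
    """
    w = w_base + (nudge_delta if nudge_delta is not None else 0.0)
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

# Illustrative example: an AI recommendation that nudges the human to
# weigh feature 0 more heavily and feature 2 less.
x = np.array([0.8, -0.3, 0.5])
w_base = np.array([1.0, 0.5, 1.2])   # unassisted weighing strategy
nudge = np.array([0.6, 0.0, -0.4])   # assumed effect of the AI nudge

p_independent = human_decision_prob(x, w_base)
p_assisted = human_decision_prob(x, w_base, nudge)
print(f"P(choose positive) without AI: {p_independent:.3f}")
print(f"P(choose positive) with AI nudge: {p_assisted:.3f}")
```

Under this reading, each form of assistance (e.g., showing a recommendation versus an explanation) would correspond to a different learned weight shift, which is what makes the characterization interpretable.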
Related papers
- Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals.
The responsible use of AI increasingly shows the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary [19.884253335528317]
Recent advances in AI models have increased the integration of AI-based decision aids into the human decision making process.
To fully unlock the potential of AI-assisted decision making, researchers have computationally modeled how humans incorporate AI recommendations into their final decisions.
Providing AI explanations to human decision makers to help them rely on AI recommendations more appropriately has become a common practice.
arXiv Detail & Related papers (2024-11-02T18:33:28Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- The Impact of Imperfect XAI on Human-AI Decision-Making [8.305869611846775]
We evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task.
Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance.
arXiv Detail & Related papers (2023-07-25T15:19:36Z)
- Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone for a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
- Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations [44.01143305912054]
We study how decision-makers' intuition affects their use of AI predictions and explanations.
Our results identify three types of intuition involved in reasoning about AI predictions and explanations.
We use these pathways to explain why feature-based explanations did not improve participants' decision outcomes and increased their overreliance on AI.
arXiv Detail & Related papers (2023-01-18T01:33:50Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.