The Role of Heuristics and Biases During Complex Choices with an AI Teammate
- URL: http://arxiv.org/abs/2301.05969v1
- Date: Sat, 14 Jan 2023 20:06:43 GMT
- Title: The Role of Heuristics and Biases During Complex Choices with an AI Teammate
- Authors: Nikolos Gurney, John H. Miller, David V. Pynadath
- Abstract summary: We argue that classic experimental methods are insufficient for studying complex choices made with AI helpers.
We show that framing and anchoring effects impact how people work with an AI helper and are predictive of choice outcomes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Behavioral scientists have classically documented aversion to algorithmic
decision aids, from simple linear models to AI. Sentiment, however, is changing
and possibly accelerating AI helper usage. AI assistance is, arguably, most
valuable when humans must make complex choices. We argue that classic
experimental methods used to study heuristics and biases are insufficient for
studying complex choices made with AI helpers. We adapted an experimental
paradigm designed for studying complex choices in such contexts. We show that
framing and anchoring effects impact how people work with an AI helper and are
predictive of choice outcomes. The evidence suggests that some participants,
particularly those in a loss frame, put too much faith in the AI helper and
experienced worse choice outcomes by doing so. The paradigm also generates
computational modeling-friendly data allowing future studies of human-AI
decision making.
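The abstract argues that framing and anchoring effects are predictive of choice outcomes and that the paradigm yields computational modeling-friendly data. As a minimal, purely illustrative sketch (not the authors' analysis), the snippet below regresses a binary choice outcome on a framing condition, an anchor value, and reliance on the AI helper; the column names and the synthetic data-generating process are assumptions invented for illustration.

```python
# Illustrative sketch only: hypothetical per-participant data and a logistic
# regression testing whether framing, anchoring, and AI reliance predict outcomes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

df = pd.DataFrame({
    "frame": rng.choice(["gain", "loss"], size=n),  # framing manipulation (hypothetical)
    "anchor": rng.choice([40, 80], size=n),         # anchoring manipulation (hypothetical)
    "ai_reliance": rng.uniform(0, 1, size=n),       # share of trials following the AI helper
})

# Hypothetical effect mirroring the abstract: loss-framed participants who rely
# heavily on the AI helper tend toward worse outcomes.
logit_p = 0.5 - 1.5 * (df["frame"] == "loss") * df["ai_reliance"] + 0.005 * df["anchor"]
df["good_outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression of choice outcome on the experimental factors.
model = smf.logit("good_outcome ~ C(frame) * ai_reliance + anchor", data=df).fit()
print(model.summary())
```

In real data, the frame-by-reliance interaction term is where the abstract's claim (over-reliance hurting loss-framed participants) would show up.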
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Training Towards Critical Use: Learning to Situate AI Predictions Relative to Human Knowledge [22.21959942886099]
We introduce a process-oriented notion of appropriate reliance called critical use that centers the human's ability to situate AI predictions against knowledge that is uniquely available to them but unavailable to the AI model.
We conduct a randomized online experiment in a complex social decision-making setting: child maltreatment screening.
We find that, by providing participants with accelerated, low-stakes opportunities to practice AI-assisted decision-making, novices came to exhibit patterns of disagreement with AI that resemble those of experienced workers.
arXiv Detail & Related papers (2023-08-30T01:54:31Z)
- Improving Human-AI Collaboration With Descriptions of AI Behavior [14.904401331154062]
People work with AI systems to improve their decision making, but often under- or over-rely on AI predictions and perform worse than they would have unassisted.
To help people appropriately rely on AI aids, we propose showing them behavior descriptions.
arXiv Detail & Related papers (2023-01-06T00:33:08Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- On the Influence of Explainable AI on Automation Bias [0.0]
We aim to shed light on the potential of explainable AI (XAI) to influence automation bias.
We conduct an online experiment on hotel review classification and discuss initial results.
arXiv Detail & Related papers (2022-04-19T12:54:23Z)
- Uncalibrated Models Can Improve Human-AI Collaboration [10.106324182884068]
We show that presenting AI models as more confident than they actually are can improve human-AI performance.
We first learn a model for how humans incorporate AI advice using data from thousands of human interactions.
arXiv Detail & Related papers (2022-02-12T04:51:00Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance [44.730580857733]
Prior studies observed improvements from explanations only when the AI, alone, outperformed both the human and the best team.
We conduct mixed-method user studies on three datasets, where an AI with accuracy comparable to humans helps participants solve a task.
We find explanations increase the chance that humans will accept the AI's recommendation, regardless of its correctness.
arXiv Detail & Related papers (2020-06-26T03:34:04Z)
- Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
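The last entry argues that the most accurate AI is not necessarily the best teammate when a human overseer chooses between accepting the AI's recommendation and solving the task alone. The sketch below is a hedged illustration of that accept-or-solve arithmetic, not the paper's optimization method; every probability and accuracy figure is a made-up assumption.

```python
# Illustrative sketch: expected team accuracy under an accept-or-solve-yourself
# overseer. All numbers below are hypothetical, not results from the paper.

def expected_team_accuracy(p_accept: float,
                           acc_when_accepted: float,
                           acc_when_self_solved: float) -> float:
    """Mix of AI accuracy on accepted trials and human accuracy on the rest."""
    return p_accept * acc_when_accepted + (1 - p_accept) * acc_when_self_solved

# Most-accurate AI: its errors are hard to anticipate, so acceptance is
# indiscriminate and accepted recommendations are only as good as the AI overall.
most_accurate = expected_team_accuracy(0.9, 0.92, 0.85)    # 0.913

# Team-optimized AI: lower stand-alone accuracy, but its errors fall where the
# human notices and overrides them, so accepted recommendations are rarely wrong.
team_optimized = expected_team_accuracy(0.8, 0.97, 0.85)   # 0.946

print(f"most-accurate AI team:  {most_accurate:.3f}")
print(f"team-optimized AI team: {team_optimized:.3f}")
```

Under these assumed numbers, the lower-accuracy but more complementary AI yields the higher team accuracy, which is the intuition the paper tests on real-world, high-stakes datasets.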
This list is automatically generated from the titles and abstracts of the papers in this site.