Understanding the Effect of Out-of-distribution Examples and Interactive
Explanations on Human-AI Decision Making
- URL: http://arxiv.org/abs/2101.05303v2
- Date: Wed, 27 Jan 2021 19:02:32 GMT
- Title: Understanding the Effect of Out-of-distribution Examples and Interactive
Explanations on Human-AI Decision Making
- Authors: Han Liu, Vivian Lai, Chenhao Tan
- Abstract summary: We argue that the typical experimental setup limits the potential of human-AI teams.
We develop novel interfaces to support interactive explanations so that humans can actively engage with AI assistance.
- Score: 19.157591744997355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although AI holds promise for improving human decision making in societally
critical domains, it remains an open question how human-AI teams can reliably
outperform AI alone and human alone in challenging prediction tasks (also known
as complementary performance). We explore two directions to understand the gaps
in achieving complementary performance. First, we argue that the typical
experimental setup limits the potential of human-AI teams. To account for lower
AI performance out-of-distribution than in-distribution because of distribution
shift, we design experiments with different distribution types and investigate
human performance for both in-distribution and out-of-distribution examples.
Second, we develop novel interfaces to support interactive explanations so that
humans can actively engage with AI assistance. Using an in-person user study and
large-scale randomized experiments across three tasks, we demonstrate a clear
difference between in-distribution and out-of-distribution, and observe mixed
results for interactive explanations: while interactive explanations improve
human perception of AI assistance's usefulness, they may magnify human biases
and lead to limited performance improvement. Overall, our work points out
critical challenges and future directions towards complementary performance.
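
The distribution-shift setup above lends itself to a quick illustration. The sketch below is a toy scikit-learn task, not the paper's actual datasets; the synthetic covariate shift and every parameter are invented. It shows why an AI assistant that looks strong in-distribution can degrade out-of-distribution, which is the gap the paper's experimental conditions are designed to expose.

```python
# Toy illustration (not the paper's setup): a classifier trained on one
# distribution loses accuracy under covariate shift, motivating separate
# in-distribution (ID) and out-of-distribution (OOD) evaluation conditions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two-class toy task; `shift` moves every feature (covariate shift)."""
    y = rng.integers(0, 2, size=n)
    centers = np.where(y[:, None] == 1, 1.0, -1.0)  # class means at +1 / -1
    X = centers + shift + rng.normal(scale=1.0, size=(n, 2))
    return X, y

X_train, y_train = sample(2000)
X_id, y_id = sample(1000)               # drawn from the training distribution
X_ood, y_ood = sample(1000, shift=1.5)  # covariate-shifted examples

model = LogisticRegression().fit(X_train, y_train)
print("ID accuracy: ", accuracy_score(y_id, model.predict(X_id)))
print("OOD accuracy:", accuracy_score(y_ood, model.predict(X_ood)))
```

On this toy task the OOD accuracy drops well below the ID accuracy, even though the model itself is unchanged; the paper studies how human decision makers behave on exactly this kind of split.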
Related papers
- Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks [45.23431596135002]
This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task.
Results indicated that partner-aware explanations influenced participants differently based on the type of artificial agents involved.
arXiv Detail & Related papers (2024-11-15T13:22:04Z)
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- On the Effect of Contextual Information on Human Delegation Behavior in Human-AI Collaboration [3.9253315480927964]
We study the effects of providing contextual information on human decisions to delegate instances to an AI.
We find that providing participants with contextual information significantly improves the human-AI team performance.
This research advances the understanding of human-AI interaction in human delegation and provides actionable insights for designing more effective collaborative systems.
arXiv Detail & Related papers (2024-01-09T18:59:47Z)
- Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review [6.013543974938446]
Work on leveraging Artificial Intelligence in decision support systems has disproportionately focused on technological advancements.
A human-centered perspective attempts to alleviate this concern by designing AI solutions for seamless integration with existing processes.
arXiv Detail & Related papers (2023-10-30T17:46:38Z)
- BO-Muse: A human expert and AI teaming framework for accelerated experimental design [58.61002520273518]
Our algorithm lets the human expert take the lead in the experimental process.
We show that our algorithm converges sub-linearly, at a rate faster than the AI or human alone.
arXiv Detail & Related papers (2023-03-03T02:56:05Z)
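
BO-Muse's actual method is a Bayesian-optimization framework with convergence guarantees, which a summary cannot reproduce; still, a toy loop can show the teaming pattern its abstract describes, with the expert leading and the AI filling in. Everything below is a hypothetical stand-in: `run_experiment`, the naive perturb-the-best "acquisition", and the simulated expert are invented for illustration and are not the BO-Muse algorithm.

```python
# Hypothetical sketch of a human-lead / AI-assist experimental-design loop.
# NOT the BO-Muse algorithm: the "AI" is a crude exploit-around-best heuristic
# standing in for a Bayesian-optimization surrogate and acquisition function.
import random
random.seed(0)

def run_experiment(x):
    """Stand-in for an expensive real experiment; unknown optimum at x = 0.7."""
    return -(x - 0.7) ** 2

def ai_suggest(history):
    """Placeholder acquisition step: perturb the best design seen so far."""
    best_x = max(history, key=lambda h: h[1])[0]
    return min(1.0, max(0.0, best_x + random.gauss(0, 0.1)))

def human_suggest(history):
    """Placeholder expert: intuition narrows the search to a promising range."""
    return random.uniform(0.4, 1.0)

x0 = random.random()
history = [(x0, run_experiment(x0))]
for round_no in range(10):
    # The expert takes the lead on even rounds; the AI fills in on odd rounds.
    propose = human_suggest if round_no % 2 == 0 else ai_suggest
    x = propose(history)
    history.append((x, run_experiment(x)))

print("best design found:", max(history, key=lambda h: h[1]))
```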
- Learning Complementary Policies for Human-AI Teams [22.13683008398939]
We propose a framework for a novel human-AI collaboration that selects an advantageous course of action.
Our solution aims to exploit human-AI complementarity to maximize decision rewards.
arXiv Detail & Related papers (2023-02-06T17:22:18Z)
- Selective Explanations: Leveraging Human Input to Align Explainable AI [40.33998268146951]
We propose a general framework for generating selective explanations by leveraging human input on a small sample.
As a showcase, we use a decision-support task to explore selective explanations based on what the decision-maker would consider relevant to the decision task.
Our experiments demonstrate the promise of selective explanations in reducing over-reliance on AI.
arXiv Detail & Related papers (2023-01-23T19:00:02Z)
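
As a hypothetical toy version of the selective-explanations idea above (the paper's framework is more general, and every feature name and number below is invented): filter a model's feature attributions down to the features a small human sample judged decision-relevant, and show only those.

```python
# Toy sketch: keep only attribution features that humans rated relevant.
# Attribution scores and relevance ratings are invented for illustration.

# Hypothetical model attributions for one prediction (feature -> importance).
attributions = {"prior_convictions": 0.42, "age": 0.31,
                "zip_code": 0.25, "charge_degree": 0.18}

# Fraction of a small human sample that judged each feature decision-relevant.
human_relevance = {"prior_convictions": 0.9, "charge_degree": 0.8,
                   "age": 0.4, "zip_code": 0.1}

def selective_explanation(attributions, relevance, threshold=0.5):
    """Show only features that are both attributed and human-endorsed."""
    kept = {f: s for f, s in attributions.items()
            if relevance.get(f, 0.0) >= threshold}
    return dict(sorted(kept.items(), key=lambda kv: -kv[1]))

print(selective_explanation(attributions, human_relevance))
# -> {'prior_convictions': 0.42, 'charge_degree': 0.18}
```

Trimming the explanation to human-endorsed features is one plausible way such a mechanism could reduce over-reliance: the decision maker sees less machine output to defer to wholesale.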
- Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation [47.102566259034326]
We propose conditional delegation as an alternative paradigm for human-AI collaboration.
We develop novel interfaces to assist humans in creating conditional delegation rules.
Our study demonstrates the promise of conditional delegation in improving model performance.
arXiv Detail & Related papers (2022-04-25T17:00:02Z)
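
A minimal sketch of the conditional-delegation idea above, assuming a hypothetical rule format (the paper's interfaces elicit richer rules; the rule, threshold, and examples here are made up): a human authors a condition describing where the model is trusted, and everything else falls back to human review.

```python
# Hypothetical sketch of conditional delegation for content moderation:
# human-authored rules decide when the model's verdict is trusted.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DelegationRule:
    applies: Callable[[str], bool]  # region of inputs the rule covers
    min_confidence: float           # trust the model only above this

def route(post: str, model_label: str, model_conf: float,
          rules: list[DelegationRule]) -> str:
    """Return the model's label if some rule delegates this post; else defer."""
    for rule in rules:
        if rule.applies(post) and model_conf >= rule.min_confidence:
            return model_label
    return "HUMAN_REVIEW"

# Example rule: delegate link-spam detection, but only at high confidence.
rules = [DelegationRule(lambda p: "http" in p.lower(), min_confidence=0.95)]
print(route("Buy now http://spam.example", "spam", 0.98, rules))  # 'spam'
print(route("Nuanced political satire", "toxic", 0.90, rules))    # 'HUMAN_REVIEW'
```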
- Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness [92.26039686430204]
We show that even in carefully-designed systems, complementary performance can be elusive.
First, we provide a theoretical framework for modeling simple human-algorithm systems.
Next, we use this model to prove conditions where complementarity is impossible.
arXiv Detail & Related papers (2022-02-17T18:44:41Z)
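
The formal model lives in the paper, but one intuition behind impossibility results of this kind admits a tiny numeric illustration (the instance types and accuracies below are invented, not the paper's theorem): when the algorithm is at least as accurate as the human on every kind of instance, no per-instance deferral policy can beat the algorithm alone, so complementary performance is unattainable.

```python
# Invented numbers illustrating one dominance condition under which
# complementarity is impossible; not the paper's formal framework.
p_human = {"easy": 0.90, "hard": 0.60}  # P(correct) per instance type
p_algo  = {"easy": 0.95, "hard": 0.70}  # algorithm dominates on both types
weight  = {"easy": 0.5,  "hard": 0.5}   # mix of instance types

def team_accuracy(defer_to_algo):
    """Team accuracy when deferring to the algorithm on the chosen types."""
    return sum(weight[t] * (p_algo[t] if t in defer_to_algo else p_human[t])
               for t in weight)

policies = [set(), {"easy"}, {"hard"}, {"easy", "hard"}]
best = max(team_accuracy(p) for p in policies)
print(best == team_accuracy({"easy", "hard"}))  # True: algorithm alone is optimal
```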
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)