Role of Human-AI Interaction in Selective Prediction
- URL: http://arxiv.org/abs/2112.06751v1
- Date: Mon, 13 Dec 2021 16:03:13 GMT
- Title: Role of Human-AI Interaction in Selective Prediction
- Authors: Elizabeth Bondi, Raphael Koster, Hannah Sheahan, Martin Chadwick,
Yoram Bachrach, Taylan Cemgil, Ulrich Paquet, Krishnamurthy Dvijotham
- Abstract summary: We study the impact of communicating different types of information to humans about the AI system's decision to defer.
We show that it is possible to significantly boost human performance by informing the human of the decision to defer, but not revealing the prediction of the AI.
- Score: 20.11364033416315
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work has shown the potential benefit of selective prediction systems
that can learn to defer to a human when the predictions of the AI are
unreliable, particularly to improve the reliability of AI systems in
high-stakes applications like healthcare or conservation. However, most prior
work assumes that humans behave the same whether they solve a prediction task
as part of a human-AI team or on their own. We show that this
is not the case by performing experiments to quantify human-AI interaction in
the context of selective prediction. In particular, we study the impact of
communicating different types of information to humans about the AI system's
decision to defer. Using real-world conservation data and a selective
prediction system that improves expected accuracy over that of the human or AI
system working individually, we show that this messaging has a significant
impact on the accuracy of human judgements. Our experiments examine two components of
the messaging strategy: 1) Whether humans are informed about the prediction of
the AI system and 2) Whether they are informed about the decision of the
selective prediction system to defer. By manipulating these messaging
components, we show that it is possible to significantly boost human
performance by informing the human of the decision to defer, but not revealing
the prediction of the AI. We therefore show that it is vital to consider how
the decision to defer is communicated to a human when designing selective
prediction systems, and that the composite accuracy of a human-AI team must be
carefully evaluated using a human-in-the-loop framework.
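To make the deferral mechanism concrete, the sketch below shows a minimal confidence-threshold selective prediction rule and a composite team-accuracy comparison across the two messaging conditions. The threshold rule, the synthetic data, and the per-condition human accuracies are assumptions for illustration only; they are not the paper's actual system, dataset, or results.
```python
# Illustrative sketch only: a confidence-threshold deferral rule and a
# composite team-accuracy evaluation. The threshold, synthetic data, and
# per-condition human accuracies below are hypothetical, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def defer_decision(ai_confidence, threshold=0.8):
    """Defer to the human whenever AI confidence falls below the threshold."""
    return ai_confidence < threshold

def team_accuracy(ai_pred, human_pred, labels, defer_mask):
    """Composite accuracy: humans answer deferred cases, the AI answers the rest."""
    final = np.where(defer_mask, human_pred, ai_pred)
    return float((final == labels).mean())

# Synthetic binary task standing in for the conservation data.
n = 1000
labels = rng.integers(0, 2, size=n)
ai_confidence = rng.uniform(0.5, 1.0, size=n)
ai_pred = np.where(rng.random(n) < ai_confidence, labels, 1 - labels)

# Hypothetical human accuracy under two messaging conditions: told only that
# the system deferred vs. also shown the AI's prediction.
human_acc = {"defer_only": 0.85, "defer_plus_ai_prediction": 0.75}

defer_mask = defer_decision(ai_confidence)
for condition, acc in human_acc.items():
    human_pred = np.where(rng.random(n) < acc, labels, 1 - labels)
    print(condition, round(team_accuracy(ai_pred, human_pred, labels, defer_mask), 3))
```
In a real evaluation, the human responses would come from a human-in-the-loop study run under each messaging condition rather than being simulated.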
Related papers
- Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining, among the infinitely many predictions an agent could possibly make, which ones best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents [0.0]
We investigate the use of cognitively inspired models of behavior to predict the behavior of both human and AI agents.
The predicted behavior is used to delegate control between humans and AI agents through the use of an intermediary entity.
arXiv Detail & Related papers (2022-04-06T15:15:21Z)
- The Response Shift Paradigm to Quantify Human Trust in AI Recommendations [6.652641137999891]
Explainability, interpretability and how much they affect human trust in AI systems are ultimately problems of human cognition as much as machine learning.
We developed and validated a general purpose Human-AI interaction paradigm which quantifies the impact of AI recommendations on human decisions.
Our proof-of-principle paradigm allows one to quantitatively compare the rapidly growing set of XAI/IAI approaches in terms of their effect on the end-user.
arXiv Detail & Related papers (2022-02-16T22:02:09Z)
- Uncalibrated Models Can Improve Human-AI Collaboration [10.106324182884068]
We show that presenting AI models as more confident than they actually are can improve human-AI performance.
We first learn a model for how humans incorporate AI advice using data from thousands of human interactions.
arXiv Detail & Related papers (2022-02-12T04:51:00Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Probabilistic Human Motion Prediction via A Bayesian Neural Network [71.16277790708529]
We propose a probabilistic model for human motion prediction.
Given an observed motion sequence, our model can generate several future motions.
We extensively validate our approach on a large scale benchmark dataset Human3.6m.
arXiv Detail & Related papers (2021-07-14T09:05:33Z)
- Learning Models of Individual Behavior in Chess [4.793072503820555]
We develop highly accurate predictive models of individual human behavior in chess.
Our work demonstrates a way to bring AI systems into better alignment with the behavior of individual people.
arXiv Detail & Related papers (2020-08-23T18:24:21Z)
- Does Explainable Artificial Intelligence Improve Human Decision-Making? [17.18994675838646]
We compare and evaluate objective human decision accuracy without AI (control), with an AI prediction (no explanation), and with an AI prediction plus explanation.
We find that any kind of AI prediction tends to improve user decision accuracy, but find no conclusive evidence that explainable AI has a meaningful impact.
Our results indicate that, at least in some situations, the "why" information provided in explainable AI may not enhance user decision-making.
arXiv Detail & Related papers (2020-06-19T15:46:13Z)
- Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.