Harnessing Explanations to Bridge AI and Humans
- URL: http://arxiv.org/abs/2003.07370v1
- Date: Mon, 16 Mar 2020 18:00:02 GMT
- Title: Harnessing Explanations to Bridge AI and Humans
- Authors: Vivian Lai, Samuel Carton, Chenhao Tan
- Abstract summary: Machine learning models are increasingly integrated into societally critical applications such as recidivism prediction and medical diagnosis.
We propose future directions for closing the gap between the efficacy of explanations and improvement in human performance.
- Score: 14.354362614416285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models are increasingly integrated into societally critical
applications such as recidivism prediction and medical diagnosis, thanks to
their superior predictive power. In these applications, however, full
automation is often not desired due to ethical and legal concerns. The research
community has thus ventured into developing interpretable methods that explain
machine predictions. While these explanations are meant to assist humans in
understanding machine predictions, thereby allowing them to make better
decisions, this hypothesis is not supported in many recent studies. To improve
human decision-making with AI assistance, we propose future directions for
closing the gap between the efficacy of explanations and improvement in human
performance.
Related papers
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Role of Human-AI Interaction in Selective Prediction [20.11364033416315]
We study the impact of communicating different types of information to humans about the AI system's decision to defer.
We show that it is possible to significantly boost human performance by informing the human of the decision to defer, but not revealing the prediction of the AI.
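The deferral interface the study examines is easy to picture in code. Below is a minimal sketch, assuming a confidence-thresholded classifier; the threshold value and the `reveal_prediction` flag are illustrative, not taken from the paper:

```python
import numpy as np

def selective_predict(probs, threshold=0.8, reveal_prediction=False):
    """Decide whether the AI answers or defers to the human.

    probs: the model's class probabilities for one instance.
    Per the study's finding, telling the human *that* the AI deferred,
    without showing its guess, can itself improve joint performance.
    """
    confidence = probs.max()
    if confidence >= threshold:
        return {"deferred": False, "prediction": int(probs.argmax())}
    # Low confidence: defer, and optionally hide the AI's own guess.
    msg = {"deferred": True}
    if reveal_prediction:
        msg["prediction"] = int(probs.argmax())  # the condition the study cautions about
    return msg

# An uncertain prediction triggers deferral with no guess shown.
print(selective_predict(np.array([0.55, 0.45])))  # {'deferred': True}
```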
arXiv Detail & Related papers (2021-12-13T16:03:13Z)
- Probabilistic Human Motion Prediction via A Bayesian Neural Network [71.16277790708529]
We propose a probabilistic model for human motion prediction in this paper.
Our model can generate several future motions given an observed motion sequence.
We extensively validate our approach on the large-scale benchmark dataset Human3.6m.
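The abstract does not detail the architecture. As a hedged illustration of drawing several distinct futures from one observed sequence, here is a sketch using MC dropout (a common Bayesian approximation, not necessarily the paper's method), with illustrative pose dimensions:

```python
import torch
import torch.nn as nn

class MotionPredictor(nn.Module):
    """Toy sequence model with dropout kept active at test time (MC dropout)
    to approximate a posterior over future motions. Only illustrates sampling
    multiple futures; the paper's actual model differs."""
    def __init__(self, joint_dim=48, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(joint_dim, hidden, batch_first=True)
        self.drop = nn.Dropout(p=0.2)
        self.head = nn.Linear(hidden, joint_dim)

    def forward(self, observed):            # observed: (B, T, joint_dim)
        _, h = self.rnn(observed)
        return self.head(self.drop(h[-1]))  # next-frame pose, (B, joint_dim)

model = MotionPredictor()
model.train()                        # keep dropout stochastic between passes
observed = torch.randn(1, 25, 48)    # 25 observed frames of 48-D poses (illustrative)
futures = torch.stack([model(observed) for _ in range(10)])  # 10 sampled futures
print(futures.mean(0).shape, futures.std(0).mean())  # point estimate + spread
```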
arXiv Detail & Related papers (2021-07-14T09:05:33Z)
- Drivers' Manoeuvre Modelling and Prediction for Safe HRI [0.0]
Theory of Mind has been broadly explored for robotics and recently for autonomous and semi-autonomous vehicles.
We explored how to predict human intentions before an action is performed by combining human motion, vehicle state, and human input data.
arXiv Detail & Related papers (2021-06-03T10:07:55Z)
- Enhancing Human-Machine Teaming for Medical Prognosis Through Neural Ordinary Differential Equations (NODEs) [0.0]
A key barrier to the full realization of Machine Learning's potential in medical prognoses is technology acceptance.
Recent efforts to produce explainable AI (XAI) have made progress in improving the interpretability of some ML models.
We propose a novel ML architecture to enhance human understanding and encourage acceptance.
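The paper's exact architecture is not given in the abstract. A minimal sketch of the Neural ODE building block the title refers to, with a hand-rolled fixed-step Euler integrator standing in for an adaptive solver and illustrative layer and feature sizes:

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Learned dynamics dh/dt = f(h, t). The continuous-time trajectory it
    induces is often cited as easier to inspect than a stack of discrete
    layers, which is the human-understanding angle of the paper."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)

def euler_odeint(func, h0, t0=0.0, t1=1.0, steps=20):
    """Fixed-step Euler solver; libraries like torchdiffeq offer adaptive ones."""
    h, dt = h0, (t1 - t0) / steps
    for i in range(steps):
        h = h + dt * func(t0 + i * dt, h)
    return h

func = ODEFunc(dim=16)
readout = nn.Linear(16, 1)        # e.g. a scalar prognosis score (illustrative)
x = torch.randn(4, 16)            # 4 patients, 16 hypothetical features
risk = torch.sigmoid(readout(euler_odeint(func, x)))
print(risk.shape)                 # torch.Size([4, 1])
```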
arXiv Detail & Related papers (2021-02-08T10:52:23Z)
- Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
- Does Explainable Artificial Intelligence Improve Human Decision-Making? [17.18994675838646]
We compare objective human decision accuracy in three conditions: without AI (control), with an AI prediction (no explanation), and with an AI prediction plus explanation.
We find that any kind of AI prediction tends to improve user decision accuracy, but find no conclusive evidence that explainable AI has a meaningful impact.
Our results indicate that, at least in some situations, the "why" information provided in explainable AI may not enhance user decision-making.
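The three-condition comparison is straightforward to operationalize. A hedged sketch with placeholder accuracies (not the study's data), assuming per-participant scores and Welch's t-test:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant decision accuracies in the three conditions
# the study compares; the numbers below are placeholders, not its data.
control     = np.array([0.61, 0.58, 0.64, 0.60, 0.63])
prediction  = np.array([0.70, 0.66, 0.71, 0.69, 0.72])
explanation = np.array([0.71, 0.67, 0.70, 0.70, 0.71])

# Did showing any AI prediction help relative to control?
print(stats.ttest_ind(prediction, control, equal_var=False))
# Does adding the "why" move accuracy beyond the bare prediction?
print(stats.ttest_ind(explanation, prediction, equal_var=False))
```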
arXiv Detail & Related papers (2020-06-19T15:46:13Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- A Study on Multimodal and Interactive Explanations for Visual Question Answering [3.086885687016963]
We evaluate multimodal explanations in the setting of a Visual Question Answering (VQA) task.
Results indicate that the explanations help improve human prediction accuracy, especially in trials when the VQA system's answer is inaccurate.
We introduce active attention, a novel method for evaluating causal attentional effects through intervention by editing attention maps.
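The abstract does not spell out the editing procedure. A generic sketch of an attention-map intervention in the spirit of active attention, with a hypothetical `intervene_on_attention` helper:

```python
import numpy as np

def intervene_on_attention(attn, region, renormalize=True):
    """Edit a VQA attention map by zeroing a spatial region; a generic
    attention-editing sketch, since the abstract omits the exact procedure.

    attn: (H, W) nonnegative attention map summing to 1.
    region: (y0, y1, x0, x1) slice to suppress.
    """
    edited = attn.copy()
    y0, y1, x0, x1 = region
    edited[y0:y1, x0:x1] = 0.0
    if renormalize and edited.sum() > 0:
        edited /= edited.sum()  # keep the map a valid distribution
    return edited

attn = np.random.rand(14, 14)
attn /= attn.sum()
edited = intervene_on_attention(attn, region=(0, 7, 0, 7))
# Feeding `edited` back through the VQA model and comparing answers would
# measure the causal effect of attending (or not) to that region.
```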
arXiv Detail & Related papers (2020-03-01T07:54:01Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)