Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven decision support
- URL: http://arxiv.org/abs/2302.12389v2
- Date: Mon, 27 Feb 2023 23:42:08 GMT
- Title: Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven decision support
- Authors: Tim Miller
- Abstract summary: We argue for a paradigm shift from the current model of explainable artificial intelligence (XAI), which may be counter-productive to better human decision making.
In early decision support systems, we assumed that we could give people recommendations and that they would consider them, and then follow them when required.
- Score: 4.452019519213712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we argue for a paradigm shift from the current model of
explainable artificial intelligence (XAI), which may be counter-productive to
better human decision making. In early decision support systems, we assumed
that we could give people recommendations and that they would consider them,
and then follow them when required. However, research found that people often
ignore recommendations because they do not trust them; or perhaps even worse,
people follow them blindly, even when the recommendations are wrong.
Explainable artificial intelligence mitigates this by helping people to
understand how and why models give certain recommendations. However, recent
research shows that people do not always engage with explainability tools
enough to help improve decision making. The assumption that people will engage
with recommendations and explanations has proven to be unfounded. We argue this
is because we have failed to account for two things. First, recommendations
(and their explanations) take control from human decision makers, limiting
their agency. Second, giving recommendations and explanations does not align
with the cognitive processes employed by people making decisions. This position
paper proposes a new conceptual framework called Evaluative AI for explainable
decision support. This is a machine-in-the-loop paradigm in which decision
support tools provide evidence for and against decisions made by people, rather
than provide recommendations to accept or reject. We argue that this mitigates
issues of over- and under-reliance on decision support tools, and better
leverages human expertise in decision making.
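To make the contrast concrete, below is a minimal, hypothetical sketch of the difference between recommendation-driven support and the evidence-for/against style of support the abstract describes. The `EvidenceReport` class, the `evaluate_option` function, and the signed per-feature scores are illustrative assumptions, not the paper's implementation; any attribution method could supply the underlying evidence.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class EvidenceReport:
    """Evidence for and against one candidate decision, to be weighed by a person."""
    option: str
    supporting: List[Tuple[str, float]]  # (feature, weight) pairs favouring the option
    opposing: List[Tuple[str, float]]    # (feature, weight) pairs counting against it


def evaluate_option(option: str, contributions: Dict[str, float]) -> EvidenceReport:
    """Split signed feature contributions into evidence for and against an option.

    `contributions` maps a feature name to a signed score (positive = supports
    the option, negative = opposes it). How these scores are produced is left
    open; this sketch only shows the shape of the output a person would review.
    """
    supporting = sorted(
        [(f, w) for f, w in contributions.items() if w > 0], key=lambda x: -x[1]
    )
    opposing = sorted(
        [(f, -w) for f, w in contributions.items() if w < 0], key=lambda x: -x[1]
    )
    return EvidenceReport(option, supporting, opposing)


# Recommendation-driven support would pick an option for the user, e.g.:
#   recommendation = max(options, key=model_score)
# Evidence-driven (machine-in-the-loop) support instead reports both sides
# for the option the *person* is considering and leaves the choice to them.
report = evaluate_option(
    "approve_loan",
    {"stable_income": 0.42, "long_credit_history": 0.18, "high_debt_ratio": -0.35},
)
print(f"Considering: {report.option}")
print("  Evidence for:    ", report.supporting)
print("  Evidence against:", report.opposing)
```

The design point being illustrated is that the tool never selects an option; it only structures evidence around the option the person is already weighing, which is the machine-in-the-loop stance the paper argues for.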
Related papers
- Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills [24.04643864795939]
People's decision-making abilities often fail to improve when they rely on AI for decision support.
Most AI systems offer "unilateral" explanations that justify the AI's decision but do not account for users' thinking.
We introduce a framework for generating human-centered contrastive explanations that explain the difference between AI's choice and a predicted, likely human choice.
arXiv Detail & Related papers (2024-10-05T18:21:04Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Does More Advice Help? The Effects of Second Opinions in AI-Assisted Decision Making [45.20615051119694]
We explore whether and how the provision of second opinions may affect decision-makers' behavior and performance in AI-assisted decision-making.
We find that if both the AI model's decision recommendation and a second opinion are always presented together, decision-makers reduce their over-reliance on AI.
If decision-makers have the control to decide when to solicit a peer's second opinion, we find that their active solicitations of second opinions have the potential to mitigate over-reliance on AI.
arXiv Detail & Related papers (2024-01-13T12:19:01Z)
- Learning Personalized Decision Support Policies [56.949897454209186]
Modiste is an interactive tool to learn personalized decision support policies.
We find that personalized policies outperform offline policies, and, in the cost-aware setting, reduce the incurred cost with minimal degradation to performance.
arXiv Detail & Related papers (2023-04-13T17:53:34Z)
- Do People Engage Cognitively with AI? Impact of AI Assistance on Incidental Learning [19.324012098032515]
When people receive advice while making difficult decisions, they often make better decisions in the moment and also increase their knowledge in the process.
How do people process the information and advice they receive from AI, and do they engage with it deeply enough to enable learning?
This work provides some of the most direct evidence to date that it may not be sufficient to include explanations together with AI-generated recommendations.
arXiv Detail & Related papers (2022-02-11T01:28:59Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
- Argumentation-based Agents that Explain their Decisions [0.0]
We focus on how an extended model of BDI (Beliefs-Desires-Intentions) agents can generate explanations about their reasoning.
Our proposal is based on argumentation theory: we use arguments to represent the reasons that lead an agent to make a decision.
We propose two types of explanations: partial and complete.
arXiv Detail & Related papers (2020-09-13T02:08:10Z)
- Evidence-based explanation to promote fairness in AI systems [3.190891983147147]
People make decisions and usually need to explain those decisions to others in some manner.
In order to explain their decisions made with AI support, people need to understand how AI is part of that decision.
We have been exploring an evidence-based explanation design approach to 'tell the story of a decision'.
arXiv Detail & Related papers (2020-03-03T14:22:11Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)