The human-AI relationship in decision-making: AI explanation to support people on justifying their decisions
- URL: http://arxiv.org/abs/2102.05460v1
- Date: Wed, 10 Feb 2021 14:28:34 GMT
- Title: The human-AI relationship in decision-making: AI explanation to support people on justifying their decisions
- Authors: Juliana Jansen Ferreira and Mateus Monteiro
- Abstract summary: In decision-making scenarios, people need more awareness of how AI works and its outcomes to build a relationship with that system.
- Score: 4.169915659794568
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The explanation dimension of Artificial Intelligence (AI)-based systems has been a hot topic in recent years. Different communities have raised concerns about the increasing presence of AI in people's everyday tasks and how it can affect their lives. A great deal of research addresses the interpretability and transparency concepts of explainable AI (XAI), which are usually related to algorithms and Machine Learning (ML) models. But in decision-making scenarios, people need more awareness of how AI works and of its outcomes in order to build a relationship with that system. Decision-makers usually need to justify their decisions to others in different domains. If a decision is somehow based on or influenced by an AI-system outcome, an explanation of how the AI reached that result is key to building trust between AI and humans in decision-making scenarios. In this position paper, we discuss the role of XAI in decision-making scenarios, present our vision of decision-making with an AI-system in the loop, and explore one case from the literature showing how XAI can affect the way people justify their decisions, considering the importance of building the human-AI relationship in those scenarios.
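To make the vision concrete, here is a minimal, illustrative Python sketch of a decision record that couples an AI recommendation with the explanation a decision-maker could cite when justifying the final call. All names (`Recommendation`, `justify`, the loan-review scenario) are hypothetical; the paper itself does not prescribe an implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """An AI-system outcome plus the evidence behind it (hypothetical schema)."""
    label: str                                     # what the AI suggests
    confidence: float                              # model confidence in [0, 1]
    evidence: list = field(default_factory=list)   # human-readable reasons

def justify(decision: str, rec: Recommendation) -> str:
    """Render a justification a decision-maker could present to others."""
    agrees = "consistent with" if decision == rec.label else "contrary to"
    reasons = "; ".join(rec.evidence) or "no explanation available"
    return (f"Decision '{decision}' is {agrees} the AI recommendation "
            f"'{rec.label}' (confidence {rec.confidence:.0%}). "
            f"Supporting evidence: {reasons}.")

# Hypothetical loan-review example.
rec = Recommendation("approve", 0.87,
                     ["stable income for 5+ years", "low debt-to-income ratio"])
print(justify("approve", rec))
```

The point of the sketch is that the explanation travels with the outcome, so the human can cite it when justifying the decision to others, which is the relationship the paper argues XAI should support.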
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills [24.04643864795939]
People's decision-making abilities often fail to improve when they rely on AI for decision-support.
Most AI systems offer "unilateral" explanations that justify the AI's decision but do not account for users' thinking.
We introduce a framework for generating human-centered contrastive explanations that explain the difference between the AI's choice and a predicted, likely human choice; a sketch of the idea follows this entry.
arXiv Detail & Related papers (2024-10-05T18:21:04Z)
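A contrastive explanation answers "why A rather than B?". Below is a minimal, hypothetical Python sketch assuming we already have per-feature contribution scores for each class (the paper's actual framework is more elaborate); it contrasts the AI's choice with the choice a human was predicted to make.

```python
# Hypothetical sketch: contrast the AI's choice with a predicted human choice
# using per-feature contribution scores (e.g., from any attribution method).
def contrastive_explanation(contrib, ai_choice, human_choice, top_k=2):
    """List the features that most favor ai_choice over human_choice."""
    # contrib[cls][feature] = how strongly `feature` supports class `cls`
    deltas = {f: contrib[ai_choice][f] - contrib[human_choice][f]
              for f in contrib[ai_choice]}
    top = sorted(deltas, key=deltas.get, reverse=True)[:top_k]
    reasons = ", ".join(f"{f} (+{deltas[f]:.2f})" for f in top)
    return (f"The AI chose '{ai_choice}' rather than '{human_choice}' "
            f"mainly because of: {reasons}.")

# Toy attribution scores for a two-class decision (made up for illustration).
contrib = {
    "approve": {"income": 0.6, "debt_ratio": 0.3, "age": 0.05},
    "reject":  {"income": 0.1, "debt_ratio": 0.4, "age": 0.05},
}
print(contrastive_explanation(contrib, "approve", "reject"))
```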
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Investigating the Role of Explainability and AI Literacy in User Compliance [2.8623940003518156]
We find that users' compliance increases with the introduction of XAI but is also affected by AI literacy.
We also find that the relationships between AI literacy, XAI, and users' compliance are mediated by the users' mental model of AI.
arXiv Detail & Related papers (2024-06-18T14:28:12Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision-making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Explainable Artificial Intelligence Approaches: A Survey [0.22940141855172028]
Lack of explainability of a decision from an Artificial Intelligence-based "black box" system/model is a key stumbling block for adopting AI in high-stakes applications.
We demonstrate popular Explainable Artificial Intelligence (XAI) methods with a shared case study/task (a usage sketch of one such method follows this entry).
We analyze their competitive advantages from multiple perspectives.
We recommend paths towards responsible or human-centered AI using XAI as a medium.
arXiv Detail & Related papers (2021-01-23T06:15:34Z)
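As one concrete example of the kind of popular, model-agnostic XAI method such surveys cover, here is a small sketch using scikit-learn's permutation importance on a toy classifier. The dataset and model choice are arbitrary and for illustration only; the survey itself covers many other techniques.

```python
# Illustrative use of one popular XAI technique: permutation importance,
# which measures how much shuffling a feature degrades model performance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature 10 times on held-out data and record the score drop.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda p: p[1], reverse=True)[:5]:
    print(f"{name}: {score:.3f}")
```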
- Evidence-based explanation to promote fairness in AI systems [3.190891983147147]
People make decisions, and they usually need to explain those decisions to others in some manner.
In order to explain their decisions with AI support, people need to understand how AI is part of that decision.
We have been exploring an evidence-based explanation design approach to 'tell the story of a decision'.
arXiv Detail & Related papers (2020-03-03T14:22:11Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that a confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making (a minimal sketch of confidence-gated reliance follows this list).
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
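To illustrate the kind of case-specific model information the last entry studies, here is a minimal, hypothetical Python sketch in which a decision-maker's reliance on AI advice is gated by the model's stated confidence. The threshold and names are invented for illustration; the paper studies human behavior, not a fixed rule like this.

```python
# Hypothetical sketch: using a model's stated confidence to decide how much
# weight to give its advice (a crude form of trust calibration).
def calibrated_reliance(ai_label: str, ai_confidence: float,
                        own_label: str, threshold: float = 0.8) -> str:
    """Follow the AI when it is confident; otherwise keep the human judgment."""
    if ai_confidence >= threshold:
        return ai_label    # high confidence: rely on the AI's advice
    return own_label       # low confidence: defer to one's own judgment

# At 91% confidence (above the arbitrary 80% threshold), the AI advice is used.
print(calibrated_reliance("approve", 0.91, "reject"))   # -> approve
# At 55% confidence, the human's own judgment stands.
print(calibrated_reliance("approve", 0.55, "reject"))   # -> reject
```

As the paper's finding suggests, such calibration alone was not enough to improve joint human-AI performance; explanations played a complementary role.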
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.