Deceptive AI Systems That Give Explanations Are Just as Convincing as
Honest AI Systems in Human-Machine Decision Making
- URL: http://arxiv.org/abs/2210.08960v1
- Date: Fri, 23 Sep 2022 20:09:03 GMT
- Title: Deceptive AI Systems That Give Explanations Are Just as Convincing as
Honest AI Systems in Human-Machine Decision Making
- Authors: Valdemar Danry, Pat Pataranutaporn, Ziv Epstein, Matthew Groh and
Pattie Maes
- Abstract summary: The ability to discern between true and false information is essential to making sound decisions.
With the recent increase in AI-based disinformation campaigns, it has become critical to understand the influence of deceptive systems on human information processing.
- Score: 38.71592583606443
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The ability to discern between true and false information is essential to
making sound decisions. However, with the recent increase in AI-based
disinformation campaigns, it has become critical to understand the influence of
deceptive systems on human information processing. In an experiment (N=128), we
investigated how susceptible people are to deceptive AI systems by examining
how their ability to discern true news from fake news varies when AI systems
are perceived as either human fact-checkers or AI fact-checking systems, and
when explanations provided by those fact-checkers are either deceptive or
honest. We find that deceitful explanations significantly reduce accuracy,
indicating that people are just as likely to believe deceptive AI explanations
as honest AI explanations. Before receiving assistance from an AI system, people
had significantly higher weighted discernment accuracy on false headlines than on
true headlines. With AI assistance, however, discernment accuracy increased
significantly when honest explanations were given on both true and false
headlines, and decreased significantly when deceitful explanations were given on
both true and false headlines. Further, we did not observe any significant
differences in discernment between explanations perceived as coming from a human
fact-checker compared to an AI fact-checker. Similarly, we found no significant differences
in trust. These findings exemplify the dangers of deceptive AI systems and the
need to find novel ways to limit their influence on human information
processing.
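
The abstract measures susceptibility via a weighted discernment accuracy on true and false headlines, before and after AI assistance. The exact weighting scheme is not given in the abstract, so the sketch below is a hypothetical illustration only: it assumes per-class accuracy on true and false headlines combined with explicit (here equal) class weights; the function name and toy data are invented.

    # Illustrative only: assumes "weighted discernment accuracy" means per-class
    # accuracy on true and false headlines, combined with explicit class weights.

    def discernment_accuracy(ratings, labels, weight_true=0.5, weight_false=0.5):
        """ratings/labels are parallel lists of booleans: rated_true / is_true."""
        true_idx = [i for i, is_true in enumerate(labels) if is_true]
        false_idx = [i for i, is_true in enumerate(labels) if not is_true]
        acc_true = sum(ratings[i] for i in true_idx) / len(true_idx)
        acc_false = sum(not ratings[i] for i in false_idx) / len(false_idx)
        return weight_true * acc_true + weight_false * acc_false

    # Example: judgments before vs. after seeing honest explanations
    labels = [True, True, False, False]        # headline veracity
    before = [True, False, False, True]        # unassisted participant judgments
    after_honest = [True, True, False, False]  # judgments after honest explanations

    print(discernment_accuracy(before, labels))        # 0.5
    print(discernment_accuracy(after_honest, labels))  # 1.0

Under this reading, the pattern reported in the abstract would appear as the post-assistance score rising above the pre-assistance score under honest explanations and falling below it under deceitful ones.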
Related papers
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent, i.e., the same task described in different ways.
CRP asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Deceptive AI systems that give explanations are more convincing than honest AI systems and can amplify belief in misinformation [29.022316418575866]
We examined the impact of deceptive AI generated explanations on individuals' beliefs.
Our results show that personal factors such as cognitive reflection and trust in AI do not necessarily protect individuals from these effects.
This underscores the importance of teaching logical reasoning and critical thinking skills to identify logically invalid arguments.
arXiv Detail & Related papers (2024-07-31T05:39:07Z)
- Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations [7.256711790264119]
Hyper-personalized AI systems profile people's characteristics to provide personalized recommendations.
These systems are not immune to errors when making inferences about people's most personal traits.
We present two studies to examine how people react and perceive AI after encountering personality misrepresentations.
arXiv Detail & Related papers (2024-05-25T21:27:15Z)
- Alterfactual Explanations -- The Relevance of Irrelevance for Explaining AI Systems [0.9542023122304099]
We argue that to fully understand a decision, knowledge of the relevant features alone is not enough: awareness of irrelevant information also contributes substantially to a user's mental model of an AI system.
Our approach, which we call Alterfactual Explanations, is based on showing an alternative reality where irrelevant features of an AI's input are altered.
We show that alterfactual explanations are suited to convey an understanding of different aspects of the AI's reasoning than established counterfactual explanation methods.
arXiv Detail & Related papers (2022-07-19T16:20:37Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Does Explainable Artificial Intelligence Improve Human Decision-Making? [17.18994675838646]
We compare and evaluate objective human decision accuracy without AI (control), with an AI prediction (no explanation) and AI prediction with explanation.
We find that any kind of AI prediction tends to improve user decision accuracy, but we find no conclusive evidence that explainable AI has a meaningful impact.
Our results indicate that, at least in some situations, the "why" information provided in explainable AI may not enhance user decision-making.
arXiv Detail & Related papers (2020-06-19T15:46:13Z)
- Deceptive AI Explanations: Creation and Detection [3.197020142231916]
We investigate how AI models can be used to create and detect deceptive explanations.
As an empirical evaluation, we focus on text classification and alter the explanations generated by GradCAM.
We evaluate the effect of deceptive explanations on users in an experiment with 200 participants.
arXiv Detail & Related papers (2020-01-21T16:41:22Z)
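
As a concrete reading of what "altering" an attribution-based explanation might involve in the last entry above ("Deceptive AI Explanations: Creation and Detection"), here is a minimal hypothetical sketch. It is not the cited paper's procedure: the only assumption carried over from the summary is that the explanation is a set of per-token attribution scores of the kind Grad-CAM-style methods produce; the function, tokens, and scores are invented for illustration.

    # Hypothetical illustration, not the cited paper's method: move attribution
    # mass from the genuinely most influential tokens onto arbitrarily chosen
    # misleading tokens, leaving the total attribution (and the model's
    # prediction) untouched.

    def alter_attributions(tokens, attributions, misleading_tokens, keep_top_k=0):
        """Return a deceptive copy of `attributions` highlighting `misleading_tokens`."""
        altered = list(attributions)
        total = sum(attributions)
        misleading_idx = [i for i, t in enumerate(tokens) if t in misleading_tokens]
        if not misleading_idx:
            return altered
        # Zero out everything except (optionally) the top-k honest tokens ...
        top = sorted(range(len(altered)), key=lambda i: altered[i], reverse=True)[:keep_top_k]
        for i in range(len(altered)):
            if i not in top:
                altered[i] = 0.0
        # ... and redistribute the removed mass onto the misleading tokens.
        removed = total - sum(altered)
        for i in misleading_idx:
            altered[i] += removed / len(misleading_idx)
        return altered

    tokens = ["scientists", "found", "no", "evidence", "of", "the", "claim"]
    honest = [0.05, 0.05, 0.40, 0.35, 0.02, 0.03, 0.10]  # toy attribution scores
    deceptive = alter_attributions(tokens, honest, {"scientists", "claim"})
    print(deceptive)  # [0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5]

The alteration leaves the prediction and the total attribution mass unchanged, so the deceptive explanation still looks superficially plausible.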
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.