The corruptive force of AI-generated advice
- URL: http://arxiv.org/abs/2102.07536v1
- Date: Mon, 15 Feb 2021 13:15:12 GMT
- Title: The corruptive force of AI-generated advice
- Authors: Margarita Leib, Nils C. Köbis, Rainer Michael Rilke, Marloes Hagens, Bernd Irlenbusch
- Abstract summary: We test whether AI-generated advice can corrupt people.
We also test whether transparency about AI presence mitigates potential harm.
Results reveal that AI's corrupting force is as strong as humans'.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) is increasingly becoming a trusted advisor in
people's lives. A new concern arises if AI persuades people to break ethical
rules for profit. Employing a large-scale behavioural experiment (N = 1,572),
we test whether AI-generated advice can corrupt people. We further test whether
transparency about AI presence, a commonly proposed policy, mitigates potential
harm of AI-generated advice. Using the Natural Language Processing algorithm,
GPT-2, we generated honesty-promoting and dishonesty-promoting advice.
Participants read one type of advice before engaging in a task in which they
could lie for profit. Testing human behaviour in interaction with actual AI
outputs, we provide first behavioural insights into the role of AI as an
advisor. Results reveal that AI-generated advice corrupts people, even when
they know the source of the advice. In fact, AI's corrupting force is as strong
as humans'.
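The abstract states that GPT-2 was used to generate the honesty-promoting and dishonesty-promoting advice, but it does not describe the generation pipeline. The sketch below is a minimal, illustrative reconstruction assuming the Hugging Face transformers implementation of GPT-2; the prompts, sampling parameters, and labels are hypothetical and not taken from the paper.

```python
# Minimal sketch: sampling two contrasting kinds of advice text from GPT-2.
# Assumes the Hugging Face transformers library; prompts are hypothetical.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Illustrative seed prompts; the study's actual advice-generation setup
# (e.g. any fine-tuning on human-written advice) is not given in the abstract.
prompts = {
    "honesty": "Advice: report the outcome exactly as you observe it, because",
    "dishonesty": "Advice: report a higher outcome to earn more, because",
}

for label, prompt in prompts.items():
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_length=60,                        # keep each piece of advice short
        do_sample=True,                       # sample rather than greedy decode
        top_p=0.9,                            # nucleus sampling
        pad_token_id=tokenizer.eos_token_id,  # silence pad-token warning
    )
    print(label, "->", tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In the experiment, each participant read one piece of advice of one type before the incentivized task; the sketch only illustrates how two contrasting sets of advice texts could be sampled from the model.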
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations [7.256711790264119]
Hyper-personalized AI systems profile people's characteristics to provide personalized recommendations.
These systems are not immune to errors when making inferences about people's most personal traits.
We present two studies to examine how people react and perceive AI after encountering personality misrepresentations.
arXiv Detail & Related papers (2024-05-25T21:27:15Z)
- The Ethics of Advanced AI Assistants [53.89899371095332]
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants.
arXiv Detail & Related papers (2024-04-24T23:18:46Z)
- On the Influence of Explainable AI on Automation Bias [0.0]
We aim to shed light on the potential of explainable AI (XAI) to influence automation bias.
We conduct an online experiment on hotel review classification and discuss initial results.
arXiv Detail & Related papers (2022-04-19T12:54:23Z)
- Should I Follow AI-based Advice? Measuring Appropriate Reliance in Human-AI Decision-Making [0.0]
We aim to enable humans not to rely on AI advice blindly but rather to distinguish its quality and act upon it to make better decisions.
Current research lacks a metric for appropriate reliance (AR) on AI advice on a case-by-case basis.
We propose to view AR as a two-dimensional construct that measures the ability to discriminate advice quality and behave accordingly.
arXiv Detail & Related papers (2022-04-14T12:18:51Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions [8.785345834486057]
We characterize how humans use AI suggestions relative to equivalent suggestions from a group of peer humans.
We find that participants' beliefs about the human versus AI performance on a given task affects whether or not they heed the advice.
arXiv Detail & Related papers (2021-07-14T21:33:14Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions [0.0]
We find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data.
We suggest digital literacy as a potential remedy to ensure the responsible use of AI.
arXiv Detail & Related papers (2021-06-30T15:19:20Z)
- The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI to organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z)