"AI enhances our performance, I have no doubt this one will do the
same": The Placebo effect is robust to negative descriptions of AI
- URL: http://arxiv.org/abs/2309.16606v2
- Date: Tue, 23 Jan 2024 10:19:51 GMT
- Title: "AI enhances our performance, I have no doubt this one will do the
same": The Placebo effect is robust to negative descriptions of AI
- Authors: Agnes M. Kloft, Robin Welsch, Thomas Kosch, Steeven Villa
- Abstract summary: Heightened AI expectations facilitate performance in human-AI interactions through placebo effects.
We discuss the impact of user expectations on AI interactions and evaluation.
- Score: 18.760251521240892
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Heightened AI expectations facilitate performance in human-AI interactions
through placebo effects. While lowering expectations to control for placebo
effects is advisable, overly negative expectations could induce nocebo effects.
In a letter discrimination task, we informed participants that an AI would
either increase or decrease their performance by adapting the interface, but in
reality, no AI was present in any condition. A Bayesian analysis showed that
participants had high expectations and performed descriptively better
irrespective of the AI description when a sham-AI was present. Using cognitive
modeling, we could trace this advantage back to participants gathering more
information. A replication study verified that negative AI descriptions do not
alter expectations, suggesting that performance expectations with AI are biased
and robust to negative verbal descriptions. We discuss the impact of user
expectations on AI interactions and evaluation and provide a behavioral placebo
marker for human-AI interaction.
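The abstract does not name the specific cognitive model, but accumulation-to-bound models such as the drift diffusion model are a common choice for letter discrimination tasks, and in that framing "gathering more information" corresponds to a wider decision boundary. The sketch below is a minimal, hypothetical simulation (not the authors' analysis), with illustrative parameter values, showing how a larger boundary separation trades slower responses for higher accuracy.

```python
# Minimal drift diffusion simulation (illustrative assumption; not the paper's model or parameters).
# "Gathering more information" is modeled as a larger boundary separation: evidence is
# accumulated for longer before responding, which raises accuracy at the cost of speed.
import numpy as np

def simulate_ddm(drift, boundary, n_trials=2000, dt=0.001, noise=1.0, seed=0):
    """Simulate unbiased two-choice decisions; return (accuracy, mean RT in seconds)."""
    rng = np.random.default_rng(seed)
    correct = np.zeros(n_trials, dtype=bool)
    rts = np.zeros(n_trials)
    for i in range(n_trials):
        evidence, t = 0.0, 0.0
        # Accumulate noisy evidence until it hits +boundary (correct) or -boundary (error).
        while abs(evidence) < boundary:
            evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        correct[i] = evidence >= boundary
        rts[i] = t
    return correct.mean(), rts.mean()

for label, a in [("narrow boundary (less information)", 0.8),
                 ("wide boundary (more information)", 1.4)]:
    acc, rt = simulate_ddm(drift=1.0, boundary=a)
    print(f"{label}: accuracy={acc:.3f}, mean RT={rt:.3f}s")
```

Under these illustrative settings the wider boundary yields higher accuracy but slower responses, the qualitative pattern one would expect if the sham-AI advantage reflects more thorough information gathering.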
Related papers
- AI persuading AI vs AI persuading Humans: LLMs' Differential Effectiveness in Promoting Pro-Environmental Behavior [70.24245082578167]
Pro-environmental behavior (PEB) is vital to combat climate change, yet turning awareness into intention and action remains elusive.
We explore large language models (LLMs) as tools to promote PEB, comparing their impact across 3,200 participants.
Results reveal a "synthetic persuasion paradox": synthetic and simulated agents significantly shift their post-intervention PEB stance, while human responses barely change.
arXiv Detail & Related papers (2025-03-03T21:40:55Z)
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- "I Am the One and Only, Your Cyber BFF": Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI [55.99010491370177]
We argue that we cannot thoroughly map the social impacts of generative AI without mapping the social impacts of anthropomorphic AI.
Anthropomorphic AI systems are increasingly prone to generating outputs that are perceived to be human-like.
arXiv Detail & Related papers (2024-10-11T04:57:41Z)
- Confident Teacher, Confident Student? A Novel User Study Design for Investigating the Didactic Potential of Explanations and their Impact on Uncertainty [1.0855602842179624]
We investigate the impact of explanations on human performance on a challenging visual task using Explainable Artificial Intelligence (XAI).
We find that users become more accurate in their annotations and demonstrate less uncertainty with AI assistance.
We also find negative effects of explanations: users tend to replicate the model's predictions more often when shown explanations.
arXiv Detail & Related papers (2024-09-10T12:59:50Z)
- Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making [5.381004207943597]
We conducted an experiment with seven therapists and ten laypersons on the task of assessing post-stroke survivors' quality of motion.
We analyzed their performance, agreement level on the task, and reliance on AI without and with two types of AI explanations.
Our work discusses the potential of counterfactual explanations to better estimate the accuracy of an AI model and reduce over-reliance on 'wrong' AI outputs.
arXiv Detail & Related papers (2023-08-08T16:23:46Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Improving Human-AI Collaboration With Descriptions of AI Behavior [14.904401331154062]
People work with AI systems to improve their decision making, but often under- or over-rely on AI predictions and perform worse than they would have unassisted.
To help people appropriately rely on AI aids, we propose showing them behavior descriptions.
arXiv Detail & Related papers (2023-01-06T00:33:08Z)
- Advancing Human-AI Complementarity: The Impact of User Expertise and Algorithmic Tuning on Joint Decision Making [10.890854857970488]
Many factors can impact success of Human-AI teams, including a user's domain expertise, mental models of an AI system, trust in recommendations, and more.
Our study examined user performance in a non-trivial blood vessel labeling task where participants indicated whether a given blood vessel was flowing or stalled.
Our results show that while recommendations from an AI-Assistant can aid user decision making, factors such as users' baseline performance relative to the AI and complementary tuning of AI error types significantly impact overall team performance.
arXiv Detail & Related papers (2022-08-16T21:39:58Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance [44.730580857733]
Prior studies observed improvements from explanations only when the AI, alone, outperformed both the human and the best team.
We conduct mixed-method user studies on three datasets, where an AI with accuracy comparable to humans helps participants solve a task.
We find explanations increase the chance that humans will accept the AI's recommendation, regardless of its correctness.
arXiv Detail & Related papers (2020-06-26T03:34:04Z)
- Artificial Artificial Intelligence: Measuring Influence of AI 'Assessments' on Moral Decision-Making [48.66982301902923]
We examined the effect of feedback from false AI on moral decision-making about donor kidney allocation.
We found some evidence that judgments about whether a patient should receive a kidney can be influenced by feedback on participants' own decision-making that is perceived to come from an AI.
arXiv Detail & Related papers (2020-01-13T14:15:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.