AI-washing: The Asymmetric Effects of Its Two Types on Consumer Moral Judgments
- URL: http://arxiv.org/abs/2507.04352v1
- Date: Sun, 06 Jul 2025 11:28:45 GMT
- Title: AI-washing: The Asymmetric Effects of Its Two Types on Consumer Moral Judgments
- Authors: Greg Nyilasy, Harsha Gangadharbatla
- Abstract summary: This paper introduces AI-washing as overstating (deceptive boasting) or understating (deceptive denial) a company's real AI usage. A 2x2 experiment examines how these false claims affect consumer attitudes and purchase intentions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As AI hype continues to grow, organizations face pressure to broadcast or downplay purported AI initiatives - even when contrary to truth. This paper introduces AI-washing as overstating (deceptive boasting) or understating (deceptive denial) a company's real AI usage. A 2x2 experiment (N = 401) examines how these false claims affect consumer attitudes and purchase intentions. Results reveal a pronounced asymmetry: deceptive denial evokes more negative moral judgments than honest negation, while deceptive boasting has no effects. We show that perceived betrayal mediates these outcomes. By clarifying how AI-washing erodes trust, the study highlights clear ethical implications for policymakers, marketers, and researchers striving for transparency.
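The abstract's design (a 2x2 experiment with perceived betrayal as a mediator) can be illustrated with a minimal simulation-based mediation sketch. This is not the authors' data or code: the variable names, effect sizes, and simulated responses below are invented purely to show how an indirect effect for the deceptive-denial condition could be estimated with two regressions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 401  # sample size matching the paper's reported N

# Hypothetical 2x2 between-subjects factors:
# deny = 1 if the claim understates AI use, 0 if it overstates it
# deceptive = 1 if the claim is false, 0 if it is honest
deny = rng.integers(0, 2, n)
deceptive = rng.integers(0, 2, n)
interaction = deny * deceptive  # the deceptive-denial cell

# Invented data-generating process: betrayal rises mainly under
# deceptive denial, and attitude falls as betrayal rises.
betrayal = 2 + 1.5 * interaction + rng.normal(0, 1, n)
attitude = 5 - 0.8 * betrayal + rng.normal(0, 1, n)

def ols(X, y):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Path a: deceptive denial -> perceived betrayal (mediator)
a = ols(np.column_stack([deny, deceptive, interaction]), betrayal)[3]
# Path b: betrayal -> attitude, controlling for the treatment terms
b = ols(np.column_stack([deny, deceptive, interaction, betrayal]), attitude)[4]

indirect = a * b  # mediated effect of deceptive denial on attitudes
print(f"a = {a:.2f}, b = {b:.2f}, indirect = {indirect:.2f}")
```

Under these invented parameters the indirect effect is negative, mirroring the qualitative asymmetry the abstract reports: only deceptive denial depresses attitudes, and it does so through betrayal.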
Related papers
- How to Disclose? Strategic AI Disclosure in Crowdfunding [10.090562206470329]
We find that mandatory AI disclosure significantly reduces crowdfunding performance. Funds raised decline by 39.8% and backer counts by 23.9% for AI-involved projects. This adverse effect is systematically moderated by disclosure strategy.
arXiv Detail & Related papers (2026-02-17T16:26:03Z) - The Verification-Value Paradox: A Normative Critique of Gen AI in Legal Practice [0.0]
It is often claimed that machine learning-based generative AI products will drastically streamline and reduce the cost of legal practice. This paper argues that a new paradigm is needed to evaluate AI use in practice. Cases in Australia and elsewhere in which lawyers have been reprimanded for submitting inaccurate AI-generated content to courts suggest this paradigm must be revisited.
arXiv Detail & Related papers (2025-10-23T01:26:37Z) - Can Media Act as a Soft Regulator of Safe AI Development? A Game Theoretical Analysis [57.68073583427415]
We study whether media coverage has the potential to push AI creators into the production of safe products. Our results reveal that media is indeed able to foster cooperation between creators and users, but not always. By shaping public perception and holding developers accountable, media emerges as a powerful soft regulator.
arXiv Detail & Related papers (2025-09-02T12:13:34Z) - AI Debate Aids Assessment of Controversial Claims [86.47978525513236]
We study whether AI debate can guide biased judges toward the truth by having two AI systems debate opposing sides of controversial COVID-19 factuality claims. In our human study, we find that debate, where two AI advisor systems present opposing evidence-based arguments, consistently improves judgment accuracy and confidence calibration. In our AI judge study, we find that AI judges with human-like personas achieve even higher accuracy (78.5%) than human judges (70.1%) and default AI judges without personas (69.8%).
arXiv Detail & Related papers (2025-06-02T19:01:53Z) - The AI Double Standard: Humans Judge All AIs for the Actions of One [0.0]
As AI proliferates, perceptions may become entangled via the moral spillover of attitudes towards one AI to attitudes towards other AIs. We tested how the seemingly harmful and immoral actions of an AI or human agent spill over to attitudes towards other AIs or humans in two preregistered experiments.
arXiv Detail & Related papers (2024-12-08T19:26:52Z) - How Performance Pressure Influences AI-Assisted Decision Making [57.53469908423318]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - The Impact of Generative Artificial Intelligence on Market Equilibrium: Evidence from a Natural Experiment [19.963531237647103]
Generative artificial intelligence (AI) exhibits the capability to generate creative content akin to human output with greater efficiency and reduced costs.
This paper empirically investigates the impact of generative AI on market equilibrium, in the context of China's leading art outsourcing platform.
Our analysis shows that the advent of generative AI led to a 64% reduction in average prices, yet it simultaneously spurred a 121% increase in order volume and a 56% increase in overall revenue.
arXiv Detail & Related papers (2023-11-13T04:31:53Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Truthful AI: Developing and governing AI that does not lie [0.26385121748044166]
Lying -- the use of verbal falsehoods to deceive -- is harmful.
While lying has traditionally been a human affair, AI systems are becoming increasingly prevalent.
This raises the question of how we should limit the harm caused by AI "lies".
arXiv Detail & Related papers (2021-10-13T12:18:09Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions [0.0]
We find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data.
We suggest digital literacy as a potential remedy to ensure the responsible use of AI.
arXiv Detail & Related papers (2021-06-30T15:19:20Z) - The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI on organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z) - The corruptive force of AI-generated advice [0.0]
We test whether AI-generated advice can corrupt people.
We also test whether transparency about AI presence mitigates potential harm.
Results reveal that AI's corrupting force is as strong as humans'.
arXiv Detail & Related papers (2021-02-15T13:15:12Z) - The Managerial Effects of Algorithmic Fairness Activism [1.942960499551692]
We randomly expose business decision-makers to arguments used in AI fairness activism.
Arguments emphasizing the inescapability of algorithmic bias lead managers to abandon AI for manual review by humans.
Emphasis on status quo comparisons yields opposite effects.
arXiv Detail & Related papers (2020-12-04T04:11:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.