AI labeling reduces the perceived accuracy of online content but has limited broader effects
- URL: http://arxiv.org/abs/2506.16202v1
- Date: Thu, 19 Jun 2025 10:32:52 GMT
- Title: AI labeling reduces the perceived accuracy of online content but has limited broader effects
- Authors: Chuyao Wang, Patrick Sturgis, Daniel de Kadt
- Abstract summary: We show that explicit AI labeling of a news article about a proposed public policy reduces its perceived accuracy. We find that AI labeling reduces interest in the policy, but neither influences support for the policy nor triggers general concerns about online misinformation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explicit labeling of online content produced by artificial intelligence (AI) is a widely mooted policy for ensuring transparency and promoting public confidence. Yet little is known about the scope of AI labeling effects on public assessments of labeled content. We contribute new evidence on this question from a survey experiment using a high-quality nationally representative probability sample (n = 3,861). First, we demonstrate that explicit AI labeling of a news article about a proposed public policy reduces its perceived accuracy. Second, we test whether there are spillover effects in terms of policy interest, policy support, and general concerns about online misinformation. We find that AI labeling reduces interest in the policy, but neither influences support for the policy nor triggers general concerns about online misinformation. We further find that increasing the salience of AI use reduces the negative impact of AI labeling on perceived accuracy, while one-sided versus two-sided framing of the policy has no moderating effect. Overall, our findings suggest that the effects of algorithm aversion induced by AI labeling of online content are limited in scope.
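The paper does not publish analysis code, so as a rough illustration of the design, the sketch below simulates a labeled-vs-unlabeled survey experiment of the kind the abstract describes and estimates the labeling effect on perceived accuracy with an OLS regression, including an interaction for the salience moderator. All variable names, effect sizes, and the choice of statsmodels are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' code): simulate a 2x2 survey experiment
# and estimate the effect of AI labeling on perceived accuracy via OLS.
# Effect sizes and variable names are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3861  # sample size reported in the abstract

# Random assignment: AI label (treatment) and salience of AI use (moderator).
ai_label = rng.integers(0, 2, n)
salience = rng.integers(0, 2, n)

# Hypothetical outcome: perceived accuracy on a 1-7 scale. The label lowers
# perceived accuracy; heightened salience attenuates that penalty (the
# qualitative pattern reported in the abstract, with made-up magnitudes).
accuracy = (
    4.5
    - 0.40 * ai_label             # assumed labeling penalty
    + 0.20 * ai_label * salience  # assumed attenuation under high salience
    + rng.normal(0, 1.0, n)
).clip(1, 7)

df = pd.DataFrame({"accuracy": accuracy,
                   "ai_label": ai_label,
                   "salience": salience})

# OLS with a label-by-salience interaction, as a moderation test might run.
model = smf.ols("accuracy ~ ai_label * salience", data=df).fit()
print(model.summary().tables[1])
```

In this setup the coefficient on `ai_label` estimates the labeling penalty under low salience, and the interaction term captures its attenuation; with roughly 1,000 respondents per cell, effects of this assumed size would be detectable.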
Related papers
- Advancing Science- and Evidence-based AI Policy [163.43609502905707]
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI. An increasing number of efforts address this problem, often by either (i) contributing research into the risks of AI and their effective mitigation or (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z) - Security Benefits and Side Effects of Labeling AI-Generated Images [27.771584371064968]
We study the implications of labels, including the possibility of mislabeling. We conduct a pre-registered online survey with over 1300 U.S. and EU participants. We find the undesired side effect that human-made images conveying inaccurate claims were perceived as more credible in the presence of labels.
arXiv Detail & Related papers (2025-05-28T20:24:45Z) - Labeling Messages as AI-Generated Does Not Reduce Their Persuasive Effects [33.16943695290958]
One prominent policy proposal requires explicitly labeling AI-generated content to increase transparency and encourage critical thinking about the information. We conducted a survey experiment on a diverse sample of Americans. We found that messages were generally persuasive, influencing participants' views of the policies by 9.74 percentage points on average.
arXiv Detail & Related papers (2025-04-14T04:22:39Z) - Artificial Intelligence in Deliberation: The AI Penalty and the Emergence of a New Deliberative Divide [0.0]
Digital deliberation has expanded democratic participation, yet challenges remain. Recent advances in artificial intelligence (AI) offer potential solutions, but public perceptions of AI's role in deliberation remain underexplored. If AI is integrated into deliberation, public trust, acceptance, and willingness to participate may be affected.
arXiv Detail & Related papers (2025-03-10T16:33:15Z) - Almost AI, Almost Human: The Challenge of Detecting AI-Polished Writing [55.2480439325792]
This study systematically evaluates twelve state-of-the-art AI-text detectors using our AI-Polished-Text Evaluation dataset. Our findings reveal that detectors frequently flag even minimally polished text as AI-generated, struggle to differentiate between degrees of AI involvement, and exhibit biases against older and smaller models.
arXiv Detail & Related papers (2025-02-21T18:45:37Z) - From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate [69.05573887799203]
We argue that understanding these second-order impacts requires an interdisciplinary approach, combining lifecycle assessments with socio-economic analyses. We contend that a narrow focus on direct emissions misrepresents AI's true climate footprint, limiting the scope for meaningful interventions.
arXiv Detail & Related papers (2025-01-27T22:45:06Z) - On the Use of Proxies in Political Ad Targeting [49.61009579554272]
We show that major political advertisers circumvented mitigations by targeting proxy attributes.
Our findings have crucial implications for the ongoing discussion on the regulation of political advertising.
arXiv Detail & Related papers (2024-10-18T17:15:13Z) - Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in large language models (LLMs) on political opinions and decision-making. We found that participants exposed to partisan-biased models were significantly more likely to adopt opinions and make decisions which matched the LLM's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z) - Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications [44.99833362998488]
We identify three categories of AI use -- campaign operations, voter outreach, and deception. While people generally dislike AI in campaigns, they are especially critical of deceptive uses, which they perceive as norm violations. Deceptive AI use increases public support for stricter AI regulation, including calls for an outright ban on AI development.
arXiv Detail & Related papers (2024-08-08T12:58:20Z) - Certification Labels for Trustworthy AI: Insights From an Empirical
Mixed-Method Study [0.0]
This study empirically investigated certification labels as a promising solution.
We demonstrate that labels can significantly increase end-users' trust and willingness to use AI.
However, end-users' preferences for certification labels and their effect on trust and willingness to use AI were more pronounced in high-stakes scenarios.
arXiv Detail & Related papers (2023-05-15T09:51:10Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to a deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.