Does the Source of a Warning Matter? Examining the Effectiveness of Veracity Warning Labels Across Warners
- URL: http://arxiv.org/abs/2407.21592v1
- Date: Wed, 31 Jul 2024 13:27:26 GMT
- Title: Does the Source of a Warning Matter? Examining the Effectiveness of Veracity Warning Labels Across Warners
- Authors: Benjamin D. Horne
- Abstract summary: We conducted an online, between-subjects experiment to better understand the impact of warning label sources on information trust and sharing intentions.
We found that all four warners significantly decreased trust in false information relative to control, with warnings from AI being modestly more effective than the others.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we conducted an online, between-subjects experiment (N = 2,049) to better understand the impact of warning label sources on information trust and sharing intentions. Across four warners (the social media platform, other social media users, Artificial Intelligence (AI), and fact checkers), we found that all four significantly decreased trust in false information relative to control, but warnings from AI were modestly more effective. All warners significantly decreased sharing intentions for false information, except warnings from other social media users. AI was again the most effective. These results were moderated by prior trust in media and the information itself. Most notably, we found that warning labels from AI were significantly more effective than all other warning labels for participants who reported low trust in news organizations, while warnings from AI were no more effective than any other warning label for participants who reported high trust in news organizations.
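The paper's analysis code is not included in this listing; the following is a minimal sketch of the kind of moderation analysis the abstract describes, assuming a hypothetical tidy data file and hypothetical column names (trust, warner, media_trust).

```python
# Minimal sketch of the abstract's moderation analysis -- NOT the authors'
# published code. Assumes a hypothetical per-participant file with columns:
#   trust       : continuous trust rating for the labeled post
#   warner      : control / platform / users / ai / fact_checkers
#   media_trust : prior trust in news organizations (low / high)
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("warning_label_study.csv")  # hypothetical file name

# Dummy-code warner conditions against the no-label control and interact
# them with prior media trust, mirroring the reported moderation effect.
model = smf.ols(
    "trust ~ C(warner, Treatment(reference='control')) * C(media_trust)",
    data=df,
).fit()
print(model.summary())

# Simple-effects view of the key finding: among low-media-trust
# participants, the AI warner should show the lowest mean trust in
# false posts.
low = df[df["media_trust"] == "low"]
print(low.groupby("warner")["trust"].mean().sort_values())
```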
Related papers
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI- versus human-generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- The effect of source disclosure on evaluation of AI-generated messages: A two-part study [0.0]
We examined the influence of source disclosure on people's evaluation of AI-generated health prevention messages.
We found that source disclosure significantly impacted the evaluation of the messages but did not significantly alter message rankings.
For those with moderate levels of negative attitudes towards AI, source disclosure decreased the preference for AI-generated messages.
arXiv Detail & Related papers (2023-11-27T05:20:47Z)
- Nothing Stands Alone: Relational Fake News Detection with Hypergraph Neural Networks [49.29141811578359]
We propose to leverage a hypergraph to represent group-wise interactions among news items, focusing on important news relations via a dual-level attention mechanism.
Our approach yields strong performance and maintains it even with a small subset of labeled news data.
arXiv Detail & Related papers (2022-12-24T00:19:32Z)
- Deceptive AI Systems That Give Explanations Are Just as Convincing as Honest AI Systems in Human-Machine Decision Making [38.71592583606443]
The ability to discern between true and false information is essential to making sound decisions.
With the recent increase in AI-based disinformation campaigns, it has become critical to understand the influence of deceptive systems on human information processing.
arXiv Detail & Related papers (2022-09-23T20:09:03Z)
- Meaningful Context, a Red Flag, or Both? Users' Preferences for Enhanced Misinformation Warnings on Twitter [6.748225062396441]
This study proposes user-tailored improvements in the soft moderation of misinformation on social media.
We ran an A/B evaluation against Twitter's original warning tags in a 337-participant usability study.
The majority of the participants preferred the enhancements as a nudge toward recognizing and avoiding misinformation.
arXiv Detail & Related papers (2022-05-02T22:47:49Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on the generated PropaNews data are better at detecting human-written disinformation by 3.62-7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- Cybersecurity Misinformation Detection on Social Media: Case Studies on Phishing Reports and Zoom's Threats [1.2387676601792899]
We propose novel approaches for detecting misinformation about cybersecurity and privacy threats on social media.
We developed a framework for detecting inaccurate phishing claims on Twitter.
We also proposed another framework for detecting misinformation related to Zoom's security and privacy threats on multiple platforms.
arXiv Detail & Related papers (2021-10-23T20:45:24Z)
- User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounding arises because sharing behavior inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination; a generic inverse-propensity-weighting sketch of this idea appears after this list.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Adapting Security Warnings to Counter Online Disinformation [6.592035021489205]
We adapt methods and results from the information security warning literature to design effective disinformation warnings.
We found that users routinely ignore contextual warnings but notice interstitial warnings.
We found that a warning's design could effectively inform users or convey a risk of harm.
arXiv Detail & Related papers (2020-08-25T01:10:57Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms; a generic example of such an interpretable detector is sketched after this list.
For a deeper understanding of Explainable AI systems, we discuss the interactions between user engagement, mental models, trust, and performance measures in the explanation process.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
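On the selection-bias idea from "Causal Understanding of Fake News Dissemination on Social Media": a generic inverse-propensity-weighting sketch of how such bias can be alleviated. The column names, features, and estimator below are assumptions for illustration; the paper's actual method may differ.

```python
# Generic inverse-propensity-weighting sketch -- illustrative only.
# Assumes a hypothetical file where each row is a user with attribute
# columns, a binary shared_fake indicator, and an engagement outcome.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("user_sharing.csv")                    # hypothetical file
X = df[["num_followers", "account_age_days"]]           # hypothetical attributes
shared = df["shared_fake"].to_numpy()

# 1. Propensity score: probability of being observed as a fake-news
#    sharer given user attributes.
propensity = LogisticRegression(max_iter=1000).fit(X, shared).predict_proba(X)[:, 1]
propensity = np.clip(propensity, 0.05, 0.95)            # trim extreme weights

# 2. Reweight so sharer and non-sharer groups are comparable on
#    attributes, alleviating selection bias in the comparison.
w = shared / propensity + (1 - shared) / (1 - propensity)
effect = (np.average(df["outcome"], weights=w * shared)
          - np.average(df["outcome"], weights=w * (1 - shared)))
print(f"IPW-adjusted difference in outcome: {effect:.3f}")
```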
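On the interpretable detectors from "Machine Learning Explanations to Prevent Overtrust in Fake News Detection": the listing does not specify the four models, so the sketch below shows one generic example of an interpretable detector, with TF-IDF features and logistic regression whose learned word weights serve as a simple explanation.

```python
# One generic interpretable detector -- illustrative only; not the
# paper's models. Learned word weights act as a global explanation.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

stories = [
    "Scientists confirm chocolate cures all known diseases",  # toy fake
    "City council approves next year's transit budget",       # toy real
]
labels = [1, 0]  # 1 = fake, 0 = real

vec = TfidfVectorizer()
X = vec.fit_transform(stories)
clf = LogisticRegression().fit(X, labels)

# Explanation: the words whose weights push a story toward "fake".
weights = clf.coef_[0]
for i in np.argsort(weights)[-5:][::-1]:
    print(vec.get_feature_names_out()[i], round(float(weights[i]), 3))
```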
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.