Meaningful Context, a Red Flag, or Both? Users' Preferences for Enhanced
Misinformation Warnings on Twitter
- URL: http://arxiv.org/abs/2205.01243v1
- Date: Mon, 2 May 2022 22:47:49 GMT
- Title: Meaningful Context, a Red Flag, or Both? Users' Preferences for Enhanced
Misinformation Warnings on Twitter
- Authors: Filipo Sharevski and Amy Devine and Emma Pieroni and Peter Jachim
- Abstract summary: This study proposes user-tailored improvements in the soft moderation of misinformation on social media.
We ran an A/B evaluation with Twitter's original warning tags in a 337-participant usability study.
The majority of the participants preferred the enhancements as a nudge toward recognizing and avoiding misinformation.
- Score: 6.748225062396441
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Warning users about misinformation on social media is not a simple usability
task. Soft moderation has to balance between debunking falsehoods and avoiding
moderation bias while preserving the social media consumption flow. Platforms
thus employ minimally distinguishable warning tags with generic text under
suspected misinformation content. This approach resulted in an unfavorable
outcome where the warnings "backfired" and users believed the misinformation
more, not less. In response, we developed enhancements to the misinformation
warnings where users are advised on the context of the information hazard and
exposed to standard warning iconography. We ran an A/B evaluation with
Twitter's original warning tags in a 337-participant usability study. The
majority of the participants preferred the enhancements as a nudge toward
recognizing and avoiding misinformation. The enhanced warning tags were most
favored by the politically left-leaning and, to a lesser degree, moderate
participants, but they also appealed to roughly a third of the right-leaning
participants. Education level was the only demographic factor shaping
participants' preferences. We use our findings to propose user-tailored
improvements in the soft moderation of misinformation on social media.
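The A/B evaluation above amounts to comparing preference proportions for the original versus enhanced warning tags across participant subgroups (political leaning, education). As a minimal sketch of how such a preference breakdown can be tested for independence, the snippet below runs a chi-square test with SciPy; the contingency counts are invented placeholders, not the study's data.

```python
# Minimal sketch: is preference for enhanced vs. original warning tags
# independent of political leaning? Counts are hypothetical placeholders,
# NOT the figures reported in the paper.
from scipy.stats import chi2_contingency

# Rows: left-leaning, moderate, right-leaning participants
# Columns: preferred enhanced tags, preferred original tags
contingency = [
    [90, 20],
    [60, 40],
    [35, 70],
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
# A small p-value would indicate that warning-tag preference and
# political leaning are not independent in this hypothetical sample.
```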
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Does the Source of a Warning Matter? Examining the Effectiveness of Veracity Warning Labels Across Warners [0.0]
We conducted an online, between-subjects experiment to better understand the impact of warning label sources on information trust and sharing intentions.
We found that warnings from all sources significantly decreased trust in false information relative to the control, but warnings from AI were modestly more effective.
arXiv Detail & Related papers (2024-07-31T13:27:26Z)
- Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- Semi-FairVAE: Semi-supervised Fair Representation Learning with Adversarial Variational Autoencoder [92.67156911466397]
We propose a semi-supervised fair representation learning approach based on an adversarial variational autoencoder.
We use a bias-aware model to capture inherent bias information on the sensitive attribute.
We also use a bias-free model to learn debiased fair representations, applying adversarial learning to remove bias information from them.
arXiv Detail & Related papers (2022-04-01T15:57:47Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforce opposite moderation approaches, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content that may reflect either dissing or endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- "I Won the Election!": An Empirical Analysis of Soft Moderation Interventions on Twitter [0.9391375268580806]
We study the users who share tweets with warning labels on Twitter and their political leaning.
We find that 72% of the tweets with warning labels are shared by Republicans, while only 11% are shared by Democrats.
arXiv Detail & Related papers (2021-01-18T17:39:58Z)
- Predicting Misinformation and Engagement in COVID-19 Twitter Discourse in the First Months of the Outbreak [1.2059055685264957]
We examine nearly 505K COVID-19-related tweets from the initial months of the pandemic to understand misinformation as a function of bot-behavior and engagement.
We found that real users tweet both facts and misinformation, while bots tweet proportionally more misinformation.
arXiv Detail & Related papers (2020-12-03T18:47:34Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Adapting Security Warnings to Counter Online Disinformation [6.592035021489205]
We adapt methods and results from the information security warning literature to design effective disinformation warnings.
We found that users routinely ignore contextual warnings but notice interstitial warnings.
We found that a warning's design could effectively inform users or convey a risk of harm.
arXiv Detail & Related papers (2020-08-25T01:10:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.