"Thought I'd Share First" and Other Conspiracy Theory Tweets from the
COVID-19 Infodemic: Exploratory Study
- URL: http://arxiv.org/abs/2012.07729v2
- Date: Thu, 15 Apr 2021 13:56:54 GMT
- Title: "Thought I'd Share First" and Other Conspiracy Theory Tweets from the
COVID-19 Infodemic: Exploratory Study
- Authors: Dax Gerts, Courtney D. Shelley, Nidhi Parikh, Travis Pitts, Chrysm
Watson Ross, Geoffrey Fairchild, Nidia Yadria Vaquera Chavez, Ashlynn R.
Daughton
- Abstract summary: Health-related misinformation threatens adherence to public health messaging.
Monitoring misinformation on social media is critical to understanding the evolution of ideas that have potentially negative public health impacts.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: The COVID-19 outbreak has left many people isolated within their
homes; these people are turning to social media for news and social connection,
which leaves them vulnerable to believing and sharing misinformation.
Health-related misinformation threatens adherence to public health messaging,
and monitoring its spread on social media is critical to understanding the
evolution of ideas that have potentially negative public health impacts.
Results: Analysis using model-labeled data was beneficial for increasing the
proportion of data matching misinformation indicators. Random forest classifier
metrics varied across the four conspiracy theories considered (F1 scores
between 0.347 and 0.857); this performance increased as the given conspiracy
theory was more narrowly defined. We showed that misinformation tweets
express more negative sentiment than non-misinformation tweets
and that theories evolve over time, incorporating details from unrelated
conspiracy theories as well as real-world events. Conclusions: Although we
focus here on health-related misinformation, this combination of approaches is
not specific to public health and is valuable for characterizing misinformation
in general, which is an important first step in creating targeted messaging to
counteract its spread. Initial messaging should aim to preempt generalized
misinformation before it becomes widespread, while later messaging will need to
target evolving conspiracy theories and the new facets of each as they become
incorporated.
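The classification step described in the abstract can be illustrated with a minimal sketch: label tweets for one conspiracy-theory indicator, train a random forest on TF-IDF features, and report an F1 score. This is not the authors' released code; it assumes scikit-learn, and all tweets and labels below are invented for illustration.

```python
# Hypothetical pipeline sketch: random forest over TF-IDF features,
# evaluated with F1, as the abstract describes. Toy data, invented labels.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score

# 1 = matches the misinformation indicator, 0 = does not (invented).
train_tweets = [
    "thought i'd share first before they delete this",
    "the towers are making people sick pass it on",
    "they don't want you to know the real cause",
    "wake up this was planned years ago",
    "county reports new testing site opening monday",
    "health officials release updated guidance today",
    "hospital adds capacity as cases rise locally",
    "new preprint examines transmission in schools",
]
train_labels = [1, 1, 1, 1, 0, 0, 0, 0]

test_tweets = [
    "share this before it gets taken down",
    "clinic extends weekend testing hours",
]
test_labels = [1, 0]

# TF-IDF features fit on the training tweets only.
vec = TfidfVectorizer()
X_train = vec.fit_transform(train_tweets)
X_test = vec.transform(test_tweets)

# Random forest classifier; a fixed seed makes the toy run reproducible.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, train_labels)
pred = clf.predict(X_test)
f1 = f1_score(test_labels, pred)
print(f"F1 on held-out tweets: {f1:.3f}")
```

In the paper's setting, this evaluation would be repeated per conspiracy theory, which is where the reported F1 spread (0.347 to 0.857) comes from: narrower theory definitions yield cleaner labels and higher scores.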
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Folk Models of Misinformation on Social Media [10.667165962654996]
We identify at least five folk models that conceptualize misinformation as either: political (counter)argumentation, out-of-context narratives, inherently fallacious information, external propaganda, or simply entertainment.
We use the rich conceptualizations embodied in these folk models to uncover how social media users minimize adverse reactions to misinformation encounters in their everyday lives.
arXiv Detail & Related papers (2022-07-26T00:40:26Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and
Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- "COVID-19 was a FIFA conspiracy #curropt": An Investigation into the
Viral Spread of COVID-19 Misinformation [60.268682953952506]
We estimate the extent to which misinformation has influenced the course of the COVID-19 pandemic using natural language processing models.
We provide a strategy to combat social media posts that are likely to cause widespread harm.
arXiv Detail & Related papers (2022-06-12T19:41:01Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content that may reflect either dissing or endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Misinfo Belief Frames: A Case Study on Covid & Climate News [49.979419711713795]
We propose a formalism for understanding how readers perceive the reliability of news and the impact of misinformation.
We introduce the Misinfo Belief Frames (MBF) corpus, a dataset of 66k inferences over 23.5k headlines.
Our results using large-scale language modeling to predict misinformation frames show that machine-generated inferences can influence readers' trust in news headlines.
arXiv Detail & Related papers (2021-04-18T09:50:11Z)
- Social Media COVID-19 Misinformation Interventions Viewed Positively,
But Have Limited Impact [16.484676698355884]
Social media platforms like Facebook and Twitter rolled out design interventions, including banners linking to authoritative resources and more specific "false information" labels.
We found that most participants indicated a positive attitude towards interventions, particularly post-specific labels for misinformation.
Still, the majority of participants discovered or corrected misinformation through other means, most commonly web searches, suggesting room for platforms to do more to stem the spread of COVID-19 misinformation.
arXiv Detail & Related papers (2020-12-21T00:02:04Z)
- Fighting the COVID-19 Infodemic in Social Media: A Holistic Perspective
and a Call to Arms [42.7332883578842]
With the outbreak of the COVID-19 pandemic, people turned to social media to read and to share timely information.
There was also a new blending of medical and political misinformation and disinformation, which gave rise to the first global infodemic.
This is a complex problem that needs a holistic approach combining the perspectives of journalists, fact-checkers, policymakers, government entities, social media platforms, and society as a whole.
arXiv Detail & Related papers (2020-07-15T21:18:30Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of contents produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
- Why do People Share Misinformation during the COVID-19 Pandemic? [0.6963971634605797]
We develop and test a research model hypothesizing why people share unverified COVID-19 information through social media.
Our findings suggest a person's trust in online information and perceived information overload are strong predictors of unverified information sharing.
Females were significantly more likely to suffer from cyberchondria; however, males were more likely to share news without fact-checking its source.
arXiv Detail & Related papers (2020-04-20T19:56:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.