Adherence to Misinformation on Social Media Through Socio-Cognitive and
Group-Based Processes
- URL: http://arxiv.org/abs/2206.15237v1
- Date: Thu, 30 Jun 2022 12:34:24 GMT
- Title: Adherence to Misinformation on Social Media Through Socio-Cognitive and
Group-Based Processes
- Authors: Alexandros Efstratiou and Emiliano De Cristofaro
- Abstract summary: We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
- Score: 79.79659145328856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous work suggests that people's preference for different kinds of
information depends on more than just accuracy. This may be because the
messages that different pieces of information contain are either appealing or
repulsive. Whereas factual information must often convey
uncomfortable truths, misinformation can have little regard for veracity and
leverage psychological processes which increase its attractiveness and
proliferation on social media. In this review, we argue that when
misinformation proliferates, this happens because the social media environment
enables adherence to misinformation by reducing, rather than increasing, the
psychological cost of doing so. We cover how attention is often shifted away
from accuracy and towards other goals, how misinformation affects social and
individual cognition and the conditions under which debunking is most
effective, and how the formation of online groups affects information
consumption patterns, often leading to greater polarization and radicalization.
Throughout, we make the case that polarization and misinformation adherence are
closely tied. We identify ways in which the psychological cost of adhering to
misinformation can be increased when designing anti-misinformation
interventions or resilient affordances, and we outline open research questions
that the CSCW community can take up in further understanding this cost.
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Correcting misinformation on social media with a large language model [14.69780455372507]
Real-world misinformation, which is often multimodal, can mislead through diverse tactics such as conflating correlation with causation.
Such misinformation is severely understudied, challenging to address, and harms various social domains, particularly on social media.
We propose MUSE, an LLM augmented with access to and credibility evaluation of up-to-date information.
arXiv Detail & Related papers (2024-03-17T10:59:09Z)
- Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We find a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z)
- Diverse Misinformation: Impacts of Human Biases on Detection of Deepfakes on Networks [1.5910150494847917]
We call "diverse misinformation" the complex relationships between human biases and demographics represented in misinformation.
We find that accuracy varies by demographics, and participants are generally better at classifying videos that match them.
Our model suggests that diverse contacts might provide "herd correction" where friends can protect each other.
arXiv Detail & Related papers (2022-10-18T17:49:53Z)
- Folk Models of Misinformation on Social Media [10.667165962654996]
We identify at least five folk models that conceptualize misinformation as either: political (counter)argumentation, out-of-context narratives, inherently fallacious information, external propaganda, or simply entertainment.
We use the rich conceptualizations embodied in these folk models to uncover how social media users minimize adverse reactions to misinformation encounters in their everyday lives.
arXiv Detail & Related papers (2022-07-26T00:40:26Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforce opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in a tendency for users to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- An Agenda for Disinformation Research [3.083055913556838]
Disinformation erodes trust in the socio-political institutions that are the fundamental fabric of democracy.
The distribution of false, misleading, or inaccurate information with the intent to deceive is an existential threat to the United States.
New tools and approaches must be developed to leverage these affordances to understand and address this growing challenge.
arXiv Detail & Related papers (2020-12-15T19:32:36Z)
- "Thought I'd Share First" and Other Conspiracy Theory Tweets from the COVID-19 Infodemic: Exploratory Study [0.0]
Health-related misinformation threatens adherence to public health messaging.
Monitoring misinformation on social media is critical to understanding the evolution of ideas that have potentially negative public health impacts.
arXiv Detail & Related papers (2020-12-14T17:24:59Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Information Consumption and Social Response in a Segregated Environment: the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights into coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.