Susceptibility to Unreliable Information Sources: Swift Adoption with
Minimal Exposure
- URL: http://arxiv.org/abs/2311.05724v1
- Date: Thu, 9 Nov 2023 20:16:06 GMT
- Title: Susceptibility to Unreliable Information Sources: Swift Adoption with
Minimal Exposure
- Authors: Jinyi Ye, Luca Luceri, Julie Jiang, Emilio Ferrara
- Abstract summary: Users tend to adopt low-credibility sources with fewer exposures than high-credibility sources.
The adoption of information sources often mirrors users' prior exposure to sources with comparable credibility levels.
- Score: 10.288282142373976
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Misinformation proliferation on social media platforms is a pervasive threat
to the integrity of online public discourse. Genuine users, susceptible to
others' influence, often unknowingly engage with, endorse, and re-share
questionable pieces of information, collectively amplifying the spread of
misinformation. In this study, we introduce an empirical framework to
investigate users' susceptibility to influence when exposed to unreliable and
reliable information sources. Leveraging two datasets on political and public
health discussions on Twitter, we analyze the impact of exposure on the
adoption of information sources, examining how the reliability of the source
modulates this relationship. Our findings provide evidence that increased
exposure augments the likelihood of adoption. Users tend to adopt
low-credibility sources with fewer exposures than high-credibility sources, a
trend that persists even among non-partisan users. Furthermore, the number of
exposures needed for adoption varies based on the source credibility, with
extreme ends of the spectrum (very high or low credibility) requiring fewer
exposures for adoption. Additionally, we reveal that the adoption of
information sources often mirrors users' prior exposure to sources with
comparable credibility levels. Our research offers critical insights for
mitigating the endorsement of misinformation by vulnerable users, offering a
framework to study the dynamics of content exposure and adoption on social
media platforms.
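The core measurement in the abstract — how many exposures a user accumulates before first adopting a source, compared across credibility tiers — can be sketched in a few lines. This is not the authors' code; the event-log schema (`(user, source, action)` tuples) and the two-tier credibility labels are assumptions for illustration.

```python
from collections import defaultdict

def exposures_before_adoption(events):
    """Count how many times each user saw a source before first sharing it.

    `events` is a time-ordered list of (user, source, action) tuples, where
    action is "exposure" (the source appeared in the user's feed) or
    "adoption" (the user shared content from that source). Returns a dict
    mapping (user, source) to the number of exposures preceding the first
    adoption; pairs that are never adopted are omitted.
    """
    seen = defaultdict(int)   # (user, source) -> exposures so far
    adopted = {}              # (user, source) -> exposure count at adoption
    for user, source, action in events:
        key = (user, source)
        if action == "exposure":
            seen[key] += 1
        elif action == "adoption" and key not in adopted:
            adopted[key] = seen[key]
    return adopted

def mean_threshold_by_tier(adopted, tier):
    """Average exposures-before-adoption per credibility tier.

    `tier` maps each source to a credibility label, e.g. "low" or "high".
    """
    totals = defaultdict(lambda: [0, 0])  # tier label -> [sum, count]
    for (user, source), n in adopted.items():
        t = totals[tier[source]]
        t[0] += n
        t[1] += 1
    return {label: s / c for label, (s, c) in totals.items()}
```

On a toy log where a user adopts a low-credibility source after one exposure but a high-credibility source only after three, `mean_threshold_by_tier` returns `{"low": 1.0, "high": 3.0}`, mirroring the paper's finding that low-credibility sources are adopted with fewer exposures.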
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z) - "I don't trust them": Exploring Perceptions of Fact-checking Entities for Flagging Online Misinformation [3.6754294738197264]
We conducted an online study with 655 US participants to explore user perceptions of eight categories of fact-checking entities across two misinformation topics.
Our results hint at the need for further exploring fact-checking entities that may be perceived as neutral, as well as the potential for incorporating multiple assessments in such labels.
arXiv Detail & Related papers (2024-10-01T17:01:09Z) - Who Checks the Checkers? Exploring Source Credibility in Twitter's Community Notes [0.03511246202322249]
The proliferation of misinformation on social media platforms has become a significant concern.
This study focuses on Twitter's Community Notes feature, which has received limited scrutiny despite its potential role in crowd-sourced fact-checking.
We find that the majority of cited sources are news outlets that are left-leaning and are of high factuality, pointing to a potential bias in the platform's community fact-checking.
arXiv Detail & Related papers (2024-06-18T09:47:58Z) - Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z) - Personalized multi-faceted trust modeling to determine trust links in
social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven multi-faceted trust modeling which incorporates many distinct features for a comprehensive analysis.
We evaluate the proposed framework on a trust-aware item recommendation task using a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z) - Characterizing User Susceptibility to COVID-19 Misinformation on Twitter [40.0762273487125]
This study attempts to answer who constitutes the population vulnerable to online misinformation during the pandemic.
We distinguish different types of users, ranging from social bots to humans with various levels of engagement with COVID-related misinformation.
We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation.
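Identifying which user features correlate with susceptibility, as in the summary above, amounts to measuring the association between each feature and a binary susceptibility label. A minimal stdlib-only sketch using Pearson correlation (a point-biserial correlation when the label is binary); the feature names and the dict-of-users layout are illustrative assumptions, not the paper's actual pipeline:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient; assumes neither column is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def feature_correlations(users, label_key, feature_keys):
    """Correlate each numeric user feature with a 0/1 susceptibility label.

    `users` is a list of dicts; `label_key` names the binary label field and
    `feature_keys` the candidate predictor fields.
    """
    labels = [u[label_key] for u in users]
    return {k: pearson([u[k] for u in users], labels) for k in feature_keys}
```

For example, if users with higher follower counts are also the ones flagged susceptible, `feature_correlations(users, "susceptible", ["followers"])` yields a positive coefficient for `followers`.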
arXiv Detail & Related papers (2021-09-20T13:31:15Z) - Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z) - Information Consumption and Social Response in a Segregated Environment:
the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights into coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z) - Leveraging Multi-Source Weak Social Supervision for Early Detection of
Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless detected early and mitigated.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
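The paper summarized above learns per-instance weights for weakly labeled examples in a meta-learning framework. As a much-simplified stand-in for that idea (not the authors' method), one can down-weight weakly labeled instances on which the weak supervision sources disagree:

```python
def weak_label_weights(weak_votes):
    """Assign a label and a confidence weight to each weakly labeled instance.

    `weak_votes` is a list of per-instance lists of 0/1 labels emitted by
    several weak supervision sources (e.g. heuristics over social
    engagements). Each instance gets its majority-vote label and, as a crude
    proxy for a learned instance weight, the fraction of sources that agree
    with that label: unanimous instances get weight 1.0, contested ones less.
    """
    out = []
    for votes in weak_votes:
        ones = sum(votes)
        label = 1 if ones * 2 >= len(votes) else 0
        agree = ones if label == 1 else len(votes) - ones
        out.append((label, agree / len(votes)))
    return out
```

A downstream classifier would then scale each weak instance's loss by its weight, so noisy weak labels contribute less than clean or unanimous ones.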
arXiv Detail & Related papers (2020-04-03T18:26:33Z) - Information Credibility in the Social Web: Contexts, Approaches, and
Open Issues [2.2133187119466116]
Credibility, also referred to as believability, is a quality perceived by individuals, who are not always able to discern, with their own cognitive capacities, genuine information from fake information.
Several approaches have been proposed to automatically assess credibility in social media.
arXiv Detail & Related papers (2020-01-26T15:42:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.