Exposure to Social Engagement Metrics Increases Vulnerability to
Misinformation
- URL: http://arxiv.org/abs/2005.04682v2
- Date: Thu, 28 May 2020 07:31:05 GMT
- Title: Exposure to Social Engagement Metrics Increases Vulnerability to
Misinformation
- Authors: Mihai Avram, Nicholas Micallef, Sameer Patil, Filippo Menczer
- Abstract summary: We find that exposure to social engagement signals increases the vulnerability of users to misinformation.
To reduce the spread of misinformation, we call for technology platforms to rethink the display of social engagement metrics.
- Score: 12.737240668157424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: News feeds in virtually all social media platforms include engagement
metrics, such as the number of times each post is liked and shared. We find
that exposure to these social engagement signals increases the vulnerability of
users to misinformation. This finding has important implications for the design
of social media interactions in the misinformation age. To reduce the spread of
misinformation, we call for technology platforms to rethink the display of
social engagement metrics. Further research is needed to investigate whether
and how engagement metrics can be presented without amplifying the spread of
low-credibility information.
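The claim is an empirical comparison: users shown engagement counts interact with low-credibility content at a higher rate than users who are not. As a rough, hypothetical illustration of how such a difference in rates could be tested (this is not the authors' actual analysis, and all counts below are invented), a two-proportion z-test looks like this:

```python
import math

# Hypothetical counts: users who shared at least one low-credibility post,
# out of all users in each condition (engagement metrics shown vs. hidden).
shared_with_metrics, n_with_metrics = 312, 1000        # metrics visible
shared_without_metrics, n_without_metrics = 248, 1000  # metrics hidden

# Two-proportion z-test for H0: sharing rates are equal across conditions.
p1 = shared_with_metrics / n_with_metrics
p2 = shared_without_metrics / n_without_metrics
p_pooled = (shared_with_metrics + shared_without_metrics) / (n_with_metrics + n_without_metrics)
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_with_metrics + 1 / n_without_metrics))
z = (p1 - p2) / se

# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"sharing rate with metrics:    {p1:.3f}")
print(f"sharing rate without metrics: {p2:.3f}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
```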
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM)-based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Easy-access online social media metrics can effectively identify misinformation sharing users [41.94295877935867]
We find that higher tweet frequency is positively associated with low factuality in shared content, while account age is negatively associated with it.
Our findings show that relying on these easy-access social network metrics could serve as a low-barrier approach for initial identification of users who are more likely to spread misinformation.
arXiv Detail & Related papers (2024-08-27T16:41:13Z)
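A minimal sketch of the "low-barrier" screening idea in the entry above, assuming only the two features it mentions (tweet frequency and account age); the data, model choice, and flagging threshold are hypothetical, not the study's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-user features: [tweets per day, account age in days], with
# labels marking users previously observed sharing low-factuality content.
X = np.array([
    [45.0,  120], [60.0,   90], [3.0, 2900], [1.5, 3500],
    [30.0,  200], [0.8, 4100], [25.0,  300], [2.0, 2500],
])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

# Scale the two features, then fit a plain logistic regression.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Score unseen accounts and flag the ones above an arbitrary review threshold.
new_users = np.array([[50.0, 150], [1.0, 3000]])
for features, score in zip(new_users, model.predict_proba(new_users)[:, 1]):
    status = "flag for review" if score > 0.5 else "ok"
    print(f"tweets/day={features[0]:5.1f}, age={features[1]:6.0f} days -> {score:.2f} ({status})")
```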
- Countering Misinformation via Emotional Response Generation [15.383062216223971]
The proliferation of misinformation on social media platforms (SMPs) poses a significant danger to public health, social cohesion and democracy.
Previous research has shown how social correction can be an effective way to curb misinformation.
We present VerMouth, the first large-scale dataset comprising roughly 12 thousand claim-response pairs.
arXiv Detail & Related papers (2023-11-17T15:37:18Z)
- MIDDAG: Where Does Our News Go? Investigating Information Diffusion via Community-Level Information Pathways [114.42360191723469]
We present MIDDAG, an intuitive, interactive system that visualizes the information propagation paths on social media triggered by COVID-19-related news articles.
We construct communities among users and develop the propagation forecasting capability, enabling tracing and understanding of how information is disseminated at a higher level.
arXiv Detail & Related papers (2023-10-04T02:08:11Z)
- Social Media Harms as a Trilemma: Asymmetry, Algorithms, and Audacious Design Choices [0.0]
Social media has expanded in use and reach since the inception of early social networks in the early 2000s.
We argue that, as information (eco)systems, social media sites are vulnerable in three respects.
We unpack suggestions from various allied disciplines for untangling these three A's: asymmetry, algorithms, and audacious design choices.
arXiv Detail & Related papers (2023-04-28T08:12:38Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- The Impact of Disinformation on a Controversial Debate on Social Media [1.299941371793082]
We study how pervasive the presence of disinformation is in the Italian debate around immigration on Twitter.
By characterising Twitter users with an Untrustworthiness score, we are able to see that such bad information consumption habits are not equally distributed across users.
arXiv Detail & Related papers (2021-06-30T10:29:07Z)
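One plausible way to operationalise the Untrustworthiness score mentioned in the entry above (an assumption for illustration, not necessarily the authors' definition) is the fraction of a user's shared links that resolve to domains on a low-credibility list; both the domain list and the users below are made up:

```python
from urllib.parse import urlparse

# Hypothetical low-credibility domains; real studies typically rely on
# curated lists published by fact-checking organisations.
LOW_CREDIBILITY_DOMAINS = {"fakenews.example", "clickbait.example"}

def untrustworthiness(shared_urls):
    """Fraction of shared links whose domain is on the low-credibility list."""
    if not shared_urls:
        return 0.0
    hits = sum(
        1 for url in shared_urls
        if urlparse(url).netloc.lower().removeprefix("www.") in LOW_CREDIBILITY_DOMAINS
    )
    return hits / len(shared_urls)

# Example: one heavy sharer of low-credibility links and one mostly reliable user.
users = {
    "user_a": ["https://fakenews.example/a", "https://clickbait.example/b", "https://bbc.co.uk/news"],
    "user_b": ["https://reuters.com/x", "https://bbc.co.uk/y"],
}
for user, urls in users.items():
    print(user, round(untrustworthiness(urls), 2))
```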
- Analysing Social Media Network Data with R: Semi-Automated Screening of Users, Comments and Communication Patterns [0.0]
Communication on social media platforms is increasingly widespread across societies.
Fake news, hate speech and radicalizing elements are part of this modern form of communication.
A basic understanding of these mechanisms and communication patterns could help to counteract negative forms of communication.
arXiv Detail & Related papers (2020-11-26T14:52:01Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
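A toy version of the leaning-versus-network analysis summarised in the entry above, with invented leaning scores and edges, and networkx assumed as a dependency; a strong positive correlation between a user's own leaning and the average leaning of their neighbourhood is the kind of signal typically read as evidence of echo chambers:

```python
import networkx as nx

# Hypothetical content-based leaning per user, in [-1, 1].
leaning = {"a": -0.8, "b": -0.6, "c": -0.7, "d": 0.7, "e": 0.9, "f": 0.6}

# Hypothetical interaction network (e.g., reshares or comments).
G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"), ("d", "e"), ("d", "f"), ("e", "f"), ("c", "d")])

# Compare each user's leaning with the mean leaning of their neighbourhood.
pairs = []
for user in G.nodes:
    neighbours = list(G.neighbors(user))
    if neighbours:
        neighbour_mean = sum(leaning[n] for n in neighbours) / len(neighbours)
        pairs.append((leaning[user], neighbour_mean))
        print(f"{user}: own leaning {leaning[user]:+.2f}, neighbourhood mean {neighbour_mean:+.2f}")

# Pearson correlation between own leaning and neighbourhood leaning.
n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
cov = sum((x - mx) * (y - my) for x, y in pairs) / n
sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
print("correlation:", round(cov / (sx * sy), 2))
```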
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless detected early for mitigation.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
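The framework above learns weights for weak, engagement-derived labels via meta-learning; the sketch below is a deliberately simplified stand-in that only illustrates down-weighting weak labels relative to a small clean set, using a fixed hand-picked weight. All data here are synthetic and the 0.3 weight is arbitrary:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Small clean set with verified labels (e.g., fact-checked articles).
X_clean = rng.normal(size=(20, 5))
y_clean = (X_clean[:, 0] > 0).astype(int)

# Larger weakly labelled set where labels come from noisy social signals
# (e.g., heuristics over who engaged with a post) and are partly wrong.
X_weak = rng.normal(size=(200, 5))
y_weak = (X_weak[:, 0] + rng.normal(scale=1.0, size=200) > 0).astype(int)

# Give each weak instance a lower weight than the clean instances.
# (The actual framework estimates per-instance weights with meta-learning.)
X = np.vstack([X_clean, X_weak])
y = np.concatenate([y_clean, y_weak])
sample_weight = np.concatenate([np.ones(len(y_clean)), np.full(len(y_weak), 0.3)])

clf = LogisticRegression()
clf.fit(X, y, sample_weight=sample_weight)
print("accuracy on the clean set:", clf.score(X_clean, y_clean))
```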