Defending Democracy: Using Deep Learning to Identify and Prevent
Misinformation
- URL: http://arxiv.org/abs/2106.02607v1
- Date: Thu, 3 Jun 2021 16:34:54 GMT
- Title: Defending Democracy: Using Deep Learning to Identify and Prevent
Misinformation
- Authors: Anusua Trivedi, Alyssa Suhm, Prathamesh Mahankal, Subhiksha
Mukuntharaj, Meghana D. Parab, Malvika Mohan, Meredith Berger, Arathi
Sethumadhavan, Ashish Jaiman, Rahul Dodhia
- Abstract summary: This study classifies and visualizes the spread of misinformation on a social media network using publicly available Twitter data.
The study further demonstrates the suitability of BERT for providing a scalable model for false information detection.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise in online misinformation in recent years threatens democracies by
distorting authentic public discourse and causing confusion, fear, and even, in
extreme cases, violence. There is a need to understand the spread of false
content through online networks for developing interventions that disrupt
misinformation before it achieves virality. Using a Deep Bidirectional
Transformer for Language Understanding (BERT) and propagation graphs, this
study classifies and visualizes the spread of misinformation on a social media
network using publicly available Twitter data. The results confirm prior
research around user clusters and the virality of false content while improving
the precision of deep learning models for misinformation detection. The study
further demonstrates the suitability of BERT for providing a scalable model for
false information detection, which can contribute to the development of more
timely and accurate interventions to slow the spread of misinformation in
online environments.
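The classification stage described above can be sketched in miniature. The snippet below uses a tiny bag-of-words naive Bayes classifier purely as a stand-in for the paper's fine-tuned BERT model (which would replace this scoring step with transformer embeddings); the example tweets, labels, and class names are all hypothetical, not the study's data.

```python
from collections import Counter
import math

class NaiveBayesStandIn:
    """Bag-of-words classifier standing in for the fine-tuned BERT model;
    in the paper's pipeline, BERT would replace this scoring step."""

    def __init__(self):
        self.word_counts = {0: Counter(), 1: Counter()}
        self.class_counts = Counter()

    def fit(self, texts, labels):
        for text, label in zip(texts, labels):
            self.class_counts[label] += 1
            self.word_counts[label].update(text.lower().split())
        return self

    def predict(self, text):
        total_docs = sum(self.class_counts.values())
        vocab = set(self.word_counts[0]) | set(self.word_counts[1])
        scores = {}
        for label in (0, 1):
            total = sum(self.word_counts[label].values())
            score = math.log(self.class_counts[label] / total_docs)
            for word in text.lower().split():
                # Laplace smoothing over the shared vocabulary
                count = self.word_counts[label][word] + 1
                score += math.log(count / (total + len(vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

# Hypothetical labeled tweets (1 = misinformation, 0 = reliable)
texts = ["miracle cure doctors hate this", "vaccines cause chips in blood",
         "city council approves new budget", "study published in peer reviewed journal"]
labels = [1, 1, 0, 0]
model = NaiveBayesStandIn().fit(texts, labels)
print(model.predict("miracle cure in blood"))  # → 1
```

A production system would swap this class for a fine-tuned sequence classifier, but the pipeline shape — score each post, then trace how flagged posts propagate — stays the same.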
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- AMMeBa: A Large-Scale Survey and Dataset of Media-Based Misinformation In-The-Wild [1.4193873432298625]
We show the results of a two-year study using human raters to annotate online media-based misinformation.
We show the rise of generative AI-based content in misinformation claims.
We also show that "simple" methods have historically dominated, particularly context manipulations.
arXiv Detail & Related papers (2024-05-19T23:05:53Z)
- Harnessing the Power of Text-image Contrastive Models for Automatic Detection of Online Misinformation [50.46219766161111]
We develop a self-learning model to explore contrastive learning in the domain of misinformation identification.
Our model shows superior performance in detecting non-matched image-text pairs when training data is insufficient.
arXiv Detail & Related papers (2023-04-19T02:53:59Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- Rumor Detection with Self-supervised Learning on Texts and Social Graph [101.94546286960642]
We propose contrastive self-supervised learning on heterogeneous information sources, so as to reveal their relations and characterize rumors better.
We term this framework Self-supervised Rumor Detection (SRD).
Extensive experiments on three real-world datasets validate the effectiveness of SRD for automatic rumor detection on social media.
arXiv Detail & Related papers (2022-04-19T12:10:03Z)
- Misinformation Detection in Social Media Video Posts [0.4724825031148411]
Short-form video on social media platforms has become a critical challenge for social media providers.
We develop methods to detect misinformation in social media posts, exploiting modalities such as video and text.
We collect 160,000 video posts from Twitter, and leverage self-supervised learning to learn expressive representations of joint visual and textual data.
arXiv Detail & Related papers (2022-02-15T20:14:54Z)
- SOK: Fake News Outbreak 2021: Can We Stop the Viral Spread? [5.64512235559998]
The omnipresence and ease of use of social networks have revolutionized the generation and distribution of information in today's world.
Unlike traditional media channels, social networks facilitate faster and wider spread of disinformation and misinformation.
The viral spread of false information has serious implications for the behaviors, attitudes, and beliefs of the public.
arXiv Detail & Related papers (2021-05-22T09:26:13Z)
- Understanding Health Misinformation Transmission: An Interpretable Deep Learning Approach to Manage Infodemics [6.08461198240039]
This study proposes a novel interpretable deep learning approach, Generative Adversarial Network based Piecewise Wide and Attention Deep Learning (GAN-PiWAD) to predict health misinformation transmission in social media.
We select features according to social exchange theory and evaluate GAN-PiWAD on 4,445 misinformation videos.
Our findings provide direct implications for social media platforms and policymakers to design proactive interventions to identify misinformation, control transmissions, and manage infodemics.
arXiv Detail & Related papers (2020-12-21T15:49:19Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless detected early.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak instances.
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
- An Information Diffusion Approach to Rumor Propagation and Identification on Twitter [0.0]
We study the dynamics of microscopic-level misinformation spread on Twitter.
Our findings confirm that rumor cascades run deeper, and that rumors masked as news and messages that incite fear diffuse faster than other messages.
arXiv Detail & Related papers (2020-02-24T20:04:54Z)
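The cascade-depth finding above echoes the propagation-graph analysis in the main paper. A minimal sketch of how such cascade statistics might be computed from reshare edges follows; the edge data, user names, and function name are illustrative assumptions, not code from either study.

```python
from collections import defaultdict, deque

def cascade_stats(edges, root):
    """Compute the depth (longest reshare chain) and breadth (most users
    reached at any single depth) of a propagation cascade, given
    (parent, child) reshare edges rooted at the original post."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    depth_counts = defaultdict(int)  # depth -> number of users at that depth
    max_depth = 0
    queue = deque([(root, 0)])       # breadth-first walk from the root post
    while queue:
        node, depth = queue.popleft()
        depth_counts[depth] += 1
        max_depth = max(max_depth, depth)
        for child in children[node]:
            queue.append((child, depth + 1))

    return {"depth": max_depth, "breadth": max(depth_counts.values())}

# Hypothetical cascade: user A posts; B and C reshare from A; D from B, etc.
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("D", "E"), ("C", "F"), ("C", "G")]
print(cascade_stats(edges, "A"))  # → {'depth': 3, 'breadth': 3}
```

Deeper, faster-broadening cascades are the signature of virality that interventions aim to disrupt before it develops.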
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.