Reinforcement Learning-based Counter-Misinformation Response Generation:
A Case Study of COVID-19 Vaccine Misinformation
- URL: http://arxiv.org/abs/2303.06433v1
- Date: Sat, 11 Mar 2023 15:55:01 GMT
- Title: Reinforcement Learning-based Counter-Misinformation Response Generation:
A Case Study of COVID-19 Vaccine Misinformation
- Authors: Bing He, Mustaque Ahamad, Srijan Kumar
- Abstract summary: Non-expert ordinary users act as eyes-on-the-ground who proactively counter misinformation.
In this work, we create two novel datasets of misinformation and counter-misinformation response pairs.
We propose MisinfoCorrect, a reinforcement learning-based framework that learns to generate counter-misinformation responses.
- Score: 19.245814221211415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The spread of online misinformation threatens public health, democracy, and
the broader society. While professional fact-checkers form the first line of
defense by fact-checking popular false claims, they do not engage directly in
conversations with misinformation spreaders. On the other hand, non-expert
ordinary users act as eyes-on-the-ground who proactively counter misinformation
-- recent research has shown that 96% of counter-misinformation responses are made
by ordinary users. However, research also found that two-thirds of these responses
are rude and lack evidence. This work seeks to create a counter-misinformation
response generation model to empower users to effectively correct
misinformation. This objective is challenging due to the absence of datasets
containing ground truth for ideal counter-misinformation responses, and the lack
of models that can generate responses grounded in communication theories. In this
work, we create two novel datasets of misinformation and counter-misinformation
response pairs from in-the-wild social media and crowdsourcing from
college-educated students. We annotate the collected data to distinguish poor
responses from ideal ones that are factual and polite and that refute misinformation. We
propose MisinfoCorrect, a reinforcement learning-based framework that learns to
generate counter-misinformation responses for an input misinformation post. The
model rewards the generator for increasing politeness, factuality, and
refutation attitude while retaining text fluency and relevancy. Quantitative
and qualitative evaluation shows that our model outperforms several baselines
by generating high-quality counter-responses. This work illustrates the promise
of generative text models for social good -- here, to help create a safe and
reliable information ecosystem. The code and data are available at
https://github.com/claws-lab/MisinfoCorrect.
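The reward design described above lends itself to a weighted-sum formulation. Below is a minimal sketch assuming hypothetical per-attribute scorers that each return a value in [0, 1]; the scorer names, weights, and combination rule are illustrative assumptions, not the authors' implementation (the repository above contains the actual code).

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class RewardWeights:
    # Illustrative weights; the paper's actual weighting is not reproduced here.
    politeness: float = 1.0
    factuality: float = 1.0
    refutation: float = 1.0
    fluency: float = 0.5
    relevance: float = 0.5

def composite_reward(
    response: str,
    misinfo_post: str,
    scorers: Dict[str, Callable[..., float]],
    w: Optional[RewardWeights] = None,
) -> float:
    """Weighted sum of per-attribute scores (each assumed in [0, 1]);
    this scalar would serve as the RL reward for the generator."""
    w = w or RewardWeights()
    return (
        w.politeness * scorers["politeness"](response)
        + w.factuality * scorers["factuality"](response)
        + w.refutation * scorers["refutation"](response, misinfo_post)
        + w.fluency * scorers["fluency"](response)
        + w.relevance * scorers["relevance"](response, misinfo_post)
    )
```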
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
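As a rough illustration of the personalization idea in experiment (2), a corrective prompt could be assembled from a user profile as sketched below; the profile fields and instruction wording are our assumptions, not the paper's templates.

```python
def build_intervention_prompt(misinfo_post: str, profile: dict) -> str:
    """Assemble an LLM prompt for a personalized corrective reply.
    The profile keys and instructions below are illustrative only."""
    return (
        "You are drafting a polite, evidence-based reply to a social media "
        "post that contains misinformation.\n"
        f"Post: {misinfo_post}\n"
        f"Reader demographics: {profile.get('demographics', 'unknown')}\n"
        f"Reader's stated beliefs: {profile.get('beliefs', 'unknown')}\n"
        "Write a short correction that speaks to this reader's likely "
        "concerns, cites a credible source, and avoids condescension."
    )
```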
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Crowd Intelligence for Early Misinformation Prediction on Social Media [29.494819549803772]
We introduce CROWDSHIELD, a crowd intelligence-based method for early misinformation prediction.
We employ Q-learning to capture the two dimensions -- stances and claims.
We propose MIST, a manually annotated misinformation detection Twitter corpus.
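The Q-learning mentioned in the summary presumably builds on the standard tabular update, sketched below; the action set, state encoding, and hyperparameters are placeholders rather than CROWDSHIELD's actual design.

```python
import random
from collections import defaultdict

ACTIONS = ("flag_as_misinformation", "wait")   # placeholder action set
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # illustrative hyperparameters
Q = defaultdict(float)                         # Q[(state, action)] -> value

def choose_action(state):
    """Epsilon-greedy policy over the tabular Q-function."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """Standard one-step Q-learning update rule."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```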
arXiv Detail & Related papers (2024-08-08T13:45:23Z)
- Missci: Reconstructing Fallacies in Misrepresented Science [84.32990746227385]
Health-related misinformation on social networks can lead to poor decision-making and real-world dangers.
Missci is a novel argumentation theoretical model for fallacious reasoning.
We present Missci as a dataset to test the critical reasoning abilities of large language models.
arXiv Detail & Related papers (2024-06-05T12:11:10Z)
- Evidence-Driven Retrieval Augmented Response Generation for Online Misinformation [18.18205773056388]
We propose retrieval augmented response generation for online misinformation (RARG).
RARG collects supporting evidence from scientific sources and generates counter-misinformation responses based on the evidence.
We propose a reward function to maximize the utilization of the retrieved evidence while maintaining the quality of the generated text.
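As a hedged illustration of the reward idea above, one could blend an evidence-utilization term with a text-quality term; the token-overlap proxy and weighting below are stand-ins, not the paper's formulation.

```python
def evidence_utilization(response: str, evidence: list) -> float:
    """Fraction of each evidence passage's vocabulary echoed in the
    response, averaged over passages (a crude utilization proxy)."""
    resp = set(response.lower().split())
    if not evidence:
        return 0.0
    scores = []
    for passage in evidence:
        vocab = set(passage.lower().split())
        scores.append(len(resp & vocab) / max(len(vocab), 1))
    return sum(scores) / len(scores)

def rarg_style_reward(response: str, evidence: list,
                      quality: float, lam: float = 0.5) -> float:
    """Weighted blend of evidence use and generation quality (both in [0, 1])."""
    return lam * evidence_utilization(response, evidence) + (1 - lam) * quality
```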
arXiv Detail & Related papers (2024-03-22T05:05:45Z)
- Countering Misinformation via Emotional Response Generation [15.383062216223971]
The proliferation of misinformation on social media platforms (SMPs) poses a significant danger to public health, social cohesion, and democracy.
Previous research has shown how social correction can be an effective way to curb misinformation.
We present VerMouth, the first large-scale dataset comprising roughly 12 thousand claim-response pairs.
arXiv Detail & Related papers (2023-11-17T15:37:18Z)
- Attacking Open-domain Question Answering by Injecting Misinformation [116.25434773461465]
We study the risk of misinformation to Question Answering (QA) models by investigating the sensitivity of open-domain QA models to misinformation documents.
Experiments show that QA models are vulnerable to even small amounts of evidence contamination brought by misinformation.
We discuss the necessity of building a misinformation-aware QA system that integrates question-answering and misinformation detection.
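A minimal sketch of such a misinformation-aware QA pipeline, with the retriever, detector, and reader injected as callables; the composition and threshold are our assumptions, not the paper's system.

```python
from typing import Callable, List

def misinformation_aware_qa(
    question: str,
    retrieve: Callable[[str], List[str]],
    misinfo_prob: Callable[[str], float],   # detector: P(passage is misinformation)
    answer: Callable[[str, List[str]], str],
    threshold: float = 0.5,
) -> str:
    """Drop retrieved passages the detector flags, then answer from the rest."""
    passages = retrieve(question)
    clean = [p for p in passages if misinfo_prob(p) < threshold]
    return answer(question, clean)
```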
arXiv Detail & Related papers (2021-10-15T01:55:18Z)
- Rome was built in 1776: A Case Study on Factual Correctness in Knowledge-Grounded Response Generation [18.63673852470077]
We present a human annotation setup to identify three different response types.
We automatically create a new corpus called Conv-FEVER that is adapted from the Wizard of Wikipedia dataset.
arXiv Detail & Related papers (2021-10-11T17:48:11Z)
- FaVIQ: FAct Verification from Information-seeking Questions [77.7067957445298]
We construct a large-scale fact verification dataset called FaVIQ using information-seeking questions posed by real users.
Our claims are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
arXiv Detail & Related papers (2021-07-05T17:31:44Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
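One common way to operationalize visual-semantic inconsistency is cosine similarity in a joint image-text embedding space; the sketch below assumes a CLIP-style encoder supplies the embeddings and is not necessarily the paper's detector.

```python
import numpy as np

def consistency(image_emb: np.ndarray, caption_emb: np.ndarray) -> float:
    """Cosine similarity between image and caption embeddings; low values
    suggest the caption does not describe the image."""
    a = image_emb / np.linalg.norm(image_emb)
    b = caption_emb / np.linalg.norm(caption_emb)
    return float(a @ b)

def flag_article(pairs, tau: float = 0.2) -> bool:
    """Flag an article if any image-caption pair scores below tau
    (the threshold is an illustrative assumption)."""
    return any(consistency(img, cap) < tau for img, cap in pairs)
```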
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- Misinformation Has High Perplexity [55.47422012881148]
We propose to leverage the perplexity to debunk false claims in an unsupervised manner.
First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and finally evaluate the correctness of given claims based on the perplexity scores at debunking time.
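The two-step recipe above (prime with evidence, then score the claim) can be reproduced with any causal language model; the sketch below uses GPT-2 from Hugging Face transformers as a stand-in for the paper's model.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def claim_perplexity(evidence: str, claim: str) -> float:
    """Perplexity of the claim tokens conditioned on the evidence prompt."""
    prompt_ids = tok(evidence + "\n", return_tensors="pt").input_ids
    claim_ids = tok(claim, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, claim_ids], dim=1)
    labels = ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100   # ignore the evidence positions
    loss = model(input_ids=ids, labels=labels).loss  # mean NLL over claim tokens
    return float(torch.exp(loss))

# Under this scheme, a claim that fits the evidence scores low perplexity;
# an unsupported claim scores high and is treated as likely misinformation.
```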
arXiv Detail & Related papers (2020-06-08T15:13:44Z)