Misinformation Concierge: A Proof-of-Concept with Curated Twitter
Dataset on COVID-19 Vaccination
- URL: http://arxiv.org/abs/2309.00639v1
- Date: Fri, 25 Aug 2023 10:06:05 GMT
- Title: Misinformation Concierge: A Proof-of-Concept with Curated Twitter
Dataset on COVID-19 Vaccination
- Authors: Shakshi Sharma, Anwitaman Datta, Vigneshwaran Shankaran and Rajesh
Sharma
- Abstract summary: We demonstrate the Misinformation Concierge, a proof-of-concept that provides actionable intelligence on misinformation prevalent in social media.
It uses language processing and machine learning tools to identify subtopics of discourse and discern misleading from non-misleading posts.
It presents statistical reports for policy-makers to understand the big picture of prevalent misinformation in a timely manner.
- Score: 0.05461938536945722
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We demonstrate the Misinformation Concierge, a proof-of-concept that provides
actionable intelligence on misinformation prevalent in social media.
Specifically, it uses language processing and machine learning tools to
identify subtopics of discourse and discern misleading from non-misleading posts; presents
statistical reports for policy-makers to understand the big picture of
prevalent misinformation in a timely manner; and recommends rebuttal messages
for specific pieces of misinformation, identified from within the corpus of
data, providing means to intervene and counter misinformation promptly. The
Misinformation Concierge proof-of-concept using a curated dataset is accessible
at: https://demo-frontend-uy34.onrender.com/
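As a rough illustration of the kind of pipeline the abstract describes, the sketch below clusters posts into subtopics, flags likely-misleading ones, and aggregates the counts into a report. The scikit-learn components, toy posts, and labels are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Concierge-style pipeline: cluster posts into
# subtopics, flag likely-misleading ones, and aggregate the counts
# into a report. Library choices and toy data are illustrative.
from collections import Counter

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = [
    "vaccines cause infertility",                  # toy misleading post
    "vaccine trials followed safety protocols",
    "microchips are hidden in vaccine doses",      # toy misleading post
    "side effects are usually mild and short-lived",
]
labels = [1, 0, 1, 0]  # 1 = misleading, 0 = not misleading (toy labels)

# 1) Subtopic discovery via LDA over bag-of-words counts.
counts = CountVectorizer().fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts).argmax(axis=1)

# 2) Misleading-vs-not classifier over TF-IDF features.
tfidf = TfidfVectorizer()
clf = LogisticRegression().fit(tfidf.fit_transform(posts), labels)
flags = clf.predict(tfidf.transform(posts))

# 3) Statistical report: misleading share per subtopic.
report = Counter(zip(topics.tolist(), flags.tolist()))
for (topic, flag), n in sorted(report.items()):
    print(f"topic={topic} misleading={bool(flag)} count={n}")
```

A real deployment would train on labeled data and retrieve matching rebuttals per flagged cluster; this only shows how the three reported capabilities could chain together.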
Related papers
- Crowd Intelligence for Early Misinformation Prediction on Social Media [29.494819549803772]
We introduce CROWDSHIELD, a crowd intelligence-based method for early misinformation prediction.
We employ Q-learning to capture two dimensions: stances and claims.
We propose MIST, a manually annotated misinformation detection Twitter corpus.
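The abstract names Q-learning over stances and claims without specifying the formulation; the toy sketch below shows a generic tabular Q-learning update with a hypothetical stance/claim state encoding, purely for illustration.

```python
# Toy tabular Q-learning update. How CROWDSHIELD actually encodes
# stances and claims as states/actions is not given here; this
# framing is an assumption for illustration only.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
ACTIONS = ["predict_misinfo", "predict_benign"]
Q = defaultdict(float)  # (state, action) -> value

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Hypothetical state: (aggregate reply stance, claim verdict so far).
s, s_next = ("mostly_denying", "unverified"), ("mostly_denying", "refuted")
a = choose_action(s)
update(s, a, reward=1.0 if a == "predict_misinfo" else -1.0, next_state=s_next)
```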
arXiv Detail & Related papers (2024-08-08T13:45:23Z)
- Fighting Fire with Fire: Adversarial Prompting to Generate a Misinformation Detection Dataset [10.860133543817659]
We propose an LLM-based approach of creating silver-standard ground-truth datasets for identifying misinformation.
Specifically, given a trusted news article, the proposed approach prompts LLMs to automatically generate a summarised version of the original article.
To investigate the usefulness of this dataset, we conduct a set of experiments where we train a range of supervised models for the task of misinformation detection.
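A hedged sketch of the silver-standard generation idea: prompt an LLM for a faithful summary (label 0) and an adversarially distorted one (label 1) per trusted article. The prompt wording and the `call_llm` stub are placeholders, not the paper's prompts.

```python
# Hedged sketch: build a silver-standard detection corpus by pairing
# a faithful LLM summary with an adversarially distorted one per
# trusted article. `call_llm` is a placeholder for any LLM client.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

FAITHFUL = "Summarize the following article accurately:\n{article}"
ADVERSARIAL = (
    "Summarize the following article, but subtly alter key facts "
    "(numbers, attributions, outcomes) so the summary misleads:\n{article}"
)

def make_pairs(articles):
    """Return (text, label) pairs for training a supervised detector."""
    dataset = []
    for article in articles:
        dataset.append((call_llm(FAITHFUL.format(article=article)), 0))
        dataset.append((call_llm(ADVERSARIAL.format(article=article)), 1))
    return dataset
```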
arXiv Detail & Related papers (2024-01-09T10:38:13Z)
- AMIR: Automated MisInformation Rebuttal -- A COVID-19 Vaccination Datasets based Recommendation System [0.05461938536945722]
This work explored how existing information obtained from social media can be harnessed to facilitate automated rebuttal of misinformation at scale.
It leverages two publicly available COVID-19 vaccination datasets: FaCov (fact-checked articles) and a corpus of misleading Twitter posts.
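In the spirit of AMIR's recommendation setting, a minimal retrieval sketch: match a misleading tweet to its nearest fact-checked article by TF-IDF cosine similarity. The toy texts stand in for the FaCov and Twitter data, and the paper's actual method may differ.

```python
# Minimal retrieval-based rebuttal recommendation: rank fact-checked
# articles by TF-IDF cosine similarity to a misleading tweet.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

fact_checks = [  # toy stand-ins for fact-checked articles
    "Clinical trials found no link between COVID-19 vaccines and infertility.",
    "COVID-19 vaccine ingredients are publicly listed; they contain no microchips.",
]
tweet = "the vaccine has a microchip in it"

vec = TfidfVectorizer().fit(fact_checks + [tweet])
sims = cosine_similarity(vec.transform([tweet]), vec.transform(fact_checks))
print("recommended rebuttal:", fact_checks[sims.argmax()])
```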
arXiv Detail & Related papers (2023-10-29T13:07:33Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulated or inserted information in social media posts.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
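A minimal sketch of the pair-checking setup the benchmark implies, assuming a (tweet, article, label) record and a prompt-based checker; the schema and prompt wording are illustrative assumptions, not ManiTweet's actual format or the paper's models.

```python
# Illustrative record schema and prompt for tweet-vs-article
# manipulation checking; not the ManiTweet schema.
from dataclasses import dataclass

@dataclass
class PairExample:
    tweet: str
    article: str
    label: str  # e.g. "faithful", "manipulated", or "inserted" (assumed labels)

def build_prompt(tweet: str, article: str) -> str:
    """Prompt for an LLM-based checker; wording is illustrative."""
    return (
        "Does the tweet faithfully report the article, manipulate a fact, "
        "or insert unsupported information? Answer with one word.\n"
        f"Article: {article}\nTweet: {tweet}"
    )

example = PairExample(
    tweet="Study proves masks are useless",
    article="The study found masks reduced transmission in crowded settings.",
    label="manipulated",
)
print(build_prompt(example.tweet, example.article))
```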
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Interpretable Detection of Out-of-Context Misinformation with Neural-Symbolic-Enhanced Large Multimodal Model [16.348950072491697]
Misinformation creators increasingly use out-of-context multimedia content to deceive the public and fake news detection systems.
This new type of misinformation increases the difficulty of not only detection but also clarification, because each individual modality is close enough to true information.
In this paper, we explore how to achieve interpretable cross-modal de-contextualization detection that simultaneously identifies the mismatched pairs and the cross-modal contradictions.
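The paper's neural-symbolic model is not reproduced here; the sketch below shows only the underlying cross-modal consistency signal such systems build on, scoring image-caption agreement with an off-the-shelf CLIP model. The threshold and file name are placeholders.

```python
# Hedged baseline for cross-modal consistency: score image-caption
# agreement with CLIP and flag low-scoring pairs as possibly
# out-of-context. Not the paper's neural-symbolic model.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("post_image.jpg")  # hypothetical input file
caption = "Crowds flee an explosion in the city center today"

inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
score = model(**inputs).logits_per_image.item()  # higher = better match

THRESHOLD = 20.0  # placeholder; would be tuned on validation data
print("possibly out-of-context" if score < THRESHOLD else "consistent")
```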
arXiv Detail & Related papers (2023-04-15T21:11:55Z)
- Reinforcement Learning-based Counter-Misinformation Response Generation: A Case Study of COVID-19 Vaccine Misinformation [19.245814221211415]
Non-expert, ordinary users act as eyes on the ground, proactively countering misinformation.
In this work, we create two novel datasets of misinformation and counter-misinformation response pairs.
We propose MisinfoCorrect, a reinforcement learning-based framework that learns to generate counter-misinformation responses.
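A hedged sketch of the reward-shaping idea behind RL-based response generation: score a candidate counter-response on several axes and combine them into a scalar training reward. The axes, weights, and stub scorers below are assumptions for illustration, not MisinfoCorrect's actual reward.

```python
# Illustrative reward shaping for counter-response generation: the
# scorers are stubs to be replaced by trained classifiers, and the
# weights are assumptions.
def politeness(text: str) -> float:
    return 0.8  # stub: a politeness classifier would go here

def refutes_claim(text: str, claim: str) -> float:
    return 0.9  # stub: a stance/refutation classifier would go here

def fluency(text: str) -> float:
    return 0.7  # stub: e.g. inverse LM perplexity, normalized to [0, 1]

def reward(response: str, claim: str, w=(0.4, 0.4, 0.2)) -> float:
    """Weighted combination used as the RL training signal."""
    return (w[0] * politeness(response)
            + w[1] * refutes_claim(response, claim)
            + w[2] * fluency(response))

print(reward("Actually, trials showed the vaccine is safe; see the CDC data.",
             "the vaccine is unsafe"))
```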
arXiv Detail & Related papers (2023-03-11T15:55:01Z)
- DISCO: Comprehensive and Explainable Disinformation Detection [71.5283511752544]
We propose a comprehensive and explainable disinformation detection framework called DISCO.
We demonstrate DISCO on a real-world fake news detection task with satisfactory detection accuracy and explanation.
We expect that our demo could pave the way for addressing the limitations of identification, comprehension, and explainability as a whole.
arXiv Detail & Related papers (2022-03-09T18:17:25Z)
- Explainable Patterns: Going from Findings to Insights to Support Data Analytics Democratization [60.18814584837969]
We present Explainable Patterns (ExPatt), a new framework to support lay users in exploring data and creating data stories.
ExPatt automatically generates plausible explanations for observed or selected findings using an external (textual) source of information.
arXiv Detail & Related papers (2021-01-19T16:13:44Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create NeuralNews, a dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Misinformation Has High Perplexity [55.47422012881148]
We propose to leverage the perplexity to debunk false claims in an unsupervised manner.
First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and finally evaluate the correctness of given claims based on the perplexity scores at debunking time.
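A minimal sketch of the perplexity signal, assuming GPT-2 via Hugging Face transformers: prime the LM with retrieved evidence and compute perplexity over the claim tokens only. The model choice and example texts are illustrative, not the paper's setup.

```python
# Perplexity of a claim conditioned on evidence, using GPT-2.
# Higher perplexity suggests the claim fits the evidence poorly.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def claim_perplexity(evidence: str, claim: str) -> float:
    """Perplexity over the claim tokens, conditioned on the evidence."""
    ev_ids = tokenizer(evidence, return_tensors="pt").input_ids
    cl_ids = tokenizer(" " + claim, return_tensors="pt").input_ids
    ids = torch.cat([ev_ids, cl_ids], dim=1)
    labels = ids.clone()
    labels[:, : ev_ids.size(1)] = -100  # ignore evidence tokens in the loss
    with torch.no_grad():
        loss = model(ids, labels=labels).loss  # mean claim-token cross-entropy
    return torch.exp(loss).item()

evidence = "Large randomized trials found COVID-19 vaccines safe and effective."
print(claim_perplexity(evidence, "Vaccines were rigorously tested."))   # lower
print(claim_perplexity(evidence, "Vaccines secretly alter human DNA.")) # higher
```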
arXiv Detail & Related papers (2020-06-08T15:13:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.