Case Study on Detecting COVID-19 Health-Related Misinformation in Social
Media
- URL: http://arxiv.org/abs/2106.06811v1
- Date: Sat, 12 Jun 2021 16:26:04 GMT
- Title: Case Study on Detecting COVID-19 Health-Related Misinformation in Social
Media
- Authors: Mir Mehedi A. Pritom, Rosana Montanez Rodriguez, Asad Ali Khan,
Sebastian A. Nugroho, Esra'a Alrashydah, Beatrice N. Ruiz, Anthony Rios
- Abstract summary: This paper presents a mechanism to detect COVID-19 health-related misinformation in social media.
We defined misinformation themes and associated keywords incorporated into the misinformation detection mechanism using applied machine learning techniques.
Our method shows promising results, achieving up to 78% accuracy in classifying health-related misinformation versus true information.
- Score: 7.194177427819438
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The COVID-19 pandemic has generated what public health officials called an
infodemic of misinformation. As social distancing and stay-at-home orders came
into effect, many turned to social media for socializing. This increase in
social media usage has made it a prime vehicle for the spreading of
misinformation. This paper presents a mechanism to detect COVID-19
health-related misinformation in social media following an interdisciplinary
approach. Leveraging social psychology as a foundation and existing
misinformation frameworks, we defined misinformation themes and associated
keywords incorporated into the misinformation detection mechanism using applied
machine learning techniques. Next, using the Twitter dataset, we explored the
performance of the proposed methodology using multiple state-of-the-art machine
learning classifiers. Our method shows promising results, achieving up to 78%
accuracy in classifying health-related misinformation versus true information
using unigram-based NLP features generated from tweets and a Decision Tree
classifier. We also provide suggestions on alternatives for countering
misinformation and discuss the ethical considerations of the study.
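For illustration, the minimal Python/scikit-learn sketch below shows the kind of pipeline the abstract describes: theme keywords to scope COVID-19 health tweets, unigram bag-of-words features, and a Decision Tree classifier. It is an assumption-laden sketch, not the authors' released code; the themes, keywords, tweets, and labels are placeholders.

```python
# Minimal sketch of a keyword-scoped unigram + Decision Tree pipeline.
# Not the authors' code: themes, keywords, tweets, and labels are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical misinformation themes and associated keywords.
THEME_KEYWORDS = {
    "fake_cures": ["cure", "remedy", "bleach", "garlic"],
    "vaccine_claims": ["vaccine", "microchip", "dna"],
}

def is_health_related(text: str) -> bool:
    """Keep only tweets mentioning at least one theme keyword."""
    lowered = text.lower()
    return any(kw in lowered for kws in THEME_KEYWORDS.values() for kw in kws)

# Placeholder corpus: (tweet text, label), 1 = misinformation, 0 = true information.
corpus = [
    ("Drinking bleach is a miracle cure for covid", 1),
    ("Garlic soup will cure covid overnight", 1),
    ("The vaccine puts a microchip in your arm", 1),
    ("Covid vaccines reduce the risk of severe illness", 0),
    ("There is no approved home remedy that cures covid", 0),
    ("Vaccine trials showed no effect on human dna", 0),
]

texts = [t for t, _ in corpus if is_health_related(t)]
labels = [y for t, y in corpus if is_health_related(t)]

# Unigram (bag-of-words) features over the keyword-filtered tweets.
vectorizer = CountVectorizer(ngram_range=(1, 1), lowercase=True)
X = vectorizer.fit_transform(texts)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.33, random_state=42, stratify=labels
)

clf = DecisionTreeClassifier(random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

On real Twitter data, the keyword filter would be applied during collection and the unigram vocabulary learned from the training split only; the toy corpus here is far too small to reproduce the reported accuracy.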
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z) - DisTrack: a new Tool for Semi-automatic Misinformation Tracking in Online Social Networks [46.38614083502535]
DisTrack is designed to combat the spread of misinformation through a combination of Natural Language Processing (NLP), Social Network Analysis (SNA), and graph visualization.
The tool is tailored to capture and analyze the dynamic nature of misinformation spread in digital environments.
arXiv Detail & Related papers (2024-08-01T15:17:33Z) - AMIR: Automated MisInformation Rebuttal -- A COVID-19 Vaccination Datasets based Recommendation System [0.05461938536945722]
This work explored how existing information obtained from social media can be harnessed to facilitate automated rebuttal of misinformation at scale.
It leverages two publicly available COVID-19 vaccination datasets: FaCov (fact-checked articles) and a misleading social media (Twitter) dataset.
arXiv Detail & Related papers (2023-10-29T13:07:33Z) - ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z) - Semantic Similarity Models for Depression Severity Estimation [53.72188878602294]
This paper presents an efficient semantic pipeline to study depression severity in individuals based on their social media writings.
We use test user sentences for producing semantic rankings over an index of representative training sentences corresponding to depressive symptoms and severity levels.
We evaluate our methods on two Reddit-based benchmarks, achieving a 30% improvement over the state of the art in measuring depression severity.
arXiv Detail & Related papers (2022-11-14T18:47:26Z) - Combating Health Misinformation in Social Media: Characterization,
Detection, Intervention, and Open Issues [24.428582199602822]
The rise of various social media platforms also enables the proliferation of online misinformation.
Health misinformation in social media has become an emerging research direction that attracts increasing attention from researchers of different disciplines.
arXiv Detail & Related papers (2022-11-10T01:52:12Z) - Adherence to Misinformation on Social Media Through Socio-Cognitive and
Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z) - Categorising Fine-to-Coarse Grained Misinformation: An Empirical Study
of COVID-19 Infodemic [6.137022734902771]
We introduce a fine-grained annotated dataset of misinformation tweets, including social behaviour annotations.
The dataset not only allows analysis of social behaviours but is also suitable for both evidence-based and non-evidence-based misinformation classification tasks.
arXiv Detail & Related papers (2021-06-22T12:17:53Z) - Understanding Health Misinformation Transmission: An Interpretable Deep
Learning Approach to Manage Infodemics [6.08461198240039]
This study proposes a novel interpretable deep learning approach, Generative Adversarial Network based Piecewise Wide and Attention Deep Learning (GAN-PiWAD) to predict health misinformation transmission in social media.
We select features according to social exchange theory and evaluate GAN-PiWAD on 4,445 misinformation videos.
Our findings provide direct implications for social media platforms and policymakers to design proactive interventions to identify misinformation, control transmissions, and manage infodemics.
arXiv Detail & Related papers (2020-12-21T15:49:19Z) - Assessing the Severity of Health States based on Social Media Posts [62.52087340582502]
We propose a multiview learning framework that models both the textual content and the contextual information to assess the severity of the user's health state.
The diverse NLU views demonstrate the framework's effectiveness on both tasks, as well as on individual diseases, in assessing a user's health.
arXiv Detail & Related papers (2020-09-21T03:45:14Z) - Independent Component Analysis for Trustworthy Cyberspace during High
Impact Events: An Application to Covid-19 [4.629100947762816]
Social media has become an important communication channel during high impact events, such as the COVID-19 pandemic.
Because misinformation in social media can spread rapidly and create social unrest, curtailing its spread during such events is a significant data challenge.
We propose a data-driven solution that is based on the ICA model, such that knowledge discovery and detection of misinformation are achieved jointly.
arXiv Detail & Related papers (2020-06-01T21:48:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.