Analysing the Extent of Misinformation in Cancer Related Tweets
- URL: http://arxiv.org/abs/2003.13657v3
- Date: Thu, 2 Apr 2020 16:32:15 GMT
- Title: Analysing the Extent of Misinformation in Cancer Related Tweets
- Authors: Rakesh Bal, Sayan Sinha, Swastika Dutta, Rishabh Joshi, Sayan Ghosh,
and Ritam Dutt
- Abstract summary: We present a dataset of tweets that specifically discuss cancer.
We propose an attention-based deep learning model for automated detection of misinformation along with its spread.
This analysis helps us gather relevant insights into various social aspects related to misinformed tweets.
- Score: 6.409065843327199
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Twitter has become one of the most sought-after platforms for discussing a wide variety of topics, including medically relevant issues such as cancer. This helps spread awareness regarding the various causes, cures and prevention methods of cancer. However, no thorough analysis has been performed that examines the validity of such claims. In this work, we aim to tackle the misinformation spread on such platforms. We collect and present a dataset of tweets that specifically discuss cancer and propose an attention-based deep learning model for automated detection of misinformation along with its spread. We then perform a comparative analysis of the linguistic variation in the text corresponding to misinformation and to truth. This analysis helps us gather relevant insights into various social aspects related to misinformed tweets.
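The abstract does not describe the model architecture in detail. Purely as an illustration (not the authors' implementation), a minimal attention-based tweet classifier in PyTorch could look like the sketch below; the vocabulary size, dimensions and the binary label set are assumptions made for the example.

```python
# Minimal sketch (not the paper's implementation): an attention-based tweet
# classifier in PyTorch. Vocabulary size, dimensions, and the binary labels
# (misinformation vs. genuine) are illustrative assumptions.
import torch
import torch.nn as nn


class AttentionTweetClassifier(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)    # additive attention scores
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                   # token_ids: (batch, seq_len)
        states, _ = self.encoder(self.embedding(token_ids))  # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(states), dim=1)    # (batch, seq, 1)
        pooled = (weights * states).sum(dim=1)                # attention-weighted sum
        return self.classifier(pooled)                        # class logits


# Toy usage: a batch of two already-tokenised tweets, padded to length 12.
model = AttentionTweetClassifier()
logits = model(torch.randint(1, 30000, (2, 12)))
print(logits.shape)  # torch.Size([2, 2])
```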
Related papers
- What Evidence Do Language Models Find Convincing? [94.90663008214918]
We build a dataset that pairs controversial queries with a series of real-world evidence documents that contain different facts.
We use this dataset to perform sensitivity and counterfactual analyses to explore which text features most affect LLM predictions.
Overall, we find that current models rely heavily on the relevance of a website to the query, while largely ignoring stylistic features that humans find important.
arXiv Detail & Related papers (2024-02-19T02:15:34Z)
- CMA-R: Causal Mediation Analysis for Explaining Rumour Detection [33.47709912852258]
We apply causal mediation analysis to explain the decision-making process of neural models for rumour detection on Twitter.
We find that our approach, CMA-R, identifies salient tweets that explain model predictions and shows strong agreement with human judgements on the critical tweets that determine the truthfulness of stories.
arXiv Detail & Related papers (2024-02-13T01:31:08Z)
- Lost in Translation -- Multilingual Misinformation and its Evolution [52.07628580627591]
This paper investigates the prevalence and dynamics of multilingual misinformation through an analysis of over 250,000 unique fact-checks spanning 95 languages.
We find that while the majority of misinformation claims are only fact-checked once, 11.7%, corresponding to more than 21,000 claims, are checked multiple times.
Using fact-checks as a proxy for the spread of misinformation, we find 33% of repeated claims cross linguistic boundaries.
arXiv Detail & Related papers (2023-10-27T12:21:55Z)
- YouTube COVID-19 Vaccine Misinformation on Twitter: Platform Interactions and Moderation Blind Spots [0.0]
This study explores the relationship between Twitter and YouTube in spreading COVID-19 vaccine-related misinformation.
We observe that a preponderance of anti-vaccine messaging remains among users who previously shared suspect information.
arXiv Detail & Related papers (2022-08-27T12:55:58Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- Rumor Detection with Self-supervised Learning on Texts and Social Graph [101.94546286960642]
We propose contrastive self-supervised learning on heterogeneous information sources, so as to reveal their relations and characterize rumors better.
We term this framework Self-supervised Rumor Detection (SRD).
Extensive experiments on three real-world datasets validate the effectiveness of SRD for automatic rumor detection on social media.
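The summary does not spell out the SRD objective. Purely as a generic sketch of contrastive self-supervised learning over two views of the same post (for instance a text encoding and a social-graph encoding), an InfoNCE-style loss might be written as follows; the embedding dimension and temperature are assumptions, not the paper's actual formulation.

```python
# Generic sketch of an InfoNCE-style contrastive loss between two views of the
# same post (e.g. a text view and a social-graph view). This is NOT the SRD
# implementation; dimensions and the temperature are illustrative assumptions.
import torch
import torch.nn.functional as F


def contrastive_loss(text_emb, graph_emb, temperature=0.1):
    """text_emb, graph_emb: (batch, dim) embeddings of the same batch of posts."""
    text_emb = F.normalize(text_emb, dim=-1)
    graph_emb = F.normalize(graph_emb, dim=-1)
    logits = text_emb @ graph_emb.t() / temperature   # pairwise similarities
    targets = torch.arange(text_emb.size(0))          # matching views lie on the diagonal
    return F.cross_entropy(logits, targets)


# Toy usage with random 256-dimensional embeddings for a batch of 8 posts.
loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```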
arXiv Detail & Related papers (2022-04-19T12:10:03Z)
- ArCovidVac: Analyzing Arabic Tweets About COVID-19 Vaccination [7.594204373985492]
We release ArCovidVac, the first and largest manually annotated Arabic tweet dataset for the COVID-19 vaccination campaign.
The dataset is enriched with different layers of annotation, including (i) informativeness (more vs. less important tweets); (ii) fine-grained tweet content types (e.g., advice, rumors, restriction, authentic news/information); and (iii) stance towards vaccination.
arXiv Detail & Related papers (2022-01-17T16:19:21Z)
- What goes on inside rumour and non-rumour tweets and their reactions: A Psycholinguistic Analyses [58.75684238003408]
Psycholinguistic analyses of social media text are vital for drawing meaningful conclusions to mitigate misinformation.
This research contributes by performing an in-depth psycholinguistic analysis of rumours related to various kinds of events.
arXiv Detail & Related papers (2021-11-09T07:45:11Z)
- Misleading the Covid-19 vaccination discourse on Twitter: An exploratory study of infodemic around the pandemic [0.45593531937154413]
We collect a moderate-sized representative corpus of approximately 200,000 tweets pertaining to Covid-19 vaccination over a period of seven months (September 2020 - March 2021).
Following a Transfer Learning approach, we utilize the pre-trained Transformer-based XLNet model to classify tweets as Misleading or Non-Misleading.
We build on this to study and contrast the characteristics of tweets in the corpus that are misleading in nature against non-misleading ones.
Several ML models are employed for prediction, achieving up to 90% accuracy, and the importance of each feature is explained using SHAP Explainable AI (XAI).
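The summary names the backbone: a pre-trained XLNet fine-tuned to label tweets as Misleading or Non-Misleading. A minimal Hugging Face transformers sketch of that transfer-learning setup might look like the snippet below; the checkpoint name and label mapping are assumptions, and the training loop and SHAP analysis are omitted.

```python
# Minimal transfer-learning sketch with Hugging Face transformers: XLNet as a
# Misleading / Non-Misleading tweet classifier. Checkpoint name and label
# mapping are assumptions; fine-tuning and SHAP analysis are not shown.
import torch
from transformers import AutoTokenizer, XLNetForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)

tweets = ["Vaccines alter your DNA permanently.",
          "Clinical trials showed the vaccine is safe and effective."]
batch = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits     # shape (2, 2): one logit per class per tweet
preds = logits.argmax(dim=-1)          # assumed mapping: 0 = Non-Misleading, 1 = Misleading
```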
arXiv Detail & Related papers (2021-08-16T17:02:18Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users on controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms, like Facebook, may elicit the emergence of echo chambers.
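As a toy illustration of the kind of analysis described here (inferring user leaning and reconstructing interaction networks), the sketch below builds a small retweet graph with networkx and compares each user's leaning with the mean leaning of their neighbours; the users, edges and leaning scores are made up, and this is not the paper's pipeline.

```python
# Illustrative sketch (not the paper's pipeline): build a retweet network with
# networkx and compare each user's leaning with the mean leaning of their
# neighbours; similar values suggest an echo-chamber-like neighbourhood.
# Users, edges, and leaning scores in [-1, 1] are made-up toy data.
import networkx as nx

leaning = {"a": 0.9, "b": 0.7, "c": 0.8, "d": -0.6, "e": -0.8}
retweets = [("a", "b"), ("b", "c"), ("a", "c"), ("d", "e")]

graph = nx.Graph(retweets)
for user in graph:
    neighbours = list(graph[user])
    neighbour_mean = sum(leaning[n] for n in neighbours) / len(neighbours)
    print(user, leaning[user], round(neighbour_mean, 2))
```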
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.