Characterizing the roles of bots during the COVID-19 infodemic on
Twitter
- URL: http://arxiv.org/abs/2011.06249v4
- Date: Thu, 19 Aug 2021 04:58:00 GMT
- Title: Characterizing the roles of bots during the COVID-19 infodemic on
Twitter
- Authors: Wentao Xu, Kazutoshi Sasahara
- Abstract summary: An infodemic is an emerging phenomenon caused by an overabundance of information online.
We examined the roles of bots in the case of the COVID-19 infodemic and the diffusion of non-credible information.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An infodemic is an emerging phenomenon caused by an overabundance of
information online. This proliferation of information makes it difficult for
the public to distinguish trustworthy news and credible information from
untrustworthy sites and non-credible sources. The perils of an infodemic
became apparent with the outbreak of the COVID-19 pandemic, and bots (i.e.,
automated accounts controlled by a set of algorithms) are suspected of
spreading it. Although previous research has revealed that bots played a central
role in spreading misinformation during major political events, how bots
behaved during the infodemic is unclear. In this paper, we examined the roles
of bots in the case of the COVID-19 infodemic and the diffusion of non-credible
information such as "5G" and "Bill Gates" conspiracy theories and content
related to "Trump" and "WHO" by analyzing retweet networks and retweeted items.
We show that these retweet networks have a segregated topology, which suggests
that right-wing self-media accounts and conspiracy theorists may drive this
opinion cleavage, while malicious bots may amplify the diffusion of
non-credible information. Although human users may exert a larger baseline
influence on information diffusion than bots, the effects of bots are
non-negligible in an infodemic situation.
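The retweet-network analysis described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' actual pipeline: the toy edge list, the community labels, and the simple cross-group mixing measure are all assumptions made for the example (the study itself analyzed large real retweet networks).

```python
from collections import Counter

# Toy retweet edge list: (retweeter, original_poster).
# Community labels ("A"/"B") are assumed given, e.g. from a prior
# clustering step; here they are hand-assigned for illustration.
edges = [
    ("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # within community A
    ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # within community B
    ("a1", "b1"),                               # one cross-community retweet
]
group = {"a1": "A", "a2": "A", "a3": "A",
         "b1": "B", "b2": "B", "b3": "B"}

def cross_group_fraction(edges, group):
    """Fraction of retweet edges that cross community boundaries.

    Values near 0 indicate a segregated (echo-chamber-like) retweet
    topology; values near 0.5 indicate well-mixed communities.
    """
    kinds = Counter(
        "cross" if group[u] != group[v] else "within"
        for u, v in edges
    )
    return kinds["cross"] / sum(kinds.values())

print(round(cross_group_fraction(edges, group), 3))  # 1 of 7 edges crosses
```

In this toy network only one of seven retweet edges crosses the community boundary, so the measure is low (about 0.143), mimicking the segregated topology the paper reports.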
Related papers
- An Exploratory Analysis of COVID Bot vs Human Disinformation
Dissemination stemming from the Disinformation Dozen on Telegram [5.494111035517598]
The COVID-19 pandemic of 2021 led to a worldwide health crisis that was accompanied by an infodemic.
A group of 12 social media personalities, dubbed the "Disinformation Dozen", were identified as key spreaders of disinformation regarding the COVID-19 virus, treatments, and vaccines.
This study focuses on the spread of disinformation propagated by this group on Telegram, a mobile messaging and social media platform.
arXiv Detail & Related papers (2024-02-22T01:10:11Z)
- Machine Learning-based Automatic Annotation and Detection of COVID-19
Fake News [8.020736472947581]
COVID-19 impacted every part of the world, and misinformation about the outbreak traveled faster than the virus itself.
Existing work neglects the presence of bots that act as a catalyst in the spread.
We propose an automated approach for labeling data using verified fact-checked statements on a Twitter dataset.
arXiv Detail & Related papers (2022-09-07T13:55:59Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and
Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- "COVID-19 was a FIFA conspiracy #curropt": An Investigation into the
Viral Spread of COVID-19 Misinformation [60.268682953952506]
We estimate the extent to which misinformation has influenced the course of the COVID-19 pandemic using natural language processing models.
We provide a strategy to combat social media posts that are likely to cause widespread harm.
arXiv Detail & Related papers (2022-06-12T19:41:01Z)
- Identification of Twitter Bots based on an Explainable ML Framework: the
US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using an Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
arXiv Detail & Related papers (2021-12-08T14:12:24Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users engaging with both types of content, with a slight preference for questionable content that may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- The State of Infodemic on Twitter [0.0]
Social media posts and platforms are at risk of rumors and misinformation in the face of the serious uncertainty surrounding the virus itself.
We have presented an exploratory analysis of the tweets and the users who are involved in spreading misinformation.
We then delved into machine learning models and natural language processing techniques to identify if a tweet contains misinformation.
arXiv Detail & Related papers (2021-05-17T10:58:35Z)
- Misinfo Belief Frames: A Case Study on Covid & Climate News [49.979419711713795]
We propose a formalism for understanding how readers perceive the reliability of news and the impact of misinformation.
We introduce the Misinfo Belief Frames (MBF) corpus, a dataset of 66k inferences over 23.5k headlines.
Our results using large-scale language modeling to predict misinformation frames show that machine-generated inferences can influence readers' trust in news headlines.
arXiv Detail & Related papers (2021-04-18T09:50:11Z)
- The Role of the Crowd in Countering Misinformation: A Case Study of the
COVID-19 Infodemic [15.885290526721544]
We focus on tweets related to the COVID-19 pandemic, analyzing the spread of misinformation, professional fact checks, and the crowd response to popular misleading claims about COVID-19.
We train a classifier to create a novel dataset of 155,468 COVID-19-related tweets, containing 33,237 false claims and 33,413 refuting arguments.
We observe that the surge in misinformation tweets results in a quick response and a corresponding increase in tweets that refute such misinformation.
arXiv Detail & Related papers (2020-11-11T13:48:44Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News
Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Prevalence of Low-Credibility Information on Twitter During the COVID-19
Outbreak [5.203919289609101]
We estimate the prevalence of links to low-credibility information on Twitter during the outbreak.
We find that the combined volume of tweets linking to low-credibility information is comparable to the volume of New York Times articles and CDC links.
Social bots are involved in both posting and amplifying low-credibility information, although the majority of volume is generated by likely humans.
arXiv Detail & Related papers (2020-04-29T21:08:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.