Investigating the Validity of Botometer-based Social Bot Studies
- URL: http://arxiv.org/abs/2207.11474v1
- Date: Sat, 23 Jul 2022 09:31:30 GMT
- Title: Investigating the Validity of Botometer-based Social Bot Studies
- Authors: Florian Gallwitz and Michael Kreil
- Abstract summary: Social bots are assumed to be automated social media accounts operated by malicious actors with the goal of manipulating public opinion.
Social bot activity has been reported in many different political contexts, including the U.S. presidential elections.
We point out a fundamental theoretical flaw in the widely-used study design for estimating the prevalence of social bots.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The idea that social media platforms like Twitter are inhabited by vast
numbers of social bots has become widely accepted in recent years. Social bots
are assumed to be automated social media accounts operated by malicious actors
with the goal of manipulating public opinion. They are credited with the
ability to produce content autonomously and to interact with human users.
Social bot activity has been reported in many different political contexts,
including the U.S. presidential elections, discussions about migration, climate
change, and COVID-19. However, the relevant publications either use crude and
questionable heuristics to discriminate between supposed social bots and humans
or -- in the vast majority of the cases -- fully rely on the output of
automatic bot detection tools, most commonly Botometer. In this paper, we point
out a fundamental theoretical flaw in the widely-used study design for
estimating the prevalence of social bots. Furthermore, we empirically
investigate the validity of peer-reviewed Botometer-based studies by closely
and systematically inspecting hundreds of accounts that had been counted as
social bots. We were unable to find a single social bot. Instead, we found
mostly accounts undoubtedly operated by human users, the vast majority of them
using Twitter in an inconspicuous and unremarkable fashion without the
slightest traces of automation. We conclude that studies claiming to
investigate the prevalence, properties, or influence of social bots based on
Botometer have, in reality, just investigated false positives and artifacts of
this approach.
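The base-rate issue behind this argument can be illustrated with a short back-of-the-envelope calculation. The numbers below (true bot share, sensitivity, specificity) are purely illustrative assumptions, not figures taken from the paper or from Botometer: even a seemingly accurate classifier, applied to a population that is overwhelmingly human, flags mostly humans.

```python
# Illustrative base-rate calculation; all numbers are assumptions, not
# figures from the paper or from Botometer.
n_accounts = 1_000_000    # accounts sampled in a hypothetical prevalence study
true_bot_share = 0.005    # assume only 0.5% of accounts are actually automated
sensitivity = 0.90        # assumed share of real bots that get flagged
specificity = 0.95        # assumed share of humans correctly left unflagged

bots = n_accounts * true_bot_share
humans = n_accounts - bots

true_positives = sensitivity * bots
false_positives = (1 - specificity) * humans
flagged = true_positives + false_positives

print(f"apparent bot prevalence: {flagged / n_accounts:.1%}")               # ~5.4%
print(f"flagged accounts that are human: {false_positives / flagged:.1%}")  # ~91.7%
```

Under these assumed numbers, the apparent "bot prevalence" would be roughly ten times the true share of automated accounts, and more than nine out of ten flagged accounts would be humans, which matches the kind of false-positive-dominated samples the authors describe.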
Related papers
- Entendre, a Social Bot Detection Tool for Niche, Fringe, and Extreme Social Media [1.4913052010438639]
We introduce Entendre, an open-access, scalable, and platform-agnostic bot detection framework.
We exploit the idea that most social platforms share a generic template, where users can post content, approve content, and provide a bio.
To demonstrate Entendre's effectiveness, we used it to explore the presence of bots among accounts posting racist content on the now-defunct right-wing platform Parler.
arXiv Detail & Related papers (2024-08-13T13:50:49Z) - Unmasking Social Bots: How Confident Are We? [41.94295877935867]
We propose to address both bot detection and the quantification of uncertainty at the account level.
This dual focus is crucial as it allows us to leverage additional information related to the quantified uncertainty of each prediction.
Specifically, our approach facilitates targeted interventions for bots when predictions are made with high confidence and suggests caution (e.g., gathering more data) when predictions are uncertain.
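A minimal sketch of such an uncertainty-aware decision rule is shown below; the thresholds, function name, and action labels are hypothetical illustrations, not details taken from the paper.

```python
# Hypothetical three-way decision rule for one account, based on a predicted
# bot probability and a quantified uncertainty. Thresholds are arbitrary examples.
def decide(p_bot: float, uncertainty: float,
           p_threshold: float = 0.9, u_threshold: float = 0.2) -> str:
    """Return an action for an account given its prediction and uncertainty."""
    if uncertainty > u_threshold:
        return "gather more data"        # prediction too uncertain to act on
    if p_bot >= p_threshold:
        return "targeted intervention"   # confident bot prediction
    return "no action"                   # confident non-bot prediction

print(decide(p_bot=0.95, uncertainty=0.05))  # -> targeted intervention
print(decide(p_bot=0.55, uncertainty=0.35))  # -> gather more data
```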
arXiv Detail & Related papers (2024-07-18T22:33:52Z) - Adversarial Botometer: Adversarial Analysis for Social Bot Detection [1.9280536006736573]
Social bots produce content that mimics human creativity.
Malicious social bots emerge to deceive people with their unrealistic content.
We evaluate the behavior of a text-based bot detector in a competitive environment.
arXiv Detail & Related papers (2024-05-03T11:28:21Z) - My Brother Helps Me: Node Injection Based Adversarial Attack on Social Bot Detection [69.99192868521564]
Social platforms such as Twitter are under siege from a multitude of fraudulent users.
Due to the structure of social networks, the majority of detection methods are based on graph neural networks (GNNs), which are susceptible to attacks.
We propose a node injection-based adversarial attack method designed to deceive bot detection models.
arXiv Detail & Related papers (2023-10-11T03:09:48Z) - You are a Bot! -- Studying the Development of Bot Accusations on Twitter [1.7626250599622473]
In the absence of ground truth data, researchers may want to tap into the wisdom of the crowd.
Our research presents the first large-scale study of bot accusations on Twitter.
It shows how the term bot became an instrument of dehumanization in social media conversations.
arXiv Detail & Related papers (2023-02-01T16:09:11Z) - Should we agree to disagree about Twitter's bot problem? [1.6317061277457]
We argue that assumptions about bot-like behavior, the detection approach, and the population inspected can all affect the estimated percentage of bots on Twitter.
We emphasize the responsibility of platforms to be vigilant, transparent, and unbiased in dealing with threats that may affect their users.
arXiv Detail & Related papers (2022-09-20T21:27:25Z) - Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using an Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
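The pipeline described above can be sketched roughly as follows, assuming the third-party xgboost and shap packages; the random features and labels are placeholders standing in for labeled Twitter account data, and this is not the authors' actual implementation.

```python
# Sketch of a supervised bot classifier with SHAP explanations.
# Requires the third-party `xgboost` and `shap` packages; the random features
# and labels below are placeholders for labeled Twitter account data.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
X = rng.random((500, 6))             # e.g. follower count, tweet rate, ...
y = rng.integers(0, 2, size=500)     # 1 = labeled bot, 0 = labeled human

model = xgb.XGBClassifier(n_estimators=100, max_depth=4)
model.fit(X, y)

explainer = shap.TreeExplainer(model)    # tree-model-specific SHAP explainer
shap_values = explainer.shap_values(X)   # one contribution per account and feature
print(np.asarray(shap_values).shape)
```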
arXiv Detail & Related papers (2021-12-08T14:12:24Z) - Characterizing Retweet Bots: The Case of Black Market Accounts [3.0254442724635173]
We characterize retweet bots that have been uncovered by purchasing retweets from the black market.
We detect whether they are fake or genuine accounts involved in inauthentic activities.
We also analyze their differences from human-controlled accounts.
arXiv Detail & Related papers (2021-12-04T15:52:46Z) - CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework in which several empathetic chatbots understand users' implied feelings and reply empathetically over multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative and are fine-tuned by deep reinforcement learning.
To respond empathetically, we develop a simulating agent, a Conceptual Human Model, that aids CheerBots during training by considering how the user's emotional state may change in the future, in order to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z) - A ground-truth dataset and classification model for detecting bots in GitHub issue and PR comments [70.1864008701113]
Bots are used in GitHub repositories to automate repetitive activities that are part of the distributed software development process.
This paper proposes a ground-truth dataset, based on a manual analysis with high interrater agreement, of pull request and issue comments from 5,000 distinct GitHub accounts.
We propose an automated classification model to detect bots, taking as main features the number of empty and non-empty comments of each account, the number of comment patterns, and the inequality between comments within comment patterns.
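A rough sketch of how such per-account features could be computed is given below; grouping identical comment texts into "patterns" and using a Gini coefficient as the inequality measure are illustrative simplifications, not the paper's exact definitions.

```python
# Sketch of the per-account comment features mentioned above. Grouping identical
# comment texts into "patterns" and using a Gini coefficient as the inequality
# measure are illustrative simplifications, not the paper's exact definitions.
from collections import Counter

def gini(sizes: list[int]) -> float:
    """Gini coefficient of pattern sizes: 0 = evenly spread, near 1 = one dominant pattern."""
    if not sizes or sum(sizes) == 0:
        return 0.0
    s = sorted(sizes)
    n = len(s)
    cum = sum((i + 1) * x for i, x in enumerate(s))
    return (2 * cum) / (n * sum(s)) - (n + 1) / n

def account_features(comments: list[str]) -> dict[str, float]:
    empty = sum(1 for c in comments if not c.strip())
    non_empty = len(comments) - empty
    pattern_sizes = list(Counter(c.strip() for c in comments if c.strip()).values())
    return {
        "empty_comments": empty,
        "non_empty_comments": non_empty,
        "comment_patterns": len(pattern_sizes),
        "pattern_inequality": gini(pattern_sizes),
    }

print(account_features(["Thanks!", "Thanks!", "Thanks!", "", "LGTM"]))
```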
arXiv Detail & Related papers (2020-10-07T09:30:52Z) - Detection of Novel Social Bots by Ensembles of Specialized Classifiers [60.63582690037839]
Malicious actors create inauthentic social media accounts controlled in part by algorithms, known as social bots, to disseminate misinformation and agitate online discussion.
We show that different types of bots are characterized by different behavioral features.
We propose a new supervised learning method that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule.
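A minimal sketch of the maximum-rule combination of specialized classifiers is shown below; the bot class names, the logistic-regression base learners, and the random placeholder data are illustrative assumptions rather than the method's actual components.

```python
# Sketch of the maximum rule over specialized classifiers: each classifier is
# trained for one bot class (vs. humans), and an account's overall bot score is
# the maximum probability over all of them. Class names, the logistic-regression
# base learners, and the random placeholder data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
bot_classes = ["spam", "fake_follower", "political"]   # assumed class names

classifiers = {}
for name in bot_classes:
    X = rng.random((200, 5))             # placeholder behavioral features
    y = rng.integers(0, 2, size=200)     # 1 = bot of this class, 0 = human
    classifiers[name] = LogisticRegression().fit(X, y)

def bot_score(x: np.ndarray) -> float:
    """Maximum rule: highest bot probability over all specialized classifiers."""
    return max(clf.predict_proba(x.reshape(1, -1))[0, 1]
               for clf in classifiers.values())

print(round(bot_score(rng.random(5)), 3))
```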
arXiv Detail & Related papers (2020-06-11T22:59:59Z)