You are a Bot! -- Studying the Development of Bot Accusations on Twitter
- URL: http://arxiv.org/abs/2302.00546v3
- Date: Sun, 31 Mar 2024 06:51:17 GMT
- Title: You are a Bot! -- Studying the Development of Bot Accusations on Twitter
- Authors: Dennis Assenmacher, Leon Fröhling, Claudia Wagner
- Abstract summary: In the absence of ground truth data, researchers may want to tap into the wisdom of the crowd.
Our research presents the first large-scale study of bot accusations on Twitter.
It shows how the term bot became an instrument of dehumanization in social media conversations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The characterization and detection of bots, with their presumed ability to manipulate society on social media platforms, have been the subject of many research endeavors over the last decade. In the absence of ground truth data (i.e., accounts that are labeled as bots by experts or that self-declare their automated nature), researchers interested in the characterization and detection of bots may want to tap into the wisdom of the crowd. But how many people need to accuse another user of being a bot before we can assume that the account is most likely automated? And more importantly, are bot accusations on social media a valid signal for the detection of bots at all? Our research presents the first large-scale study of bot accusations on Twitter and shows how the term "bot" became an instrument of dehumanization in social media conversations, since it is predominantly used to deny the humanness of conversation partners. Consequently, bot accusations on social media should not be naively used as a signal to train or test bot detection models.
Related papers
- Sleeper Social Bots: a new generation of AI disinformation bots are already a political threat [0.0]
"Sleeper social bots" are AI-driven social bots created to spread disinformation and manipulate public opinion.
Preliminary findings suggest these bots can convincingly pass as human users, actively participate in conversations, and effectively disseminate disinformation.
The implications of our research point to the significant challenges posed by social bots in the upcoming 2024 U.S. presidential election and beyond.
arXiv Detail & Related papers (2024-08-07T19:57:10Z)
- Unmasking Social Bots: How Confident Are We? [41.94295877935867]
We propose to address both bot detection and the quantification of uncertainty at the account level.
This dual focus is crucial as it allows us to leverage additional information related to the quantified uncertainty of each prediction.
Specifically, our approach facilitates targeted interventions for bots when predictions are made with high confidence and suggests caution (e.g., gathering more data) when predictions are uncertain.
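This triage logic can be sketched as a small decision rule; a minimal illustration under assumed confidence thresholds (the cutoffs and action names below are hypothetical, not taken from the paper):

```python
def triage(p_bot, high=0.9, low=0.1):
    """Map a classifier's bot-probability estimate to an action.

    Thresholds are illustrative: confident predictions trigger an
    action, uncertain ones suggest gathering more data first.
    """
    if p_bot >= high:
        return "intervene"          # high-confidence bot prediction
    if p_bot <= low:
        return "no_action"          # high-confidence human prediction
    return "gather_more_data"       # uncertain: defer the decision

print(triage(0.97))  # → intervene
print(triage(0.50))  # → gather_more_data
```

In practice `p_bot` would come from a calibrated account-level classifier; the point of the sketch is only that uncertainty separates accounts warranting intervention from those needing more evidence.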
arXiv Detail & Related papers (2024-07-18T22:33:52Z)
- My Brother Helps Me: Node Injection Based Adversarial Attack on Social Bot Detection [69.99192868521564]
Social platforms such as Twitter are under siege from a multitude of fraudulent users.
Due to the structure of social networks, the majority of detection methods are based on graph neural networks (GNNs), which are susceptible to adversarial attacks.
We propose a node injection-based adversarial attack method designed to deceive bot detection models.
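The intuition behind node injection can be shown with a toy one-hop mean-aggregation step, the kind of neighborhood averaging a GNN layer performs; node names, feature values, and the single-feature setup below are illustrative assumptions, not the paper's method:

```python
def mean_aggregate(features, edges, node):
    """One-hop mean aggregation over a node and its neighbors:
    a simplified stand-in for a GNN message-passing layer."""
    neighbors = [v for u, v in edges if u == node] + \
                [u for u, v in edges if v == node]
    values = [features[node]] + [features[n] for n in neighbors]
    return sum(values) / len(values)

# A bot account with a suspicious feature value (1.0) linked to one human (0.0).
features = {"bot": 1.0, "human": 0.0}
edges = [("bot", "human")]
before = mean_aggregate(features, edges, "bot")   # (1.0 + 0.0) / 2 = 0.5

# Node injection: attach a benign-looking fake neighbor to the target,
# pulling its aggregated representation toward the human range.
features["injected"] = 0.0
edges.append(("bot", "injected"))
after = mean_aggregate(features, edges, "bot")    # (1.0 + 0.0 + 0.0) / 3 ≈ 0.33

print(before, after)
```

Because the aggregated representation of the target moves toward benign values, a detector relying on such features can be deceived without modifying the bot account itself.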
arXiv Detail & Related papers (2023-10-11T03:09:48Z)
- BotArtist: Generic approach for bot detection in Twitter via semi-automatic machine learning pipeline [47.61306219245444]
Twitter has become a target for bots and fake accounts, resulting in the spread of false information and manipulation.
This paper introduces a semi-automatic machine learning pipeline (SAMLP) designed to address the challenges associated with machine learning model development.
We develop a comprehensive bot detection model named BotArtist, based on user profile features.
arXiv Detail & Related papers (2023-05-31T09:12:35Z)
- Should we agree to disagree about Twitter's bot problem? [1.6317061277457]
We argue how assumptions on bot-likely behavior, the detection approach, and the population inspected can affect the estimation of the percentage of bots on Twitter.
We emphasize the responsibility of platforms to be vigilant, transparent, and unbiased in dealing with threats that may affect their users.
arXiv Detail & Related papers (2022-09-20T21:27:25Z)
- Investigating the Validity of Botometer-based Social Bot Studies [0.0]
Social bots are assumed to be automated social media accounts operated by malicious actors with the goal of manipulating public opinion.
Social bot activity has been reported in many different political contexts, including the U.S. presidential elections.
We point out a fundamental theoretical flaw in the widely-used study design for estimating the prevalence of social bots.
arXiv Detail & Related papers (2022-07-23T09:31:30Z)
- Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using an Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
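A feature-based pipeline of this kind can be sketched as follows; note this is a rough stand-in, not the paper's implementation: it uses scikit-learn's `GradientBoostingClassifier` in place of XGBoost, built-in feature importances in place of SHAP values, and entirely synthetic account features with hypothetical names:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for labeled Twitter account features.
rng = np.random.default_rng(0)
n = 400
followers = rng.integers(0, 10_000, n).astype(float)
tweets_per_day = rng.uniform(0, 200, n)

# Hypothetical labeling rule: very high posting rates are bot-like.
y = (tweets_per_day > 120).astype(int)
X = np.column_stack([followers, tweets_per_day])

# Gradient boosting on the tabular features, as a proxy for XGBoost.
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# Feature importances as a coarse proxy for per-feature SHAP attributions.
importances = dict(zip(["followers", "tweets_per_day"],
                       clf.feature_importances_))
print(importances)
```

Since the synthetic label depends only on posting rate, the importance of `tweets_per_day` dominates; SHAP would additionally attribute each individual prediction to its features rather than giving one global ranking.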
arXiv Detail & Related papers (2021-12-08T14:12:24Z)
- Characterizing Retweet Bots: The Case of Black Market Accounts [3.0254442724635173]
We characterize retweet bots that have been uncovered by purchasing retweets from the black market.
We determine whether they are fake accounts or genuine accounts involved in inauthentic activities.
We also analyze their differences from human-controlled accounts.
arXiv Detail & Related papers (2021-12-04T15:52:46Z)
- BotNet Detection On Social Media [0.0]
Social media has become an open playground for user (bot) accounts trying to manipulate other users using these platforms.
There is evidence of bots manipulating election results, which poses a serious threat to entire nations and, by extension, the world.
Our goal is to leverage semantic web mining techniques to identify fake bots or accounts involved in these activities.
arXiv Detail & Related papers (2021-10-12T00:38:51Z)
- CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework in which several empathetic chatbots understand users' implied feelings and reply empathetically across multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and are fine-tuned by deep reinforcement learning.
To respond empathetically, we develop a simulating agent, a Conceptual Human Model, that aids CheerBots during training by taking into account anticipated changes in the user's emotional state in order to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z)
- Detection of Novel Social Bots by Ensembles of Specialized Classifiers [60.63582690037839]
Malicious actors create inauthentic social media accounts controlled in part by algorithms, known as social bots, to disseminate misinformation and agitate online discussion.
We show that different types of bots are characterized by different behavioral features.
We propose a new supervised learning method that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule.
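The maximum rule can be illustrated in a few lines; the bot-class names and probabilities below are hypothetical, and this sketch only shows the combination step, not the specialized classifiers themselves:

```python
def combine_max_rule(scores):
    """Combine outputs of classifiers specialized for different bot
    classes via the maximum rule: the account is assigned the class
    whose specialist is most confident."""
    best = max(scores, key=scores.get)
    return best, scores[best]

# Hypothetical per-class bot probabilities for one account, each from
# a classifier trained only on that class of bots.
specialist_scores = {"spambot": 0.20, "astroturf": 0.85, "fake_follower": 0.40}
print(combine_max_rule(specialist_scores))  # → ('astroturf', 0.85)
```

The appeal of this design is that each specialist only needs to model one behavioral profile, while the maximum rule lets the ensemble flag an account as soon as any single profile matches strongly.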
arXiv Detail & Related papers (2020-06-11T22:59:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.