Should we agree to disagree about Twitter's bot problem?
- URL: http://arxiv.org/abs/2209.10006v2
- Date: Sat, 5 Nov 2022 22:21:14 GMT
- Title: Should we agree to disagree about Twitter's bot problem?
- Authors: Onur Varol
- Abstract summary: We argue how assumptions about bot-like behavior, the detection approach, and the population inspected can affect the estimation of the percentage of bots on Twitter.
We emphasize the responsibility of platforms to be vigilant, transparent, and unbiased in dealing with threats that may affect their users.
- Score: 1.6317061277457
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Bots, simply defined as accounts controlled by automation, can be used as a
weapon for online manipulation and pose a threat to the health of platforms.
Researchers have studied online platforms to detect, estimate, and characterize
bot accounts. Concerns about the prevalence of bots were raised following Elon
Musk's bid to acquire Twitter. Twitter's recent estimate that 5% of
monetizable daily active users are bot accounts raised questions about its
methodology. This estimate is based on a specific number of active users and
relies on Twitter's criteria for bot accounts. In this work, we want to stress
that crucial questions need to be answered in order to make a proper estimation
and compare different methodologies. We argue how assumptions about bot-like
behavior, the detection approach, and the population inspected can affect the
estimation of the percentage of bots on Twitter. Finally, we emphasize the
responsibility of platforms to be vigilant, transparent, and unbiased in
dealing with threats that may affect their users.
Related papers
- Unmasking Social Bots: How Confident Are We? [41.94295877935867]
We propose to address both bot detection and the quantification of uncertainty at the account level.
This dual focus is crucial as it allows us to leverage additional information related to the quantified uncertainty of each prediction.
Specifically, our approach facilitates targeted interventions for bots when predictions are made with high confidence and suggests caution (e.g., gathering more data) when predictions are uncertain.
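The triage logic described above can be sketched with a toy decision rule. This is a minimal illustration, not the paper's method: it assumes a detector that outputs a bot probability, and uses binary entropy as a stand-in for the paper's account-level uncertainty; the threshold of 0.5 bits is an arbitrary assumption.

```python
import math

def predictive_entropy(p_bot: float) -> float:
    """Binary entropy of the bot probability, in bits; 0 = certain, 1 = maximally uncertain."""
    if p_bot in (0.0, 1.0):
        return 0.0
    return -(p_bot * math.log2(p_bot) + (1 - p_bot) * math.log2(1 - p_bot))

def triage(p_bot: float, entropy_threshold: float = 0.5) -> str:
    """Route an account based on prediction confidence, as the abstract suggests."""
    if predictive_entropy(p_bot) > entropy_threshold:
        return "gather more data"          # uncertain prediction: defer the decision
    return "intervene" if p_bot >= 0.5 else "leave alone"

print(triage(0.97))  # high-confidence bot prediction -> targeted intervention
print(triage(0.55))  # near-chance prediction -> collect more evidence first
```

A real uncertainty estimate would come from the model itself (e.g. an ensemble or a Bayesian head), but the routing logic is the same.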
arXiv Detail & Related papers (2024-07-18T22:33:52Z) - My Brother Helps Me: Node Injection Based Adversarial Attack on Social Bot Detection [69.99192868521564]
Social platforms such as Twitter are under siege from a multitude of fraudulent users.
Due to the structure of social networks, the majority of detection methods are based on graph neural networks (GNNs), which are susceptible to attacks.
We propose a node injection-based adversarial attack method designed to deceive bot detection models.
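To see why injecting a node can fool a graph-based detector, consider a toy, hypothetical detector (not the paper's GNN) that flags an account when a majority of its neighbors are already flagged. A single injected benign-looking neighbor dilutes the signal and flips the verdict:

```python
def is_flagged_bot(graph: dict, node: str, flagged: set, threshold: float = 0.5) -> bool:
    """Toy neighborhood-based detector: flag if most neighbors are flagged accounts."""
    neighbors = graph[node]
    bad = sum(1 for n in neighbors if n in flagged)
    return bad / len(neighbors) > threshold

graph = {"target": ["b1", "b2", "u1"]}   # two flagged neighbors, one normal user
flagged = {"b1", "b2"}

before = is_flagged_bot(graph, "target", flagged)   # 2/3 flagged -> detected

graph["target"].append("injected")                  # adversarial node injection
after = is_flagged_bot(graph, "target", flagged)    # 2/4 flagged -> evades detection

print(before, after)
```

GNNs aggregate neighborhood features in a more sophisticated way, but the attack surface is analogous: an attacker who can add edges can shift the aggregated signal.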
arXiv Detail & Related papers (2023-10-11T03:09:48Z) - BotArtist: Generic approach for bot detection in Twitter via semi-automatic machine learning pipeline [47.61306219245444]
Twitter has become a target for bots and fake accounts, resulting in the spread of false information and manipulation.
This paper introduces a semi-automatic machine learning pipeline (SAMLP) designed to address the challenges correlated with machine learning model development.
We develop a comprehensive bot detection model named BotArtist, based on user profile features.
arXiv Detail & Related papers (2023-05-31T09:12:35Z) - You are a Bot! -- Studying the Development of Bot Accusations on Twitter [1.7626250599622473]
In the absence of ground truth data, researchers may want to tap into the wisdom of the crowd.
Our research presents the first large-scale study of bot accusations on Twitter.
It shows how the term bot became an instrument of dehumanization in social media conversations.
arXiv Detail & Related papers (2023-02-01T16:09:11Z) - Investigating the Validity of Botometer-based Social Bot Studies [0.0]
Social bots are assumed to be automated social media accounts operated by malicious actors with the goal of manipulating public opinion.
Social bot activity has been reported in many different political contexts, including the U.S. presidential elections.
We point out a fundamental theoretical flaw in the widely-used study design for estimating the prevalence of social bots.
arXiv Detail & Related papers (2022-07-23T09:31:30Z) - Manipulating Twitter Through Deletions [64.33261764633504]
Research into influence campaigns on Twitter has mostly relied on identifying malicious activities from tweets obtained via public APIs.
Here, we provide the first exhaustive, large-scale analysis of anomalous deletion patterns involving more than a billion deletions by over 11 million accounts.
We find that a small fraction of accounts delete a large number of tweets daily, enabling two abusive behaviors.
First, limits on tweet volume are circumvented, allowing certain accounts to flood the network with over 26 thousand daily tweets.
Second, coordinated networks of accounts engage in repetitive likes and unlikes of content that is eventually deleted, which can manipulate ranking algorithms.
arXiv Detail & Related papers (2022-03-25T20:07:08Z) - Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using an Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
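The idea behind SHAP is to attribute a prediction to individual features via Shapley values. The sketch below computes exact Shapley values for a tiny hypothetical bot-score model (a stand-in for the paper's XGBoost classifier, which SHAP approximates efficiently for large feature sets); the model and feature values are illustrative assumptions:

```python
from itertools import combinations
from math import factorial

def exact_shapley(model, x, baseline):
    """Exact Shapley values for a small feature set: each feature's weighted
    average marginal contribution over all coalitions of the other features."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Hypothetical linear bot-score model over three profile features.
model = lambda v: 0.6 * v[0] + 0.3 * v[1] + 0.1 * v[2]
phi = exact_shapley(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # for a linear model, attributions match each term's coefficient
```

The attributions sum to the difference between the prediction and the baseline prediction (SHAP's "efficiency" property), which is what makes them useful for explaining individual bot/human verdicts.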
arXiv Detail & Related papers (2021-12-08T14:12:24Z) - Characterizing Retweet Bots: The Case of Black Market Accounts [3.0254442724635173]
We characterize retweet bots that have been uncovered by purchasing retweets from the black market.
We detect whether they are fake or genuine accounts involved in inauthentic activities.
We also analyze their differences from human-controlled accounts.
arXiv Detail & Related papers (2021-12-04T15:52:46Z) - CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework whereby several empathetic chatbots are based on understanding users' implied feelings and replying empathetically for multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were finetuned by deep reinforcement learning.
To respond empathetically, we develop a simulated agent, the Conceptual Human Model, that aids CheerBots during training by taking into account future changes in the user's emotional state in order to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z) - BotSpot: Deep Learning Classification of Bot Accounts within Twitter [2.099922236065961]
Twitter's openness allows programs to generate and control Twitter accounts automatically via the Twitter API.
These accounts, which are known as bots, can automatically perform actions such as tweeting, re-tweeting, following, unfollowing, or direct messaging other accounts.
We introduce a novel bot detection approach using deep learning, with a multi-layer perceptron neural network and nine features of a bot account.
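The classifier described above can be sketched as a single-hidden-layer perceptron over a nine-dimensional feature vector. This is a hedged toy forward pass, not BotSpot itself: the weights are random, the hidden size is an assumption, and the feature values are placeholders.

```python
import math
import random

def mlp_forward(x, w1, b1, w2, b2):
    """Single-hidden-layer perceptron: ReLU hidden units, sigmoid output."""
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    logit = sum(wi * hi for wi, hi in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-logit))   # interpreted as P(account is a bot)

random.seed(0)
n_features, n_hidden = 9, 4                 # nine account features, as in the paper
w1 = [[random.uniform(-1, 1) for _ in range(n_features)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
w2 = [random.uniform(-1, 1) for _ in range(n_hidden)]
b2 = 0.0

features = [0.2] * n_features               # placeholder feature vector
p_bot = mlp_forward(features, w1, b1, w2, b2)
print(0.0 <= p_bot <= 1.0)                  # sigmoid output is a valid probability
```

In practice such a network would be trained with a cross-entropy loss on labeled bot/human accounts; only the inference step is shown here.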
arXiv Detail & Related papers (2021-09-08T15:17:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.