What is a Social Media Bot? A Global Comparison of Bot and Human Characteristics
- URL: http://arxiv.org/abs/2501.00855v1
- Date: Wed, 01 Jan 2025 14:45:43 GMT
- Title: What is a Social Media Bot? A Global Comparison of Bot and Human Characteristics
- Authors: Lynnette Hui Xian Ng, Kathleen M. Carley
- Abstract summary: Bots tend to use linguistic cues that can be easily automated while humans use cues that require dialogue understanding.
These conclusions are based on a large-scale analysis of social media tweets from ~200 million users across 7 events.
- Score: 5.494111035517598
- Abstract: Chatter on social media is 20% bots and 80% humans. Chatter by bots and humans is consistently different: bots tend to use linguistic cues that can be easily automated, while humans use cues that require dialogue understanding. Bots use words that match the identities they choose to present, while humans may send messages that are not related to the identities they present. Bots and humans differ in their communication structure: sampled bots have a star interaction structure, while sampled humans have a hierarchical structure. These conclusions are based on a large-scale analysis of social media tweets from ~200 million users across 7 events. Social media bots took the world by storm when social-cybersecurity researchers realized that social media users consisted not only of humans but also of artificial agents called bots. These bots wreak havoc online by spreading disinformation and manipulating narratives. Most research on bots is based on special-purpose definitions, mostly predicated on the event studied. This article begins by asking, "What is a bot?", and studies the underlying principles of how bots differ from humans. We develop a first-principles definition of a social media bot. With this definition as a premise, we systematically compare characteristics between bots and humans across global events, and reflect on how the software-programmed bot is an Artificial Intelligence algorithm and on its potential to evolve as technology advances. Based on our results, we provide recommendations for the use and regulation of bots. Finally, we discuss open challenges and future directions: Detect, to systematically identify these automated and potentially evolving bots; Differentiate, to evaluate the goodness of a bot in terms of its content postings and relationship interactions; Disrupt, to moderate the impact of malicious bots.
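The abstract's contrast between a star interaction structure (bots) and a hierarchical one (humans) can be illustrated with a small sketch. This is not the paper's measured metric, just a toy heuristic: in a star, a single hub account touches every interaction edge, while a hierarchical structure spreads interactions over several levels. The account names and edges below are hypothetical.

```python
# Toy heuristic (not the paper's method): a reply graph is "star-like" if one
# hub node appears in every edge, i.e. all interactions go through one account.
from collections import Counter

def is_star_like(edges):
    """True if a single node appears in every edge (hub-and-spoke)."""
    if not edges:
        return False
    degree = Counter(node for edge in edges for node in edge)
    _, hub_degree = degree.most_common(1)[0]
    return hub_degree == len(edges)

# Hypothetical bot-like graph: every interaction involves account "bot0".
star_edges = [("bot0", f"user{i}") for i in range(5)]
# Hypothetical human-like graph: replies branch through intermediaries.
tree_edges = [("a", "b"), ("a", "c"), ("b", "d"), ("b", "e"), ("c", "f")]

print(is_star_like(star_edges))  # True
print(is_star_like(tree_edges))  # False
```

Real analyses would of course use richer graph statistics (depth, branching, centrality) rather than a single hub check, but the sketch captures the structural distinction the abstract describes.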
Related papers
- Sleeper Social Bots: a new generation of AI disinformation bots are already a political threat [0.0]
"Sleeper social bots" are AI-driven social bots created to spread disinformation and manipulate public opinion.
Preliminary findings suggest these bots can convincingly pass as human users, actively participate in conversations, and effectively disseminate disinformation.
The implications of our research point to the significant challenges posed by social bots in the upcoming 2024 U.S. presidential election and beyond.
arXiv Detail & Related papers (2024-08-07T19:57:10Z) - Spot the bot: Coarse-Grained Partition of Semantic Paths for Bots and Humans [55.2480439325792]
This paper focuses on comparing structures of the coarse-grained partitions of semantic paths for human-written and bot-generated texts.
As the semantic structure may be different for different languages, we investigate Russian, English, German, and Vietnamese languages.
arXiv Detail & Related papers (2024-02-27T10:38:37Z) - User-Like Bots for Cognitive Automation: A Survey [4.075971633195745]
Despite the hype, bots with human user-like cognition do not currently exist.
They lack situational awareness on the digital platforms where they operate.
We discuss how cognitive architectures can contribute to creating intelligent software bots.
arXiv Detail & Related papers (2023-11-20T20:16:24Z) - Bot or Human? Detecting ChatGPT Imposters with A Single Question [29.231261118782925]
Large language models (LLMs) have recently demonstrated impressive capabilities in natural language understanding and generation.
There is a concern that they can be misused for malicious purposes, such as fraud or denial-of-service attacks.
We propose a framework named FLAIR, Finding Large Language Model Authenticity via a Single Inquiry and Response, to detect conversational bots in an online manner.
arXiv Detail & Related papers (2023-05-10T19:09:24Z) - You are a Bot! -- Studying the Development of Bot Accusations on Twitter [1.7626250599622473]
In the absence of ground truth data, researchers may want to tap into the wisdom of the crowd.
Our research presents the first large-scale study of bot accusations on Twitter.
It shows how the term bot became an instrument of dehumanization in social media conversations.
arXiv Detail & Related papers (2023-02-01T16:09:11Z) - Neural Generation Meets Real People: Building a Social, Informative Open-Domain Dialogue Agent [65.68144111226626]
Chirpy Cardinal aims to be both informative and conversational.
We let both the user and bot take turns driving the conversation.
Chirpy Cardinal placed second out of nine bots in the Alexa Prize Socialbot Grand Challenge.
arXiv Detail & Related papers (2022-07-25T09:57:23Z) - Investigating the Validity of Botometer-based Social Bot Studies [0.0]
Social bots are assumed to be automated social media accounts operated by malicious actors with the goal of manipulating public opinion.
Social bot activity has been reported in many different political contexts, including the U.S. presidential elections.
We point out a fundamental theoretical flaw in the widely-used study design for estimating the prevalence of social bots.
arXiv Detail & Related papers (2022-07-23T09:31:30Z) - Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using an Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
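A minimal sketch of the kind of pipeline this entry describes: gradient boosting on labeled account features, followed by per-feature attribution. The feature names and data are hypothetical; the paper itself uses XGBoost with SHAP, for which scikit-learn's GradientBoostingClassifier and permutation importance serve here as widely available stand-ins.

```python
# Sketch of a supervised bot-detection pipeline (hypothetical data/features;
# the paper uses XGBoost + SHAP; sklearn equivalents are used here instead).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical features: [tweets_per_day, followers_to_friends, account_age]
X = rng.normal(size=(n, 3))
# Synthetic label: "bots" post much more often than humans (feature 0).
y = (X[:, 0] > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")

# Global feature attribution (SHAP gives per-sample attributions; permutation
# importance is a coarser, model-agnostic analogue).
imp = permutation_importance(clf, X_te, y_te, random_state=0)
print("most important feature index:", int(np.argmax(imp.importances_mean)))
```

Because the synthetic label depends only on the posting-rate feature, the attribution step recovers it as the dominant signal, which is the role SHAP plays in the cited study.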
arXiv Detail & Related papers (2021-12-08T14:12:24Z) - CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework whereby several empathetic chatbots are based on understanding users' implied feelings and replying empathetically for multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were fine-tuned with deep reinforcement learning.
To respond empathetically, we develop a simulating agent, a Conceptual Human Model, which aids CheerBots during training by anticipating future changes in the user's emotional state in order to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z) - Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework included a guiding robot and an interlocutor model that plays the role of humans.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z) - Detection of Novel Social Bots by Ensembles of Specialized Classifiers [60.63582690037839]
Malicious actors create inauthentic social media accounts controlled in part by algorithms, known as social bots, to disseminate misinformation and agitate online discussion.
We show that different types of bots are characterized by different behavioral features.
We propose a new supervised learning method that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule.
arXiv Detail & Related papers (2020-06-11T22:59:59Z)
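The ensemble idea in the last entry, specialized classifiers combined by the maximum rule, can be sketched as follows. One classifier is trained per bot class (against humans), and an account is scored by whichever specialist is most confident. The bot classes, features, and data below are synthetic illustrations, not the paper's datasets or exact models.

```python
# Sketch of an ensemble of specialized bot classifiers combined via the
# maximum rule (synthetic data; logistic regression stands in for the
# paper's classifiers).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(center, n=200):
    """Synthetic 2-D features for one bot class, plus humans near the origin."""
    bots = rng.normal(loc=center, scale=0.5, size=(n, 2))
    humans = rng.normal(loc=0.0, scale=0.5, size=(n, 2))
    return np.vstack([bots, humans]), np.array([1] * n + [0] * n)

# One specialist per hypothetical bot type (e.g. spam bots, amplifier bots).
specialists = []
for center in ([3.0, 0.0], [0.0, 3.0]):
    X, y = make_data(np.array(center))
    specialists.append(LogisticRegression().fit(X, y))

def max_rule_score(x):
    """Maximum rule: take the highest bot probability across specialists."""
    return max(clf.predict_proba(x.reshape(1, -1))[0, 1] for clf in specialists)

print(round(max_rule_score(np.array([3.0, 0.1])), 2))  # spam-like: high score
print(round(max_rule_score(np.array([0.0, 0.0])), 2))  # human-like: low score
```

The maximum rule lets each specialist veto in favor of "bot" on its own class, which is why the cited paper finds it generalizes to bot types underrepresented in any single training set.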
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.