BotNet Detection On Social Media
- URL: http://arxiv.org/abs/2110.05661v1
- Date: Tue, 12 Oct 2021 00:38:51 GMT
- Title: BotNet Detection On Social Media
- Authors: Aniket Chandrakant Devle, Julia Ann Jose, Abhay Shrinivas
Saraswathula, Shubham Mehta, Siddhant Srivastava, Sirisha Kona, Sudheera
Daggumalli
- Abstract summary: Social media has become an open playground for user (bot) accounts that try to manipulate other users of these platforms.
There is evidence of bots manipulating election results, which poses a serious threat to entire nations and, by extension, the world.
Our goal is to leverage semantic web mining techniques to identify bot or fake accounts involved in these activities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Given the popularity of social media and its reputation as a platform
for free speech, it has become an open playground for user (bot) accounts that
try to manipulate other users. Social bots not only learn human conversation,
mannerisms, and online presence, but also manipulate public opinion, act as
scammers, manipulate stock markets, and more. There is evidence of bots
manipulating election results, which poses a serious threat to entire nations
and, by extension, the world. Identifying and preventing the campaigns that
create and release these bots has therefore become critical to tackling the
problem at its source. Our goal is to leverage semantic web mining techniques
to identify bot or fake accounts involved in these activities.
Related papers
- Social Media Bot Policies: Evaluating Passive and Active Enforcement [0.0]
Multimodal Foundation Models (MFMs) may enable malicious actors to exploit online users.
We examined the bot and content policies of eight popular social media platforms: X (formerly Twitter), Instagram, Facebook, Threads, TikTok, Mastodon, Reddit, and LinkedIn.
Our findings indicate significant vulnerabilities within the current enforcement mechanisms of these platforms.
arXiv Detail & Related papers (2024-09-27T17:28:25Z)
- Entendre, a Social Bot Detection Tool for Niche, Fringe, and Extreme Social Media [1.4913052010438639]
We introduce Entendre, an open-access, scalable, and platform-agnostic bot detection framework.
We exploit the idea that most social platforms share a generic template, where users can post content, approve content, and provide a bio.
To demonstrate Entendre's effectiveness, we used it to explore the presence of bots among accounts posting racist content on the now-defunct right-wing platform Parler.
arXiv Detail & Related papers (2024-08-13T13:50:49Z)
- Sleeper Social Bots: a new generation of AI disinformation bots are already a political threat [0.0]
"Sleeper social bots" are AI-driven social bots created to spread disinformation and manipulate public opinion.
Preliminary findings suggest these bots can convincingly pass as human users, actively participate in conversations, and effectively disseminate disinformation.
The implications of our research point to the significant challenges posed by social bots in the upcoming 2024 U.S. presidential election and beyond.
arXiv Detail & Related papers (2024-08-07T19:57:10Z)
- Unmasking Social Bots: How Confident Are We? [41.94295877935867]
We propose to address both bot detection and the quantification of uncertainty at the account level.
This dual focus is crucial as it allows us to leverage additional information related to the quantified uncertainty of each prediction.
Specifically, our approach facilitates targeted interventions for bots when predictions are made with high confidence and suggests caution (e.g., gathering more data) when predictions are uncertain.
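The triage logic described in this abstract, acting on confident predictions and gathering more data for uncertain ones, can be sketched as follows. This is a minimal illustration of the general idea, not the paper's actual method: it uses a plain logistic-regression classifier on synthetic data and predictive entropy as a simple uncertainty proxy, all of which are assumptions made for the sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for account-level features (not real bot data).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

proba = clf.predict_proba(X)[:, 1]  # P(account is a bot)

# Predictive entropy as a simple uncertainty proxy (maximal at p = 0.5).
eps = 1e-12
entropy = -(proba * np.log2(proba + eps) + (1 - proba) * np.log2(1 - proba + eps))

CONF = 0.2  # illustrative entropy threshold below which we trust the prediction
for p, h in zip(proba[:5], entropy[:5]):
    if h < CONF:
        action = "intervene" if p > 0.5 else "ignore"
    else:
        action = "gather more data"
    print(f"p_bot={p:.2f}  entropy={h:.2f}  -> {action}")
```

Any probabilistic classifier could be substituted; the key design choice is routing accounts by a per-prediction uncertainty measure rather than by the predicted label alone.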
arXiv Detail & Related papers (2024-07-18T22:33:52Z)
- My Brother Helps Me: Node Injection Based Adversarial Attack on Social Bot Detection [69.99192868521564]
Social platforms such as Twitter are under siege from a multitude of fraudulent users.
Due to the structure of social networks, most detection methods are based on graph neural networks (GNNs), which are susceptible to adversarial attacks.
We propose a node injection-based adversarial attack method designed to deceive bot detection models.
arXiv Detail & Related papers (2023-10-11T03:09:48Z)
- You are a Bot! -- Studying the Development of Bot Accusations on Twitter [1.7626250599622473]
In the absence of ground truth data, researchers may want to tap into the wisdom of the crowd.
Our research presents the first large-scale study of bot accusations on Twitter.
It shows how the term bot became an instrument of dehumanization in social media conversations.
arXiv Detail & Related papers (2023-02-01T16:09:11Z)
- Investigating the Validity of Botometer-based Social Bot Studies [0.0]
Social bots are assumed to be automated social media accounts operated by malicious actors with the goal of manipulating public opinion.
Social bot activity has been reported in many different political contexts, including the U.S. presidential elections.
We point out a fundamental theoretical flaw in the widely-used study design for estimating the prevalence of social bots.
arXiv Detail & Related papers (2022-07-23T09:31:30Z)
- Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using an Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
arXiv Detail & Related papers (2021-12-08T14:12:24Z)
- Uncovering the Dark Side of Telegram: Fakes, Clones, Scams, and Conspiracy Movements [67.39353554498636]
We perform a large-scale analysis of Telegram by collecting 35,382 different channels and over 130,000,000 messages.
We find some of the infamous activities, such as carding, that are also present on privacy-preserving services of the Dark Web.
We propose a machine learning model that is able to identify fake channels with an accuracy of 86%.
arXiv Detail & Related papers (2021-11-26T14:53:31Z)
- CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework in which several empathetic chatbots understand users' implied feelings and reply empathetically over multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and are fine-tuned with deep reinforcement learning.
To respond empathetically, we develop a simulated agent, the Conceptual Human Model, that aids CheerBots during training by accounting for future changes in the user's emotional state in order to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z)
- Detection of Novel Social Bots by Ensembles of Specialized Classifiers [60.63582690037839]
Malicious actors create inauthentic social media accounts controlled in part by algorithms, known as social bots, to disseminate misinformation and agitate online discussion.
We show that different types of bots are characterized by different behavioral features.
We propose a new supervised learning method that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule.
arXiv Detail & Related papers (2020-06-11T22:59:59Z)
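The ensemble scheme in the last entry, one specialized classifier per bot class, with decisions combined through the maximum rule, can be sketched as follows. This is a minimal illustration under stated assumptions: logistic regressions on synthetic clusters stand in for the paper's specialized classifiers, and the two bot classes ("spam bots", "fake followers") are hypothetical examples.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_per = 150
# Synthetic features: humans near the origin; each bot class in its own region.
humans = rng.normal(0.0, 1.0, size=(n_per, 4))
spam_bots = rng.normal([3, 0, 0, 0], 1.0, size=(n_per, 4))
fake_followers = rng.normal([0, 3, 0, 0], 1.0, size=(n_per, 4))

# Train one specialized binary classifier per bot class (that class vs humans).
specialists = []
for bots in (spam_bots, fake_followers):
    X = np.vstack([humans, bots])
    y = np.r_[np.zeros(n_per), np.ones(n_per)]
    specialists.append(LogisticRegression().fit(X, y))

def bot_score(accounts):
    """Maximum rule: an account is suspicious if ANY specialist is confident."""
    probs = np.column_stack(
        [c.predict_proba(accounts)[:, 1] for c in specialists])
    return probs.max(axis=1)

test_accounts = np.vstack([humans[:5], spam_bots[:5], fake_followers[:5]])
print(np.round(bot_score(test_accounts), 2))
```

The maximum rule lets each specialist stay sensitive to its own bot class without being diluted by the others, which is the motivation the abstract gives for training per-class classifiers.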
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.