Anatomy of an AI-powered malicious social botnet
- URL: http://arxiv.org/abs/2307.16336v1
- Date: Sun, 30 Jul 2023 23:06:06 GMT
- Title: Anatomy of an AI-powered malicious social botnet
- Authors: Kai-Cheng Yang and Filippo Menczer
- Abstract summary: This paper presents a study about a Twitter botnet that appears to employ ChatGPT to generate human-like content.
We identify 1,140 accounts and validate them via manual annotation.
ChatGPT-generated content promotes suspicious websites and spreads harmful comments.
- Score: 6.147741269183294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) exhibit impressive capabilities in generating
realistic text across diverse subjects. Concerns have been raised that they
could be utilized to produce fake content with deceptive intent, although
evidence thus far remains anecdotal. This paper presents a case study about a
Twitter botnet that appears to employ ChatGPT to generate human-like content.
Through heuristics, we identify 1,140 accounts and validate them via manual
annotation. These accounts form a dense cluster of fake personas that exhibit
similar behaviors, including posting machine-generated content and stolen
images, and that engage with each other through replies and retweets.
ChatGPT-generated content promotes suspicious websites and spreads harmful
comments. While the accounts in the AI botnet can be detected through their
coordination patterns, current state-of-the-art LLM content classifiers fail to
discriminate between them and human accounts in the wild. These findings
highlight the threats posed by AI-enabled social bots.
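A minimal sketch of one such detection heuristic, assuming a simple (account_id, text) tweet schema: flag accounts whose posts leak self-revealing ChatGPT refusal phrases. The schema and phrase list here are illustrative assumptions, not the authors' exact rules.
```python
# Sketch of a keyword heuristic for surfacing candidate LLM-powered bot
# accounts. The (account_id, text) schema and the phrase list are
# illustrative assumptions, not the paper's exact detection rules.

SELF_REVEALING_PHRASES = (
    "as an ai language model",            # ChatGPT refusal text leaking into tweets
    "i cannot comply with this request",
)

def flag_candidate_bots(tweets):
    """Return account ids whose tweets contain a self-revealing LLM phrase.

    tweets: iterable of (account_id, text) pairs (hypothetical schema).
    """
    suspects = set()
    for account_id, text in tweets:
        lowered = text.lower()
        if any(phrase in lowered for phrase in SELF_REVEALING_PHRASES):
            suspects.add(account_id)
    return suspects

# Example: one tweet leaks a refusal, so its account is flagged.
sample = [
    ("u1", "Great weather today!"),
    ("u2", "As an AI language model, I cannot browse the internet."),
]
print(flag_candidate_bots(sample))  # {'u2'}
```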
Related papers
- Evaluating and Mitigating IP Infringement in Visual Generative AI [54.24196167576133]
State-of-the-art visual generative models can generate content that bears a striking resemblance to characters protected by intellectual property rights.
This happens when the input prompt contains the character's name or even just descriptive details about their characteristics.
We develop a revised generation paradigm that can identify potentially infringing generated content and prevent IP infringement.
arXiv Detail & Related papers (2024-06-07T06:14:18Z)
- Adversarial Botometer: Adversarial Analysis for Social Bot Detection [1.9280536006736573]
Social bots produce content that mimics human creativity.
Malicious social bots emerge to deceive people with unrealistic content.
We evaluate the behavior of a text-based bot detector in a competitive environment.
arXiv Detail & Related papers (2024-05-03T11:28:21Z)
- AbuseGPT: Abuse of Generative AI ChatBots to Create Smishing Campaigns [0.0]
We propose the AbuseGPT method to show how existing generative AI-based chatbots can be exploited by attackers in the real world to create smishing texts.
We found strong empirical evidence that attackers can circumvent the ethical safeguards of existing generative AI-based chatbot services.
We also discuss future research directions and guidelines to protect against the abuse of generative AI-based services.
arXiv Detail & Related papers (2024-02-15T05:49:22Z)
- Understanding writing style in social media with a supervised contrastively pre-trained transformer [57.48690310135374]
Online Social Networks serve as fertile ground for harmful behavior, ranging from hate speech to the dissemination of disinformation.
We introduce the Style Transformer for Authorship Representations (STAR), trained on a large corpus of 4.5 × 10^6 authored texts derived from public sources.
Using a support base of 8 documents of 512 tokens each, we can discern authors from sets of up to 1616 authors with at least 80% accuracy.
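To make that evaluation setup concrete, here is a minimal sketch assuming a style encoder such as STAR has already mapped each document to a vector: an author's 8 support documents are averaged into a centroid, and a query document is attributed to the nearest centroid by cosine similarity. The embedding dimensionality and data are placeholders, not the paper's setup.
```python
import numpy as np

# Support-based authorship attribution, assuming documents are already
# embedded by a style encoder. Dimensions and data are placeholders.
rng = np.random.default_rng(0)
n_authors, n_support, dim = 1616, 8, 768  # 8 support docs per author

# support[a] holds the embeddings of author a's support documents.
support = rng.normal(size=(n_authors, n_support, dim))
centroids = support.mean(axis=1)                      # one profile per author
centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)

def attribute(query_embedding):
    """Return the index of the author whose centroid is most similar."""
    q = query_embedding / np.linalg.norm(query_embedding)
    return int(np.argmax(centroids @ q))              # cosine similarity

query = support[42].mean(axis=0) + 0.01 * rng.normal(size=dim)
print(attribute(query))  # 42 for this synthetic query
```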
arXiv Detail & Related papers (2023-10-17T09:01:17Z)
- From Online Behaviours to Images: A Novel Approach to Social Bot Detection [0.3867363075280544]
A particular type of social media account is known to promote disreputable, hyperpartisan, and propagandistic content.
We propose a novel approach to bot detection: a new algorithm that transforms the sequence of actions an account performs into an image.
We compare our performance with state-of-the-art bot detection results on well-known datasets of genuine and bot accounts.
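One plausible encoding of an action sequence as an image, sketched below: rows index action types, columns index consecutive actions, and each column one-hot marks what the account did. This is an illustrative encoding, not necessarily the algorithm proposed in the paper.
```python
import numpy as np

# Turn an account's action sequence into a small grayscale image that a
# CNN-based bot classifier could consume. Illustrative encoding only.
ACTIONS = {"tweet": 0, "retweet": 1, "reply": 2, "like": 3}

def actions_to_image(sequence, width=64):
    img = np.zeros((len(ACTIONS), width), dtype=np.uint8)
    for t, action in enumerate(sequence[:width]):
        img[ACTIONS[action], t] = 255  # white pixel marks the action taken
    return img

seq = ["tweet", "retweet", "retweet", "reply", "like", "tweet"]
print(actions_to_image(seq).shape)  # (4, 64)
```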
arXiv Detail & Related papers (2023-04-15T11:36:50Z)
- Can AI-Generated Text be Reliably Detected? [54.670136179857344]
Unregulated use of LLMs can potentially lead to malicious consequences such as plagiarism, generating fake news, spamming, etc.
Recent works attempt to tackle this problem either using certain model signatures present in the generated text outputs or by applying watermarking techniques.
In this paper, we show that these detectors are not reliable in practical scenarios.
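The core argument can be sketched as a simple attack loop: repeatedly paraphrase machine-generated text until a detector's score drops below its threshold. Both `detector_score` and `paraphrase` below are hypothetical stand-ins, not functions from the paper or any specific library.
```python
# Sketch of a paraphrasing attack on an AI-text detector. Both helpers are
# hypothetical stand-ins: `detector_score` for any detector returning
# P(machine-generated), `paraphrase` for a neural paraphrasing model.

def evade_detector(text, detector_score, paraphrase,
                   threshold=0.5, max_rounds=5):
    """Paraphrase `text` until the detector score falls below `threshold`."""
    for _ in range(max_rounds):
        if detector_score(text) < threshold:
            return text  # detector now treats the text as human-written
        text = paraphrase(text)
    return None  # evasion failed within the round budget
```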
arXiv Detail & Related papers (2023-03-17T17:53:19Z)
- Verifying the Robustness of Automatic Credibility Assessment [79.08422736721764]
Text classification methods have been widely investigated as a way to detect content of low credibility.
In some cases, insignificant changes to the input text can mislead the models.
We introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
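A minimal sketch of the kind of robustness test such a benchmark enables: apply tiny character-level edits and count how often a victim classifier's label flips. The `classify` function is a hypothetical stand-in for any credibility model, and this perturbation is one simple example rather than BODEGA's actual attack suite.
```python
import random

# Measure how often an 'insignificant' character swap flips a classifier's
# label. `classify` is a hypothetical stand-in for any victim model.

def perturb(text, rng):
    """Swap two adjacent characters -- a tiny, meaning-preserving edit."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

def flip_rate(texts, classify, seed=0):
    rng = random.Random(seed)
    flips = sum(classify(t) != classify(perturb(t, rng)) for t in texts)
    return flips / len(texts)  # fraction of labels changed by a tiny edit
```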
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using the Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
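A minimal sketch of this pipeline, with synthetic features standing in for the labeled Twitter account features (follower counts, posting rates, etc.) used in the study:
```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# XGBoost bot/human classifier explained with SHAP. Synthetic data stands
# in for the paper's labeled Twitter account features.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

model = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
model.fit(X, y)

# SHAP values attribute each prediction to individual account features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Rank features by mean absolute contribution across all accounts.
importance = np.abs(shap_values).mean(axis=0)
print(np.argsort(importance)[::-1])  # most influential feature indices first
```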
arXiv Detail & Related papers (2021-12-08T14:12:24Z)
- TweepFake: about Detecting Deepfake Tweets [3.3482093430607254]
Deep neural models can generate coherent, non-trivial and human-like text samples.
Social bots can write plausible deepfake messages, hoping to contaminate public debate.
We collect the first dataset of real deepfake tweets, TweepFake.
arXiv Detail & Related papers (2020-07-31T19:01:13Z)
- Detection of Novel Social Bots by Ensembles of Specialized Classifiers [60.63582690037839]
Malicious actors create inauthentic social media accounts controlled in part by algorithms, known as social bots, to disseminate misinformation and agitate online discussion.
We show that different types of bots are characterized by different behavioral features.
We propose a new supervised learning method that trains classifiers specialized for each class of bots and combines their decisions through the maximum rule.
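A minimal sketch of the specialized-ensemble idea: one classifier per bot class, each trained on that class versus humans, combined with the maximum rule. Random forests and the synthetic feature data are stand-ins for the paper's actual models and datasets.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One specialist per bot class, combined via the maximum rule.
rng = np.random.default_rng(0)

def make_data(n, shift):
    return rng.normal(loc=shift, size=(n, 5))  # 5 behavioral features

humans = make_data(200, 0.0)
bot_classes = [make_data(200, 1.0), make_data(200, -1.0)]  # e.g. spam, political

specialists = []
for bots in bot_classes:
    X = np.vstack([humans, bots])
    y = np.r_[np.zeros(len(humans)), np.ones(len(bots))]
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    specialists.append(clf)

def bot_score(accounts):
    """Maximum rule: an account is as bot-like as its most confident specialist."""
    scores = [clf.predict_proba(accounts)[:, 1] for clf in specialists]
    return np.max(scores, axis=0)

print(bot_score(make_data(3, 1.0)).round(2))  # high scores for bot-like accounts
```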
arXiv Detail & Related papers (2020-06-11T22:59:59Z)
- Twitter Bot Detection Using Bidirectional Long Short-term Memory Neural Networks and Word Embeddings [6.09170287691728]
This paper develops a recurrent neural model with word embeddings to distinguish Twitter bots from human accounts.
Experiments show that our approach can achieve competitive performance compared with existing state-of-the-art bot detection systems.
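A minimal PyTorch sketch in the spirit of this architecture: word embeddings feed a bidirectional LSTM whose final states are pooled into a bot/human logit. Vocabulary size and dimensions are placeholders, not the paper's configuration.
```python
import torch
import torch.nn as nn

# BiLSTM tweet classifier: embeddings -> bidirectional LSTM -> logit.
class BiLSTMBotDetector(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, 1)  # forward + backward states

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)    # (batch, seq, embed_dim)
        _, (hidden, _) = self.lstm(embedded)    # hidden: (2, batch, hidden_dim)
        pooled = torch.cat([hidden[0], hidden[1]], dim=1)
        return self.head(pooled).squeeze(1)     # bot/human logit per tweet

tokens = torch.randint(0, 10_000, (4, 30))      # batch of 4 tokenized tweets
print(BiLSTMBotDetector()(tokens).shape)        # torch.Size([4])
```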
arXiv Detail & Related papers (2020-02-03T17:07:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.