ScamChatBot: An End-to-End Analysis of Fake Account Recovery on Social Media via Chatbots
- URL: http://arxiv.org/abs/2412.15072v1
- Date: Thu, 19 Dec 2024 17:22:35 GMT
- Title: ScamChatBot: An End-to-End Analysis of Fake Account Recovery on Social Media via Chatbots
- Authors: Bhupendra Acharya, Dominik Sautter, Muhammad Saad, Thorsten Holz,
- Abstract summary: This study focuses on scammers engaging in fake technical support to target users who are having problems recovering their accounts.
The main contribution of our work is the development of an automated system that interacts with scammers.
Our results show that scammers employ many social media profiles asking users to contact them via a few communication channels.
This automated approach highlights how scammers use a variety of strategies, including role-playing, to trick victims into disclosing personal or financial information.
- Score: 18.10200822118935
- License:
- Abstract: Social media platforms have become the hubs for various user interactions covering a wide range of needs, including technical support and services related to brands, products, or user accounts. Unfortunately, there has been a recent surge in scammers impersonating official services and providing fake technical support to users through these platforms. In this study, we focus on scammers engaging in such fake technical support to target users who are having problems recovering their accounts. More specifically, we focus on users encountering access problems with social media profiles (e.g., on platforms such as Facebook, Instagram, Gmail, and X) and cryptocurrency wallets. The main contribution of our work is the development of an automated system that interacts with scammers via a chatbot that mimics different personas. By initiating decoy interactions (e.g., through deceptive tweets), we have enticed scammers to interact with our system so that we can analyze their modus operandi. Our results show that scammers employ many social media profiles asking users to contact them via a few communication channels. Using a large language model (LLM), our chatbot had conversations with 450 scammers and provided valuable insights into their tactics and, most importantly, their payment profiles. This automated approach highlights how scammers use a variety of strategies, including role-playing, to trick victims into disclosing personal or financial information. With this study, we lay the foundation for using automated chat-based interactions with scammers to detect and study fraudulent activities at scale in an automated way.
Related papers
- RICoTA: Red-teaming of In-the-wild Conversation with Test Attempts [6.0385743836962025]
RICoTA is a Korean red teaming dataset that consists of 609 prompts challenging large language models (LLMs)
We utilize user-chatbot conversations that were self-posted on a Korean Reddit-like community.
Our dataset will be made publicly available via GitHub.
arXiv Detail & Related papers (2025-01-29T15:32:27Z) - Pirates of Charity: Exploring Donation-based Abuses in Social Media Platforms [15.45960607413968]
We conduct a large-scale analysis of donation-based scams on social media platforms.
We identified 832 scammers using various techniques to deceive users into making fraudulent donations.
Our study highlights significant weaknesses in social media platforms' ability to protect users from fraudulent donations.
arXiv Detail & Related papers (2024-12-20T07:26:43Z) - WildChat: 1M ChatGPT Interaction Logs in the Wild [88.05964311416717]
WildChat is a corpus of 1 million user-ChatGPT conversations, which consists of over 2.5 million interaction turns.
In addition to timestamped chat transcripts, we enrich the dataset with demographic data, including state, country, and hashed IP addresses.
arXiv Detail & Related papers (2024-05-02T17:00:02Z) - Conning the Crypto Conman: End-to-End Analysis of Cryptocurrency-based Technical Support Scams [19.802676243375615]
There is an increase in an emerging fraud trend called cryptocurrency-based technical support scam.
We present an analysis apparatus called HoneyTweet to analyze this kind of scam.
arXiv Detail & Related papers (2024-01-18T09:31:45Z) - Unsupervised detection of coordinated fake-follower campaigns on social
media [1.3035246321276739]
We present a novel unsupervised detection method designed to target a specific category of malicious accounts.
Our framework identifies anomalous following patterns among all the followers of a social media account.
We find that these detected groups of anomalous followers exhibit consistent behavior across multiple accounts.
arXiv Detail & Related papers (2023-10-31T12:30:29Z) - Active Countermeasures for Email Fraud [2.6856688022781556]
Scam-baiters play the roles of victims, reply to scammers, and try to waste their time and attention with long and unproductive conversations.
We developed and deployed an expandable scam-baiting mailserver that can conduct scam-baiting activities automatically.
arXiv Detail & Related papers (2022-10-26T21:20:13Z) - Neural Generation Meets Real People: Building a Social, Informative
Open-Domain Dialogue Agent [65.68144111226626]
Chirpy Cardinal aims to be both informative and conversational.
We let both the user and bot take turns driving the conversation.
Chirpy Cardinal placed second out of nine bots in the Alexa Prize Socialbot Grand Challenge.
arXiv Detail & Related papers (2022-07-25T09:57:23Z) - Identification of Twitter Bots based on an Explainable ML Framework: the
US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
Supervised machine learning (ML) framework is adopted using an Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
arXiv Detail & Related papers (2021-12-08T14:12:24Z) - Uncovering the Dark Side of Telegram: Fakes, Clones, Scams, and
Conspiracy Movements [67.39353554498636]
We perform a large-scale analysis of Telegram by collecting 35,382 different channels and over 130,000,000 messages.
We find some of the infamous activities also present on privacy-preserving services of the Dark Web, such as carding.
We propose a machine learning model that is able to identify fake channels with an accuracy of 86%.
arXiv Detail & Related papers (2021-11-26T14:53:31Z) - Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn
Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework included a guiding robot and an interlocutor model that plays the role of humans.
We examined our framework using three experimental setups and evaluate the guiding robot with four different metrics to demonstrated flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z) - CASS: Towards Building a Social-Support Chatbot for Online Health
Community [67.45813419121603]
The CASS architecture is based on advanced neural network algorithms.
It can handle new inputs from users and generate a variety of responses to them.
With a follow-up field experiment, CASS is proven useful in supporting individual members who seek emotional support.
arXiv Detail & Related papers (2021-01-04T05:52:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.