Love, Lies, and Language Models: Investigating AI's Role in Romance-Baiting Scams
- URL: http://arxiv.org/abs/2512.16280v2
- Date: Mon, 22 Dec 2025 10:37:08 GMT
- Title: Love, Lies, and Language Models: Investigating AI's Role in Romance-Baiting Scams
- Authors: Gilad Gressel, Rahul Pankajakshan, Shir Rozenfeld, Ling Li, Ivan Franceschini, Krishnashree Achuthan, Yisroel Mirsky
- Abstract summary: Romance-baiting scams are run by organized crime syndicates that traffic thousands of people into forced labor. Because the scams are inherently text-based, they raise urgent questions about the role of Large Language Models (LLMs) in both current and future automation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Romance-baiting scams have become a major source of financial and emotional harm worldwide. These operations are run by organized crime syndicates that traffic thousands of people into forced labor, requiring them to build emotional intimacy with victims over weeks of text conversations before pressuring them into fraudulent cryptocurrency investments. Because the scams are inherently text-based, they raise urgent questions about the role of Large Language Models (LLMs) in both current and future automation. We investigate this intersection by interviewing 145 insiders and 5 scam victims, running a blinded long-term conversation study comparing LLM scam agents to human operators, and evaluating commercial safety filters. Our findings show that LLMs are already widely deployed within scam organizations, with 87% of scam labor consisting of systematized conversational tasks readily susceptible to automation. In a week-long study, an LLM agent not only elicited greater trust from study participants (p=0.007) but also achieved higher compliance with requests than human operators (46% vs. 18%). Meanwhile, popular safety filters detected 0.0% of romance-baiting dialogues. Together, these results suggest that romance-baiting scams may be amenable to full-scale LLM automation, while existing defenses remain inadequate to prevent their expansion.
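The abstract's headline defensive finding (0.0% detection) comes from replaying scam dialogues through commercial safety filters. As a rough illustration, here is a minimal sketch of what such an evaluation loop could look like, assuming dialogues are stored as plain-text files and using OpenAI's moderation endpoint as a stand-in for a commercial filter; the model name, file layout, and flagging criterion are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the kind of safety-filter evaluation the abstract
# describes: replay each romance-baiting dialogue through a moderation
# filter and report the fraction that gets flagged. The file layout,
# the choice of OpenAI's moderation endpoint as the "commercial filter",
# and the model name are illustrative assumptions, not details from the paper.
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def dialogue_flagged(dialogue_text: str) -> bool:
    """Return True if the moderation filter flags the dialogue."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=dialogue_text,
    )
    return response.results[0].flagged


def detection_rate(dialogue_dir: str) -> float:
    """Fraction of dialogues in a directory that the filter flags."""
    paths = sorted(Path(dialogue_dir).glob("*.txt"))
    if not paths:
        raise ValueError(f"no dialogues found in {dialogue_dir!r}")
    flagged = sum(dialogue_flagged(p.read_text()) for p in paths)
    return flagged / len(paths)


if __name__ == "__main__":
    # The paper reports 0.0% here for the filters it tested.
    print(f"detection rate: {detection_rate('dialogues'):.1%}")
```

A faithful replication would also need per-turn scoring and each vendor's actual filter, but the aggregate detection rate computed here is the quantity the abstract reports.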
Related papers
- When AI Agents Collude Online: Financial Fraud Risks by Collaborative LLM Agents on Social Platforms
We study the risks of collective financial fraud in large-scale multi-agent systems powered by large language model (LLM) agents. We present MultiAgentFraudBench, a large-scale benchmark for simulating financial fraud scenarios.
arXiv Detail & Related papers (2025-11-09T16:30:44Z)
- Victim as a Service: Designing a System for Engaging with Interactive Scammers
We describe the motivation, design, implementation, and experience with CHATTERBOX, an LLM-based system that automates long-term engagement with online scammers. We describe the techniques we have developed to attract scam attempts, the system and LLM engineering required to convincingly engage with scammers, and the capabilities needed to satisfy or evade "milestones" in scammers' workflows.
arXiv Detail & Related papers (2025-10-27T23:19:29Z)
- Evaluating & Reducing Deceptive Dialogue From Language Models with Multi-turn RL
Large Language Models (LLMs) interact with millions of people worldwide in applications such as customer support, education, and healthcare. Their ability to produce deceptive outputs, whether intentionally or inadvertently, poses significant safety concerns. We investigate the extent to which LLMs engage in deception within dialogue, and propose the belief misalignment metric to quantify deception.
arXiv Detail & Related papers (2025-10-16T05:29:36Z)
- ScamAgents: How AI Agents Can Simulate Human-Level Scam Calls
ScamAgent is an autonomous multi-turn agent built on top of Large Language Models (LLMs). We show that ScamAgent maintains dialogue memory, adapts dynamically to simulated user responses, and employs deceptive persuasion strategies across conversational turns. Our findings highlight an urgent need for multi-turn safety auditing, agent-level control frameworks, and new methods to detect and disrupt conversational deception powered by generative AI.
arXiv Detail & Related papers (2025-08-08T17:01:41Z)
- Promoting Online Safety by Simulating Unsafe Conversations with LLMs
Large language models (LLMs) have the potential -- and are already being used -- to increase the speed, scale, and types of unsafe conversations online. In our current work, we explore ways to promote online safety by teaching people about unsafe conversations that can occur online with and without LLMs.
arXiv Detail & Related papers (2025-07-29T22:38:21Z)
- Exposing LLM Vulnerabilities: Adversarial Scam Detection and Performance
This paper investigates the vulnerabilities of Large Language Models (LLMs) when facing adversarial scam messages in the task of scam detection. We created a comprehensive dataset with fine-grained labels of scam messages, including both original and adversarial scam messages. Our analysis showed how adversarial examples took advantage of the vulnerabilities of an LLM, leading to a high misclassification rate.
arXiv Detail & Related papers (2024-12-01T00:13:28Z)
- Combating Phone Scams with LLM-based Detection: Where Do We Stand?
This research explores the potential of large language models (LLMs) to provide detection of fraudulent phone calls.
LLM-based detectors can identify potential scams as they occur, offering immediate protection to users.
arXiv Detail & Related papers (2024-09-18T02:14:30Z)
- The Emerged Security and Privacy of LLM Agent: A Survey with Case Studies
Large Language Model (LLM) agents have evolved to perform complex tasks. The widespread applications of LLM agents demonstrate their significant commercial value; however, they also expose security and privacy vulnerabilities. This survey aims to provide a comprehensive overview of the newly emerged privacy and security issues faced by LLM agents.
arXiv Detail & Related papers (2024-07-28T00:26:24Z)
- The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative
Multimodal Large Language Models (MLLMs) are constantly defining the new boundary of Artificial General Intelligence (AGI).
Our paper explores a novel vulnerability in MLLM societies - the indirect propagation of malicious content.
arXiv Detail & Related papers (2024-02-20T23:08:21Z)
- How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs
This paper introduces a new perspective to jailbreak large language models (LLMs) as human-like communicators.
We apply a persuasion taxonomy derived from decades of social science research to generate persuasive adversarial prompts (PAP) to jailbreak LLMs.
PAP consistently achieves an attack success rate of over 92% on Llama 2-7b Chat, GPT-3.5, and GPT-4 in 10 trials.
On the defense side, we explore various mechanisms against PAP and find a significant gap in existing defenses.
arXiv Detail & Related papers (2024-01-12T16:13:24Z)
- Harnessing the Power of LLMs: Evaluating Human-AI Text Co-Creation through the Lens of News Headline Generation
This study explores how humans can best leverage LLMs for writing and how interacting with these models affects feelings of ownership and trust in the writing process.
While LLMs alone can, on average, generate satisfactory news headlines, human control is needed to fix undesirable model outputs.
arXiv Detail & Related papers (2023-10-16T15:11:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.