CASE: An Agentic AI Framework for Enhancing Scam Intelligence in Digital Payments
- URL: http://arxiv.org/abs/2508.19932v1
- Date: Wed, 27 Aug 2025 14:47:33 GMT
- Title: CASE: An Agentic AI Framework for Enhancing Scam Intelligence in Digital Payments
- Authors: Nitish Jaipuria, Lorenzo Gatto, Zijun Kan, Shankey Poddar, Bill Cheung, Diksha Bansal, Ramanan Balakrishnan, Aviral Suri, Jose Estevez
- Abstract summary: This paper presents CASE (Conversational Agent for Scam Elucidation), a novel Agentic AI framework. A conversational agent is uniquely designed to proactively interview potential victims to elicit intelligence in the form of a detailed conversation. By augmenting our existing features with this new intelligence, we have observed a 21% uplift in the volume of scam enforcements.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The proliferation of digital payment platforms has transformed commerce, offering unmatched convenience and accessibility globally. However, this growth has also attracted malicious actors, leading to a corresponding increase in sophisticated social engineering scams. These scams are often initiated and orchestrated on multiple surfaces outside the payment platform, making user and transaction-based signals insufficient for a complete understanding of the scam's methodology and underlying patterns; without this understanding, timely prevention is very difficult. This paper presents CASE (Conversational Agent for Scam Elucidation), a novel Agentic AI framework that addresses this problem by collecting and managing user scam feedback in a safe and scalable manner. A conversational agent is uniquely designed to proactively interview potential victims to elicit intelligence in the form of a detailed conversation. The conversation transcripts are then consumed by another AI system that extracts information and converts it into structured data for downstream usage in automated and manual enforcement mechanisms. Using Google's Gemini family of LLMs, we implemented this framework on Google Pay (GPay) India. By augmenting our existing features with this new intelligence, we have observed a 21% uplift in the volume of scam enforcements. The architecture and its robust evaluation framework are highly generalizable, offering a blueprint for building similar AI-driven systems to collect and manage scam intelligence in other sensitive domains.
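The abstract describes a two-stage pipeline: a conversational agent interviews the potential victim, and a second AI system converts the transcript into structured data for enforcement. A minimal sketch of that flow is below; the `ScamReport` schema, the scripted questions, and the keyword heuristic standing in for the LLM extractor are all illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ScamReport:
    """Hypothetical structured intelligence extracted from a transcript."""
    scam_type: str
    contact_surface: str          # where the scam was initiated
    payment_requested: bool
    indicators: list = field(default_factory=list)

def interview(responses):
    """Stage 1: the conversational agent elicits details and records
    the exchange as a transcript of (question, answer) pairs."""
    questions = [
        "How did the scammer first contact you?",
        "What did they ask you to do?",
        "Did you send a payment?",
    ]
    return list(zip(questions, responses))

def extract(transcript):
    """Stage 2: a second system converts the transcript into structured
    data for downstream enforcement. A keyword heuristic stands in here
    for the Gemini-based extractor used in the paper."""
    answers = " ".join(a.lower() for _, a in transcript)
    return ScamReport(
        scam_type="job_offer" if "job" in answers else "unknown",
        contact_surface="social_media"
        if "whatsapp" in answers or "telegram" in answers else "unknown",
        payment_requested="paid" in answers or "sent" in answers,
        indicators=[a for _, a in transcript],
    )

report = extract(interview([
    "They messaged me on WhatsApp about a job.",
    "They asked me to pay a registration fee.",
    "Yes, I sent the fee via UPI.",
]))
print(report.scam_type, report.contact_surface, report.payment_requested)
# -> job_offer social_media True
```

The separation matters: the interviewing agent can be tuned for user safety and rapport, while the extractor is evaluated purely on the fidelity of the structured output it feeds to enforcement.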
Related papers
- Anansi: Scalable Characterization of Message-Based Job Scams [4.132349063771989]
Job-based smishing scams represent a rapidly growing and understudied threat within the broader landscape of online fraud. Anansi is the first scalable, end-to-end measurement pipeline designed to systematically engage with, analyze, and characterize job scams in the wild.
arXiv Detail & Related papers (2026-02-27T17:49:56Z) - A Multi-Turn Framework for Evaluating AI Misuse in Fraud and Cybercrime Scenarios [1.1864532555108382]
It is unclear to what extent current large language models can provide useful information for complex criminal activity. We evaluate whether models provide actionable assistance beyond information typically available on the web, as assessed by domain experts. We found that current large language models provide minimal actionable information for fraud and cybercrime without the use of advanced jailbreaking techniques.
arXiv Detail & Related papers (2026-02-25T12:01:38Z) - When AI Agents Collude Online: Financial Fraud Risks by Collaborative LLM Agents on Social Platforms [101.2197679948061]
We study the risks of collective financial fraud in large-scale multi-agent systems powered by large language model (LLM) agents. We present MultiAgentFraudBench, a large-scale benchmark for simulating financial fraud scenarios.
arXiv Detail & Related papers (2025-11-09T16:30:44Z) - EQ-Negotiator: Dynamic Emotional Personas Empower Small Language Models for Edge-Deployable Credit Negotiation [66.09161596959771]
Small language models (SLMs) offer a practical alternative, but suffer from a significant performance gap compared to large language models (LLMs). This paper introduces EQ-Negotiator, a novel framework that bridges this capability gap using emotional personas. We show that a 7B-parameter language model with EQ-Negotiator achieves better debt recovery and negotiation efficiency than baseline LLMs more than 10 times its size.
arXiv Detail & Related papers (2025-11-05T11:25:07Z) - Send to which account? Evaluation of an LLM-based Scambaiting System [0.0]
This paper presents the first large-scale, real-world evaluation of a scambaiting system powered by large language models (LLMs). Over a five-month deployment, the system initiated over 2,600 engagements with actual scammers, resulting in a dataset of more than 18,700 messages. It achieved an Information Disclosure Rate (IDR) of approximately 32%, successfully extracting sensitive financial information such as mule accounts.
arXiv Detail & Related papers (2025-09-10T11:08:52Z) - Throttling Web Agents Using Reasoning Gates [24.00110215260136]
We design a framework that imposes tunable costs on agents before providing access to resources. We introduce Reasoning Gates, synthetic text puzzles that require multi-hop reasoning over world knowledge. Our framework achieves computational asymmetry: for SOTA models, the cost of generating a response to a gate is 9.2x higher than the cost of generating the gate itself.
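The asymmetry described above can be illustrated with a toy gate: producing the puzzle is a few cheap lookups, while answering it requires chaining inferences. The `FACTS` table and the `make_gate`/`check_gate` helpers are purely illustrative assumptions; real Reasoning Gates draw on broad world knowledge rather than a fixed dictionary.

```python
# Toy knowledge base; a real deployment would draw on broad world knowledge.
FACTS = {
    "Paris": "France",
    "France": "Euro",
    "Tokyo": "Japan",
    "Japan": "Yen",
}

def make_gate(start, hops=2):
    """Generate a multi-hop chain puzzle. Producing the gate is a few
    dictionary lookups; answering it forces the agent to reason through
    each hop, which is where the cost asymmetry comes from."""
    node = start
    for _ in range(hops):
        node = FACTS[node]
    question = (f"Starting from {start}, follow the associated-fact chain "
                f"for {hops} steps. What do you reach?")
    return question, node          # (puzzle text, expected answer)

def check_gate(expected, response):
    """Grant access only when the agent's response solves the gate."""
    return response.strip().lower() == expected.lower()

q, answer = make_gate("Paris")     # Paris -> France -> Euro
print(check_gate(answer, "Euro"))  # True
```

Because verification is an exact-match check, the server's cost per request stays constant while the agent's cost scales with the number of hops, giving the defender a tunable throttle.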
arXiv Detail & Related papers (2025-09-01T16:56:16Z) - ScamAgents: How AI Agents Can Simulate Human-Level Scam Calls [0.0]
ScamAgent is an autonomous multi-turn agent built on top of Large Language Models (LLMs). We show that ScamAgent maintains dialogue memory, adapts dynamically to simulated user responses, and employs deceptive persuasion strategies across conversational turns. Our findings highlight an urgent need for multi-turn safety auditing, agent-level control frameworks, and new methods to detect and disrupt conversational deception powered by generative AI.
arXiv Detail & Related papers (2025-08-08T17:01:41Z) - Agentic Web: Weaving the Next Web with AI Agents [109.13815627467514]
The emergence of AI agents powered by large language models (LLMs) marks a pivotal shift toward the Agentic Web. In this paradigm, agents interact directly with one another to plan, coordinate, and execute complex tasks on behalf of users. We present a structured framework for understanding and building the Agentic Web.
arXiv Detail & Related papers (2025-07-28T17:58:12Z) - A Survey of LLM-Driven AI Agent Communication: Protocols, Security Risks, and Defense Countermeasures [59.43633341497526]
Large-Language-Model-driven AI agents have exhibited unprecedented intelligence and adaptability. Agent communication is regarded as a foundational pillar of the future AI ecosystem. This paper presents a comprehensive survey of agent communication security.
arXiv Detail & Related papers (2025-06-24T14:44:28Z) - Among Us: A Sandbox for Measuring and Detecting Agentic Deception [1.1893676124374688]
We introduce *Among Us*, a social deception game where language-based agents exhibit long-term, open-ended deception. We find that models trained with RL are comparatively much better at producing deception than detecting it. We also find two SAE features that work well at deception detection but are unable to steer the model to lie less.
arXiv Detail & Related papers (2025-04-05T06:09:32Z) - Real AI Agents with Fake Memories: Fatal Context Manipulation Attacks on Web3 Agents [36.49717045080722]
This paper investigates the vulnerabilities of AI agents within blockchain-based financial ecosystems when exposed to adversarial threats in real-world scenarios. We introduce the concept of context manipulation -- a comprehensive attack vector that exploits unprotected context surfaces. Using ElizaOS, we showcase that malicious injections into prompts or historical records can trigger unauthorized asset transfers and protocol violations.
arXiv Detail & Related papers (2025-03-20T15:44:31Z) - BounTCHA: A CAPTCHA Utilizing Boundary Identification in Guided Generative AI-extended Videos [4.873950690073118]
Bots have increasingly been able to bypass most existing CAPTCHA systems, posing significant security threats to web applications. We design and implement BounTCHA, a CAPTCHA mechanism that leverages human perception of boundaries in video transitions and disruptions. We develop a prototype and conduct experiments to collect data on humans' time biases in boundary identification.
arXiv Detail & Related papers (2025-01-30T18:38:09Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z) - BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models [54.19289900203071]
The rise in popularity of text-to-image generative artificial intelligence has attracted widespread public interest.
We demonstrate that this technology can be attacked to generate content that subtly manipulates its users.
We propose a Backdoor Attack on text-to-image Generative Models (BAGM).
Our attack is the first to target three popular text-to-image generative models across three stages of the generative process.
arXiv Detail & Related papers (2023-07-31T08:34:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.