"It Warned Me Just at the Right Moment": Exploring LLM-based Real-time Detection of Phone Scams
- URL: http://arxiv.org/abs/2502.03964v1
- Date: Thu, 06 Feb 2025 10:57:05 GMT
- Title: "It Warned Me Just at the Right Moment": Exploring LLM-based Real-time Detection of Phone Scams
- Authors: Zitong Shen, Sineng Yan, Youqian Zhang, Xiapu Luo, Grace Ngai, Eugene Yujun Fu
- Abstract summary: We propose a framework for modeling scam calls and introduce an LLM-based real-time detection approach.
We evaluate the method's performance and analyze key factors influencing its effectiveness.
- Score: 21.992539308179126
- Abstract: Even in the internet era, phone-based scams remain one of the most prevalent forms of fraud. These scams aim to exploit victims for financial gain, causing both monetary losses and psychological distress. While governments, industries, and academia have actively introduced various countermeasures, scammers continue to evolve their tactics, making phone scams a persistent threat. To combat these increasingly sophisticated scams, detection technologies must also advance. In this work, we propose a framework for modeling scam calls and introduce an LLM-based real-time detection approach, which assesses fraudulent intent in ongoing conversations and provides immediate warnings to users to mitigate harm. Through experiments, we evaluate the method's performance and analyze key factors influencing its effectiveness. This analysis enables us to refine the method to improve precision while exploring the trade-off between recall and timeliness, paving the way for future directions in this critical area of research.
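The pipeline the abstract describes (incremental transcript, turn-by-turn fraud-intent assessment, immediate warning) can be illustrated with a short sketch. This is a minimal illustration assuming an OpenAI-style chat API; the prompt, model name, and warning threshold below are placeholders, not the authors' actual implementation.

```python
# Minimal sketch of an LLM-based real-time scam-call detector, loosely
# following the pipeline described in the abstract: incremental transcript
# -> fraud-intent assessment -> immediate user warning. The prompt, model,
# and threshold are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a phone-scam detector. Given the transcript of an ongoing "
    "call, answer with a single number from 0 to 100: the probability "
    "that the caller has fraudulent intent."
)

WARN_THRESHOLD = 70  # illustrative; the paper tunes precision/recall/timeliness


def assess_fraud_intent(transcript: str) -> float:
    """Ask the LLM to score fraudulent intent in the conversation so far."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    try:
        return float(response.choices[0].message.content.strip())
    except ValueError:
        return 0.0  # treat unparsable answers as "no evidence yet"


def monitor_call(utterances):
    """Re-assess after every new utterance and warn as soon as risk is high."""
    transcript = ""
    for turn, text in enumerate(utterances, start=1):
        transcript += text + "\n"
        score = assess_fraud_intent(transcript)
        if score >= WARN_THRESHOLD:
            print(f"[WARNING after turn {turn}] Possible scam (score={score:.0f}).")
            return
    print("Call ended with no warning issued.")


if __name__ == "__main__":
    monitor_call([
        "Caller: Hello, this is your bank's security department.",
        "Caller: We detected suspicious activity; please read me the code we just sent you.",
    ])
```

Re-scoring the full transcript after every turn is the simplest design that captures the paper's recall-versus-timeliness trade-off: warning earlier means judging on less context.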
Related papers
- Adversarial Alignment for LLMs Requires Simpler, Reproducible, and More Measurable Objectives [52.863024096759816]
Misaligned research objectives have hindered progress in adversarial robustness research over the past decade.
We argue that realigned objectives are necessary for meaningful progress in adversarial alignment.
arXiv Detail & Related papers (2025-02-17T15:28:40Z)
- Adversarial Reasoning at Jailbreaking Time [49.70772424278124]
We develop an adversarial reasoning approach to automatic jailbreaking via test-time computation.
Our approach introduces a new paradigm in understanding LLM vulnerabilities, laying the foundation for the development of more robust and trustworthy AI systems.
arXiv Detail & Related papers (2025-02-03T18:59:01Z)
- Exposing LLM Vulnerabilities: Adversarial Scam Detection and Performance [16.9071617169937]
This paper investigates the vulnerabilities of Large Language Models (LLMs) when facing adversarial scam messages for the task of scam detection.
We created a comprehensive dataset with fine-grained labels of scam messages, including both original and adversarial scam messages.
Our analysis showed how adversarial examples exploited the vulnerabilities of an LLM, leading to a high misclassification rate. A minimal sketch of this kind of robustness check appears after this list.
arXiv Detail & Related papers (2024-12-01T00:13:28Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- Application of AI-based Models for Online Fraud Detection and Analysis [1.764243259740255]
We conduct a Systematic Literature Review on AI and NLP techniques for online fraud detection.
We report the state-of-the-art NLP techniques for analysing various online fraud categories.
We identify issues in data limitations, training bias reporting, and selective presentation of metrics in model performance reporting.
arXiv Detail & Related papers (2024-09-25T14:47:03Z)
- Combating Phone Scams with LLM-based Detection: Where Do We Stand? [1.8979188847659796]
This research explores the potential of large language models (LLMs) to provide detection of fraudulent phone calls.
LLM-based detectors can identify potential scams as they occur, offering immediate protection to users.
arXiv Detail & Related papers (2024-09-18T02:14:30Z)
- The Anatomy of Deception: Technical and Human Perspectives on a Large-scale Phishing Campaign [4.369550829556578]
This study takes an unprecedented deep dive into large-scale phishing campaigns aimed at Meta's users.
Analysing data from over 25,000 victims worldwide, we highlight the nuances of these campaigns.
Through the application of advanced computational techniques, including natural language processing and machine learning, this work unveils critical insights into the psyche of victims.
arXiv Detail & Related papers (2023-10-05T12:24:24Z)
- Designing an attack-defense game: how to increase robustness of financial transaction models via a competition [69.08339915577206]
Given the escalating risks of malicious attacks in the finance sector, understanding adversarial strategies and robust defense mechanisms for machine learning models is critical.
We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that use sequential financial data as the input.
We have designed a competition that allows realistic and detailed investigation of problems in modern financial transaction data.
The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions.
arXiv Detail & Related papers (2023-08-22T12:53:09Z)
- Re-thinking Data Availability Attacks Against Deep Neural Networks [53.64624167867274]
In this paper, we re-examine the concept of unlearnable examples and discern that the existing robust error-minimizing noise presents an inaccurate optimization objective.
We introduce a novel optimization paradigm that yields improved protection results with reduced computational time requirements.
arXiv Detail & Related papers (2023-05-18T04:03:51Z)
- Understanding Underground Incentivized Review Services [26.402818153734035]
We study review fraud on e-commerce platforms through an HCI lens.
We uncover sophisticated recruitment, execution, and reporting mechanisms fraudsters use to scale their operation.
Countermeasures that crack down on communication channels through which these services operate are effective in combating incentivized reviews.
arXiv Detail & Related papers (2021-01-20T05:30:14Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (User and Entity Behaviour Analytics, UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
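As a companion to the "Exposing LLM Vulnerabilities" entry above, the following minimal sketch shows the kind of robustness check that paper describes: classifying original and adversarially rephrased scam messages with the same LLM prompt and comparing miss rates. The prompt, model choice, and toy messages are illustrative assumptions, not the paper's dataset or code.

```python
# Hedged sketch of an adversarial robustness check for an LLM scam
# detector: the same YES/NO classifier is run on original scam messages
# and on adversarially rephrased variants, and the miss rates compared.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = "Is the following message a scam? Answer only YES or NO.\n\n{msg}"


def is_flagged_as_scam(message: str) -> bool:
    """Return True if the LLM labels the message as a scam."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": PROMPT.format(msg=message)}],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")


def miss_rate(scam_messages):
    """Fraction of true scam messages the detector fails to flag."""
    misses = sum(not is_flagged_as_scam(m) for m in scam_messages)
    return misses / len(scam_messages)


# Toy examples: the adversarial variant rephrases the same lure to evade
# obvious cues such as urgency and explicit credential requests.
original = ["URGENT: your account is locked, send your PIN to restore access."]
adversarial = ["Hi! Quick favour: the bank app needs your 4-digit code to finish setup."]

print("original miss rate   :", miss_rate(original))
print("adversarial miss rate:", miss_rate(adversarial))
```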
This list is automatically generated from the titles and abstracts of the papers on this site.