PsyScam: A Benchmark for Psychological Techniques in Real-World Scams
- URL: http://arxiv.org/abs/2505.15017v1
- Date: Wed, 21 May 2025 01:55:04 GMT
- Title: PsyScam: A Benchmark for Psychological Techniques in Real-World Scams
- Authors: Shang Ma, Tianyi Ma, Jiahao Liu, Wei Song, Zhenkai Liang, Xusheng Xiao, Yanfang Ye
- Abstract summary: PsyScam is a benchmark designed to systematically capture and evaluate psychological techniques used by online scammers. It bridges psychology and real-world cyber security analysis by collecting a wide range of scam reports. Experimental results show that PsyScam presents significant challenges to existing models in both detecting and generating scam content.
- Score: 25.92268107663186
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online scams have become increasingly prevalent, with scammers using psychological techniques (PTs) to manipulate victims. While existing research has developed benchmarks to study scammer behaviors, these benchmarks do not adequately reflect the PTs observed in real-world scams. To fill this gap, we introduce PsyScam, a benchmark designed to systematically capture and evaluate PTs embedded in real-world scam reports. In particular, PsyScam bridges psychology and real-world cyber security analysis through collecting a wide range of scam reports from six public platforms and grounding its annotations in well-established cognitive and psychological theories. We further demonstrate PsyScam's utility through three downstream tasks: PT classification, scam completion, and scam augmentation. Experimental results show that PsyScam presents significant challenges to existing models in both detecting and generating scam content based on the PTs used by real-world scammers. Our code and dataset are available at: https://anonymous.4open.science/r/PsyScam-66E4.
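One of the downstream tasks the abstract names is PT classification: labeling a scam message with the psychological techniques (PTs) it employs. A minimal keyword-cue baseline can sketch what such a classifier does; the PT labels and cue phrases below are illustrative assumptions, not PsyScam's actual taxonomy or method.

```python
# Hypothetical PT (psychological technique) classification baseline.
# The labels and cue phrases are illustrative, not PsyScam's taxonomy.

PT_CUES = {
    "urgency": ["act now", "immediately", "expires", "last chance"],
    "authority": ["irs", "police", "bank security", "official notice"],
    "scarcity": ["limited", "only a few", "exclusive offer"],
}

def classify_pt(message: str) -> list[str]:
    """Return every PT whose cue phrases appear in the message."""
    text = message.lower()
    return [pt for pt, cues in PT_CUES.items()
            if any(cue in text for cue in cues)]

scam = "This is the IRS. Pay immediately or face arrest."
print(classify_pt(scam))  # cues for both urgency and authority match
```

A real evaluation on the benchmark would replace the cue lookup with a trained or prompted model, but the input/output contract (scam text in, PT labels out) stays the same.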
Related papers
- AI persuading AI vs AI persuading Humans: LLMs' Differential Effectiveness in Promoting Pro-Environmental Behavior [70.24245082578167]
Pro-environmental behavior (PEB) is vital to combat climate change, yet turning awareness into intention and action remains elusive. We explore large language models (LLMs) as tools to promote PEB, comparing their impact across 3,200 participants. Results reveal a "synthetic persuasion paradox": synthetic and simulated agents significantly shift their post-intervention PEB stance, while human responses barely change.
arXiv Detail & Related papers (2025-03-03T21:40:55Z) - "It Warned Me Just at the Right Moment": Exploring LLM-based Real-time Detection of Phone Scams [21.992539308179126]
We propose a framework for modeling scam calls and introduce an LLM-based real-time detection approach. We evaluate the method's performance and analyze key factors influencing its effectiveness.
arXiv Detail & Related papers (2025-02-06T10:57:05Z) - Exposing LLM Vulnerabilities: Adversarial Scam Detection and Performance [16.9071617169937]
This paper investigates the vulnerabilities of Large Language Models (LLMs) when facing adversarial scam messages for the task of scam detection. We created a comprehensive dataset with fine-grained labels of scam messages, including both original and adversarial scam messages. Our analysis shows how adversarial examples exploit the vulnerabilities of an LLM, leading to a high misclassification rate.
arXiv Detail & Related papers (2024-12-01T00:13:28Z) - Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research we propose geometric-fakeness features (GFF) that characterize the dynamic degree of face presence in a video.
We apply our approach to videos in which multiple faces are present simultaneously.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - Combating Phone Scams with LLM-based Detection: Where Do We Stand? [1.8979188847659796]
This research explores the potential of large language models (LLMs) to provide detection of fraudulent phone calls.
LLMs-based detectors can identify potential scams as they occur, offering immediate protection to users.
arXiv Detail & Related papers (2024-09-18T02:14:30Z) - Automatic Scam-Baiting Using ChatGPT [0.46040036610482665]
We report on the results of a month-long experiment comparing the effectiveness of two ChatGPT-based automatic scam-baiters to a control measure.
With engagement from over 250 real email fraudsters, we find that ChatGPT-based scam-baiters show a marked increase in scammer response rate and conversation length.
We discuss the implications of these results and practical considerations for wider deployment of automatic scam-baiting.
arXiv Detail & Related papers (2023-09-04T13:13:35Z) - Tainted Love: A Systematic Review of Online Romance Fraud [68.8204255655161]
Romance fraud involves cybercriminals engineering a romantic relationship on online dating platforms.
We characterise the landscape of the literature on romance fraud, advancing the understanding of researchers and practitioners.
Three main contributions were identified: profiles of romance scams, countermeasures for mitigating romance scams, and factors that predispose an individual to become a scammer or a victim.
arXiv Detail & Related papers (2023-02-28T20:34:07Z) - Combat AI With AI: Counteract Machine-Generated Fake Restaurant Reviews on Social Media [77.34726150561087]
We propose to leverage the high-quality elite Yelp reviews to generate fake reviews from the OpenAI GPT review creator.
We apply the model to predict non-elite reviews and identify the patterns across several dimensions.
We show that social media platforms are continuously challenged by machine-generated fake reviews.
arXiv Detail & Related papers (2023-02-10T19:40:10Z) - Recent trends in Social Engineering Scams and Case study of Gift Card Scam [4.345672405192058]
Social engineering scams (SES) have existed since humankind adopted telecommunications.
We review recent trends in social engineering scams targeting innocent people all over the world.
We present a case study of a real-time gift card scam targeting customers of various enterprise organizations.
arXiv Detail & Related papers (2021-10-13T04:17:02Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.