Intergenerational Support for Deepfake Scams Targeting Older Adults
- URL: http://arxiv.org/abs/2508.11579v1
- Date: Fri, 15 Aug 2025 16:37:59 GMT
- Title: Intergenerational Support for Deepfake Scams Targeting Older Adults
- Authors: Karina LaRubbio, Alyssa Lanter, Seihyun Lee, Mahima Ramesh, Diana Freed,
- Abstract summary: Deepfake scams produce convincing audio and visual impersonations of trusted family members, often grandchildren, in real time. These attacks fabricate urgent scenarios, such as legal or medical emergencies, to socially engineer older adults into transferring money. This study explores older adults' perceptions of these emerging threats and their responses. We identify opportunities to engage youth as active partners in enhancing resilience across generations.
- Score: 1.3871135653459332
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: AI-enhanced scams now employ deepfake technology to produce convincing audio and visual impersonations of trusted family members, often grandchildren, in real time. These attacks fabricate urgent scenarios, such as legal or medical emergencies, to socially engineer older adults into transferring money. The realism of these AI-generated impersonations undermines traditional cues used to detect fraud, making them a powerful tool for financial exploitation. In this study, we explore older adults' perceptions of these emerging threats and their responses, with a particular focus on the role of youth, who may also be impacted by having their identities exploited, in supporting older family members' online safety. We conducted focus groups with 37 older adults (ages 65+) to examine their understanding of deepfake impersonation scams and the value of intergenerational technology support. Findings suggest that older adults frequently rely on trusted relationships to detect scams and develop protective practices. Based on this, we identify opportunities to engage youth as active partners in enhancing resilience across generations.
Related papers
- Self-Consolidation for Self-Evolving Agents [51.94826934403236]
Large language model (LLM) agents operate as static systems, lacking the ability to evolve through lifelong interaction. We propose a novel self-evolving framework for LLM agents that introduces a complementary evolution mechanism.
arXiv Detail & Related papers (2026-02-02T11:16:07Z) - Experiencer, Helper, or Observer: Online Fraud Intervention for Older Adults Through Role-based Simulation [14.8124073941176]
ROLESafe is an anti-fraud educational intervention in which older adults learn through different roles. In a study with 144 older adults in China, we found that the Experiencer and Helper roles significantly improved participants' ability to identify online fraud.
arXiv Detail & Related papers (2026-01-18T09:15:51Z) - When AI Agents Collude Online: Financial Fraud Risks by Collaborative LLM Agents on Social Platforms [101.2197679948061]
We study the risks of collective financial fraud in large-scale multi-agent systems powered by large language model (LLM) agents. We present MultiAgentFraudBench, a large-scale benchmark for simulating financial fraud scenarios.
arXiv Detail & Related papers (2025-11-09T16:30:44Z) - Exploiting Jailbreaking Vulnerabilities in Generative AI to Bypass Ethical Safeguards for Facilitating Phishing Attacks [0.0]
This study investigates how GenAI-powered services can be exploited via jailbreaking techniques to bypass ethical safeguards. We used ChatGPT 4o Mini, selected for its accessibility and status as the latest publicly available model, as a representative GenAI system. Our findings reveal that the model could successfully guide novice users in executing phishing attacks across various vectors, including web, email, SMS (smishing), and voice (vishing).
arXiv Detail & Related papers (2025-07-16T12:32:46Z) - "It Warned Me Just at the Right Moment": Exploring LLM-based Real-time Detection of Phone Scams [21.992539308179126]
We propose a framework for modeling scam calls and introduce an LLM-based real-time detection approach. We evaluate the method's performance and analyze key factors influencing its effectiveness.
arXiv Detail & Related papers (2025-02-06T10:57:05Z) - Investigating an Intelligent System to Monitor \& Explain Abnormal Activity Patterns of Older Adults [52.40826527071519]
Despite the growing potential of older adult care technologies, their adoption remains challenging. This work conducted a focus-group session with family caregivers to scope designs of older adult care technology. We developed a high-fidelity prototype and conducted a qualitative study with professional caregivers and older adults.
arXiv Detail & Related papers (2025-01-30T03:21:14Z) - Bridging the Protection Gap: Innovative Approaches to Shield Older Adults from AI-Enhanced Scams [0.0]
Numerous indications suggest that scammers are already using AI to enhance proven scam techniques.
This paper explores the future of AI in scams affecting older adults by identifying current vulnerabilities and recommending updated defensive measures.
arXiv Detail & Related papers (2024-09-26T19:46:50Z) - Combating Phone Scams with LLM-based Detection: Where Do We Stand? [1.8979188847659796]
This research explores the potential of large language models (LLMs) to detect fraudulent phone calls.
LLM-based detectors can identify potential scams as they occur, offering immediate protection to users.
arXiv Detail & Related papers (2024-09-18T02:14:30Z) - Deepfake Media Forensics: State of the Art and Challenges Ahead [51.33414186878676]
AI-generated synthetic media, also called Deepfakes, have influenced many domains, from entertainment to cybersecurity.
Deepfake detection has become a vital area of research, focusing on identifying subtle inconsistencies and artifacts with machine learning techniques.
This paper reviews the primary algorithms that address these challenges, examining their advantages, limitations, and future prospects.
arXiv Detail & Related papers (2024-08-01T08:57:47Z) - Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to address this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z) - BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models [54.19289900203071]
The rise in popularity of text-to-image generative artificial intelligence has attracted widespread public interest.
We demonstrate that this technology can be attacked to generate content that subtly manipulates its users.
We propose a Backdoor Attack on text-to-image Generative Models (BAGM).
Our attack is the first to target three popular text-to-image generative models across three stages of the generative process.
arXiv Detail & Related papers (2023-07-31T08:34:24Z) - Fragments of the Past: Curating Peer Support with Perpetrators of Domestic Violence [88.37416552778178]
We report on a ten-month study where we worked with six support workers and eighteen perpetrators in the design and deployment of Fragments of the Past.
We share how crafting digitally-augmented artefacts - 'fragments' - of experiences of desisting from violence can translate messages for motivation and rapport between peers.
These insights provide the basis for practical considerations for future network design with challenging populations.
arXiv Detail & Related papers (2021-07-09T22:57:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.