Bridging the Protection Gap: Innovative Approaches to Shield Older Adults from AI-Enhanced Scams
- URL: http://arxiv.org/abs/2409.18249v1
- Date: Thu, 26 Sep 2024 19:46:50 GMT
- Title: Bridging the Protection Gap: Innovative Approaches to Shield Older Adults from AI-Enhanced Scams
- Authors: LD Herrera, London Van Sickle, Ashley Podhradsky
- Abstract summary: Numerous indications suggest that scammers are already using AI to enhance already successful scams.
This paper explores the future of AI in scams affecting older adults by identifying current vulnerabilities and recommending updated defensive measures.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) is rapidly gaining popularity as individuals, groups, and organizations discover and apply its expanding capabilities. Generative AI creates or alters various content types, including text, image, audio, and video, that are realistic and challenging to identify as AI-generated constructs. However, guardrails preventing malicious use of AI are easily bypassed. Numerous indications suggest that scammers are already using AI to enhance already successful scams, improving scam effectiveness, speed, and credibility, while reducing detectability of scams that target older adults, who are known to be slow to adopt new technologies. Through hypothetical case analysis of two leading scams, the tech support scam and the romance scam, this paper explores the future of AI in scams affecting older adults by identifying current vulnerabilities and recommending updated defensive measures, focused on establishing a reliable support network that offers elevated support to increase confidence and ability to defend against AI-enhanced scams.
Related papers
- Trojans in Artificial Intelligence (TrojAI) Final Report [52.6138928911574]
TrojAI was launched to confront an emerging vulnerability in modern artificial intelligence: the threat of AI Trojans. TrojAI helped to map out the complex nature of the threat and pioneered foundational detection methods. The report concludes with lessons learned and recommendations for advancing AI security research.
arXiv Detail & Related papers (2026-02-06T19:52:14Z) - Intergenerational Support for Deepfake Scams Targeting Older Adults [1.3871135653459332]
Deepfake scams produce convincing audio and visual impersonations of trusted family members, often grandchildren, in real time. These attacks fabricate urgent scenarios, such as legal or medical emergencies, to socially engineer older adults into transferring money. This study explores older adults' perceptions of these emerging threats and their responses. We identify opportunities to engage youth as active partners in enhancing resilience across generations.
arXiv Detail & Related papers (2025-08-15T16:37:59Z) - Exploiting Jailbreaking Vulnerabilities in Generative AI to Bypass Ethical Safeguards for Facilitating Phishing Attacks [0.0]
This study investigates how GenAI-powered services can be exploited via jailbreaking techniques to bypass ethical safeguards. We used ChatGPT 4o Mini, selected for its accessibility and status as the latest publicly available model, as a representative GenAI system. Our findings reveal that the model could successfully guide novice users in executing phishing attacks across various vectors, including web, email, SMS (smishing), and voice (vishing).
arXiv Detail & Related papers (2025-07-16T12:32:46Z) - Superintelligence Strategy: Expert Version [64.7113737051525]
Destabilizing AI developments could raise the odds of great-power conflict.
Superintelligence -- AI vastly better than humans at nearly all cognitive tasks -- is now anticipated by AI researchers.
We introduce the concept of Mutual Assured AI Malfunction.
arXiv Detail & Related papers (2025-03-07T17:53:24Z) - AI versus AI in Financial Crimes and Detection: GenAI Crime Waves to Co-Evolutionary AI [5.93311361936097]
GenAI has a transformative effect on financial crimes and fraud.
As crime patterns become more intricate, personalized, and elusive, deploying effective defensive AI strategies becomes indispensable.
This paper examines the latest trends in AI/ML-driven financial crimes and detection systems.
arXiv Detail & Related papers (2024-09-30T15:41:41Z) - Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks [0.0]
This paper delves into the escalating threat posed by the misuse of AI, specifically through the use of Large Language Models (LLMs).
Through a series of controlled experiments, the paper demonstrates how these models can be manipulated to bypass ethical and privacy safeguards to effectively generate cyber attacks.
We also introduce Occupy AI, a customized, finetuned LLM specifically engineered to automate and execute cyberattacks.
arXiv Detail & Related papers (2024-08-23T02:56:13Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - Decoding the Threat Landscape : ChatGPT, FraudGPT, and WormGPT in Social Engineering Attacks [0.0]
Generative AI models have revolutionized the field of cyberattacks, empowering malicious actors to craft convincing and personalized phishing lures.
These models, ChatGPT, FraudGPT, and WormGPT, have augmented existing threats and ushered in new dimensions of risk.
To counter these threats, we outline a range of strategies, including traditional security measures, AI-powered security solutions, and collaborative approaches in cybersecurity.
arXiv Detail & Related papers (2023-10-09T10:31:04Z) - The Manipulation Problem: Conversational AI as a Threat to Epistemic Agency [0.0]
The technology of Conversational AI has made significant advancements over the last eighteen months.
Conversational agents are likely to be deployed in the near future that are designed to pursue targeted influence objectives.
Sometimes referred to as the "AI Manipulation Problem," the emerging risk is that consumers will unwittingly engage in real-time dialog with predatory AI agents.
arXiv Detail & Related papers (2023-06-19T04:09:16Z) - AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Security and Privacy for Artificial Intelligence: Opportunities and Challenges [11.368470074697747]
In recent years, most AI models are vulnerable to advanced and sophisticated hacking techniques.
This challenge has motivated concerted research efforts into adversarial AI.
We present a holistic cyber security review that demonstrates adversarial attacks against AI applications.
arXiv Detail & Related papers (2021-02-09T06:06:13Z) - Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.