AI-Powered Spearphishing Cyber Attacks: Fact or Fiction?
- URL: http://arxiv.org/abs/2502.00961v1
- Date: Mon, 03 Feb 2025 00:02:01 GMT
- Title: AI-Powered Spearphishing Cyber Attacks: Fact or Fiction?
- Authors: Matthew Kemp, Harsha Kalutarage, M. Omar Al-Kadri
- Abstract summary: Deepfake technology is capable of replacing the likeness or voice of one individual with another with alarming accuracy.
This paper investigates the threat posed by malicious use of this technology, particularly in the form of spearphishing attacks.
It uses deepfake technology to create spearphishing-like attack scenarios and validates them against average individuals.
- Abstract: Due to society's continuing technological advance, the capabilities of machine learning-based artificial intelligence systems continue to expand and influence a wider range of topics. Alongside this expansion of technology, there is a growing number of individuals willing to misuse these systems to defraud and mislead others. Deepfake technology, a set of deep learning algorithms capable of replacing the likeness or voice of one individual with another with alarming accuracy, is one of these technologies. This paper investigates the threat posed by malicious use of this technology, particularly in the form of spearphishing attacks. It uses deepfake technology to create spearphishing-like attack scenarios and validates them against average individuals. Experimental results show that 66% of participants failed to identify AI-created audio as fake while 43% failed to identify such videos as fake, confirming the growing fear of threats posed by the use of these technologies by cybercriminals.
Related papers
- Deepfake Media Forensics: State of the Art and Challenges Ahead [51.33414186878676]
AI-generated synthetic media, also called Deepfakes, have influenced so many domains, from entertainment to cybersecurity.
Deepfake detection has become a vital area of research, focusing on identifying subtle inconsistencies and artifacts with machine learning techniques.
This paper reviews the primary algorithms that address these challenges, examining their advantages, limitations, and future prospects.
arXiv Detail & Related papers (2024-08-01T08:57:47Z)
- Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2024-05-07T07:57:15Z)
- Factuality Challenges in the Era of Large Language Models [113.3282633305118]
Large Language Models (LLMs) generate false, erroneous, or misleading content.
LLMs can be exploited for malicious applications.
This poses a significant challenge to society in terms of the potential deception of users.
arXiv Detail & Related papers (2023-10-08T14:55:02Z)
- Discussion Paper: The Threat of Real Time Deepfakes [7.714772499501984]
Deepfakes are being used to spread misinformation, enable scams, perform fraud, and blackmail the innocent.
In this paper, we discuss the implications of this emerging threat, identify the challenges with preventing these attacks and suggest a better direction for researching stronger defences.
arXiv Detail & Related papers (2023-06-04T21:40:11Z)
- Deep Fake Detection, Deterrence and Response: Challenges and Opportunities [3.411353611073677]
78% of Canadian organizations experienced at least one successful cyberattack in 2020.
Specialists predict that the global loss from cybercrime will reach 10.5 trillion US dollars annually by 2025.
Deepfakes garnered attention for their potential use in creating fake news, hoaxes, revenge porn, and financial fraud.
arXiv Detail & Related papers (2022-11-26T21:23:30Z)
- Artificial Intelligence for Cybersecurity: Threats, Attacks and Mitigation [1.80476943513092]
The surging menace of cyber-attacks got a jolt from the recent advancements in Artificial Intelligence.
The intervention of AI not only automates a particular task but also improves efficiency many-fold.
This article discusses cybersecurity and cyber threats along with both conventional and intelligent ways of defense against cyber-attacks.
arXiv Detail & Related papers (2022-09-27T15:20:23Z)
- DF-Captcha: A Deepfake Captcha for Preventing Fake Calls [7.714772499501984]
Social engineering (SE) is a form of deception that aims to trick people into giving access to data, information, networks and even money.
Deepfake technology can be deployed in real-time to clone someone's voice in a phone call or reenact a face in a video call.
We propose a lightweight application which can protect organizations and individuals from deepfake SE attacks.
arXiv Detail & Related papers (2022-08-17T20:40:54Z)
- Using Deep Learning to Detecting Deepfakes [0.0]
Deepfakes are videos or images that replace one person's face with another, computer-generated face, often that of a more recognizable person in society.
To combat this online threat, researchers have developed models that are designed to detect deepfakes.
This study looks at various deepfake detection models that use deep learning algorithms to combat this looming threat.
arXiv Detail & Related papers (2022-07-27T17:05:16Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.