Discussion Paper: The Threat of Real Time Deepfakes
- URL: http://arxiv.org/abs/2306.02487v1
- Date: Sun, 4 Jun 2023 21:40:11 GMT
- Title: Discussion Paper: The Threat of Real Time Deepfakes
- Authors: Guy Frankovits and Yisroel Mirsky
- Abstract summary: Deepfakes are being used to spread misinformation, enable scams, perform fraud, and blackmail the innocent.
In this paper, we discuss the implications of this emerging threat, identify the challenges with preventing these attacks and suggest a better direction for researching stronger defences.
- Score: 7.714772499501984
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative deep learning models are able to create realistic audio and video.
This technology has been used to impersonate the faces and voices of
individuals. These "deepfakes" are being used to spread misinformation,
enable scams, perform fraud, and blackmail the innocent. The technology
continues to advance and today attackers have the ability to generate deepfakes
in real-time. This new capability poses a significant threat to society as
attackers begin to exploit the technology in advanced social engineering
attacks. In this paper, we discuss the implications of this emerging threat,
identify the challenges with preventing these attacks and suggest a better
direction for researching stronger defences.
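To make the real-time constraint concrete, here is a minimal sketch of a frame-by-frame screening loop for a live video stream. The capture library, the stub per-frame classifier, the smoothing window, and the roughly 33 ms frame budget are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of real-time deepfake screening on a live video stream.
# Assumptions (not from the paper): OpenCV for capture, a stub per-frame
# classifier `score_frame`, and a simple moving-average smoother.
import collections
import time

import cv2
import numpy as np


def score_frame(frame_bgr: np.ndarray) -> float:
    """Placeholder per-frame fakeness score in [0, 1].

    In practice this would be a trained detector; here it is a stub so the
    control loop itself can run end to end.
    """
    return float(np.random.rand())


def screen_call(camera_index: int = 0, window: int = 30, threshold: float = 0.7):
    cap = cv2.VideoCapture(camera_index)
    recent = collections.deque(maxlen=window)  # temporal smoothing buffer
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            start = time.perf_counter()
            recent.append(score_frame(frame))
            smoothed = sum(recent) / len(recent)
            if smoothed > threshold:
                print("warning: stream flagged as possibly synthetic")
            # A real-time defence must fit inside the frame budget (~33 ms at 30 fps).
            elapsed_ms = (time.perf_counter() - start) * 1000
            if elapsed_ms > 33:
                print(f"detector too slow for real time: {elapsed_ms:.1f} ms/frame")
    finally:
        cap.release()


if __name__ == "__main__":
    screen_call()
```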
Related papers
- AI-Powered Spearphishing Cyber Attacks: Fact or Fiction? [0.0]
Deepfake technology is capable of replacing the likeness or voice of one individual with another with alarming accuracy.
This paper investigates the threat posed by malicious use of this technology, particularly in the form of spearphishing attacks.
It uses deepfake technology to create spearphishing-like attack scenarios and validate them against average individuals.
arXiv Detail & Related papers (2025-02-03T00:02:01Z) - Hiding Faces in Plain Sight: Defending DeepFakes by Disrupting Face Detection [56.289631511616975]
This paper investigates the feasibility of a proactive DeepFake defense framework, FacePoison, to prevent individuals from becoming victims of DeepFake videos.
Based on FacePoison, we introduce VideoFacePoison, a strategy that propagates FacePoison across video frames rather than applying it individually to each frame.
Our method is validated on five face detectors, and extensive experiments against eleven different DeepFake models demonstrate the effectiveness of disrupting face detectors to hinder DeepFake generation.
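The abstract does not spell out the perturbation method; the sketch below only illustrates the general idea of disrupting a (differentiable) face detector with a one-step gradient perturbation and reusing that perturbation across neighbouring frames. The stand-in detector, the FGSM-style step, and the epsilon value are assumptions, not the FacePoison algorithm.

```python
# Illustrative sketch (not the paper's algorithm): suppress a differentiable
# face detector's confidence with a one-step gradient perturbation, then reuse
# the perturbation on nearby frames instead of recomputing it per frame.
import torch
import torch.nn as nn

# Stand-in "face detector": any differentiable model mapping an image to a
# face-presence confidence; a real attack would target an actual detector.
detector = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1), nn.Sigmoid(),
)


def face_poison(frame: torch.Tensor, eps: float = 4 / 255) -> torch.Tensor:
    """Return an additive perturbation that lowers detector confidence on `frame`."""
    frame = frame.clone().requires_grad_(True)
    confidence = detector(frame.unsqueeze(0)).squeeze()
    confidence.backward()  # gradient of the confidence w.r.t. the pixels
    # Step *against* the gradient to push the confidence down (FGSM-style).
    return -eps * frame.grad.sign()


def video_face_poison(frames: list, refresh_every: int = 5) -> list:
    """Propagate one perturbation across several frames, refreshing periodically."""
    poisoned, delta = [], None
    for i, frame in enumerate(frames):
        if delta is None or i % refresh_every == 0:
            delta = face_poison(frame)
        poisoned.append((frame + delta).clamp(0, 1))
    return poisoned


if __name__ == "__main__":
    clip = [torch.rand(3, 64, 64) for _ in range(10)]  # dummy 10-frame clip
    protected = video_face_poison(clip)
    print(len(protected), protected[0].shape)
```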
arXiv Detail & Related papers (2024-12-02T04:17:48Z) - Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research, we propose geometric-fakeness features (GFF), which characterize the dynamic degree of face presence in a video.
We employ our approach to analyze videos with multiple faces that are simultaneously present in a video.
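The published GFF definition is not reproduced here; as a rough illustration of geometry-based temporal features, the sketch below computes simple landmark-distance ratios per frame, summarizes their stability across a clip, and trains an ordinary classifier on the result. The specific ratios and the logistic-regression classifier are stand-ins.

```python
# Rough illustration of geometry-based temporal features for deepfake detection.
# The ratios and classifier below are stand-ins, not the paper's GFF.
import numpy as np
from sklearn.linear_model import LogisticRegression


def frame_geometry(landmarks: np.ndarray) -> np.ndarray:
    """Per-frame geometric measurements from (N, 2) facial landmarks."""
    eye_dist = np.linalg.norm(landmarks[0] - landmarks[1])   # inter-ocular distance
    mouth_w = np.linalg.norm(landmarks[2] - landmarks[3])    # mouth width
    face_h = np.linalg.norm(landmarks[4] - landmarks[5])     # chin-to-brow height
    return np.array([mouth_w / eye_dist, face_h / eye_dist])


def video_features(landmark_seq: np.ndarray) -> np.ndarray:
    """Summarize how stable the per-frame geometry is across a video (T, N, 2)."""
    per_frame = np.stack([frame_geometry(f) for f in landmark_seq])
    # Genuine faces tend to vary smoothly; manipulated ones often jitter.
    return np.concatenate([per_frame.mean(0), per_frame.std(0),
                           np.abs(np.diff(per_frame, axis=0)).mean(0)])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 50 "videos" of 30 frames with 6 landmarks each, plus labels.
    videos = rng.random((50, 30, 6, 2))
    labels = rng.integers(0, 2, size=50)
    X = np.stack([video_features(v) for v in videos])
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print("toy training accuracy:", clf.score(X, labels))
```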
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - Deepfake Media Forensics: State of the Art and Challenges Ahead [51.33414186878676]
AI-generated synthetic media, also called Deepfakes, have impacted many domains, from entertainment to cybersecurity.
Deepfake detection has become a vital area of research, focusing on identifying subtle inconsistencies and artifacts with machine learning techniques.
This paper reviews the primary algorithms that address these challenges, examining their advantages, limitations, and future prospects.
arXiv Detail & Related papers (2024-08-01T08:57:47Z) - Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to address this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z) - Identifying and Mitigating the Security Risks of Generative AI [179.2384121957896]
This paper reports the findings of a workshop held at Google on the dual-use dilemma posed by GenAI.
GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
We discuss short-term and long-term goals for the community on this topic.
arXiv Detail & Related papers (2023-08-28T18:51:09Z) - Deep Fake Detection, Deterrence and Response: Challenges and Opportunities [3.411353611073677]
78% of Canadian organizations experienced at least one successful cyberattack in 2020.
Specialists predict that the global loss from cybercrime will reach 10.5 trillion US dollars annually by 2025.
Deepfakes garnered attention for their potential use in creating fake news, hoaxes, revenge porn, and financial fraud.
arXiv Detail & Related papers (2022-11-26T21:23:30Z) - DF-Captcha: A Deepfake Captcha for Preventing Fake Calls [7.714772499501984]
Social engineering (SE) is a form of deception that aims to trick people into giving access to data, information, networks and even money.
Deepfake technology can be deployed in real-time to clone someone's voice in a phone call or reenact a face in a video call.
We propose a lightweight application which can protect organizations and individuals from deepfake SE attacks.
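The application itself is only summarized above; the sketch below illustrates the general challenge-response idea behind such a defence: issue a random task that is cheap for a live human but hard for a real-time deepfake pipeline to render convincingly before a deadline. The challenge list, the deadline, and the verification stub are illustrative assumptions.

```python
# Illustrative challenge-response check for a live call, in the spirit of a
# deepfake "captcha". The challenges, deadline, and verifier are assumptions.
import random
import time

CHALLENGES = [
    "turn your head slowly to the left",
    "cover part of your face with your hand",
    "repeat this phrase: 'purple elephants dance at noon'",
]


def verify_response(challenge: str, media_clip: bytes) -> bool:
    """Placeholder verifier; a real system would analyse the clip for artifacts."""
    return bool(media_clip)  # stub


def run_challenge(get_response, deadline_s: float = 5.0) -> bool:
    """Issue a random challenge and accept only a prompt, verified response."""
    challenge = random.choice(CHALLENGES)
    print(f"challenge issued: {challenge}")
    start = time.monotonic()
    clip = get_response(challenge)  # e.g. capture a short clip from the call
    took = time.monotonic() - start
    if took > deadline_s:
        print("response too slow; treat the caller as suspicious")
        return False
    return verify_response(challenge, clip)


if __name__ == "__main__":
    # Dummy responder that "answers" instantly with a canned clip.
    ok = run_challenge(lambda c: b"\x00" * 1024)
    print("caller passed:", ok)
```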
arXiv Detail & Related papers (2022-08-17T20:40:54Z) - Using Deep Learning to Detecting Deepfakes [0.0]
Deepfakes are videos or images that replace one person's face with another, computer-generated face, often that of a more recognizable person in society.
To combat this online threat, researchers have developed models that are designed to detect deepfakes.
This study looks at various deepfake detection models that use deep learning algorithms to combat this looming threat.
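For context, detection models of this kind are typically binary classifiers over face crops; the sketch below defines a small convolutional example in PyTorch with one toy training step. The architecture, input size, and hyperparameters are generic assumptions rather than any specific model from the study.

```python
# Generic example of the kind of model such studies compare: a small binary
# CNN that classifies a face crop as real or fake. Architecture is illustrative.
import torch
import torch.nn as nn


class TinyDeepfakeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # single logit: fake vs. real

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


if __name__ == "__main__":
    model = TinyDeepfakeCNN()
    loss_fn = nn.BCEWithLogitsLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # One toy optimisation step on random data, just to show the training-loop shape.
    faces = torch.rand(8, 3, 128, 128)
    labels = torch.randint(0, 2, (8, 1)).float()
    loss = loss_fn(model(faces), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print("toy loss:", loss.item())
```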
arXiv Detail & Related papers (2022-07-27T17:05:16Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
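The abstract gives only the outline of the attack; as a generic illustration of substitute-model transfer, the sketch below perturbs an image so that a locally held reconstruction model degrades, in the hope that the perturbation also disrupts the inaccessible black-box face-swapping model. The tiny autoencoder (its training is omitted), the single gradient step, and the epsilon are assumptions, not the paper's construction.

```python
# Generic illustration of a substitute-model transfer attack: perturb an image
# so a local reconstruction model degrades, hoping the damage transfers to an
# inaccessible black-box face-swapping model. All choices here are assumptions.
import torch
import torch.nn as nn

# Substitute model: a tiny convolutional autoencoder standing in for one
# trained on face reconstruction (training omitted in this sketch).
substitute = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)


def craft_transfer_example(image: torch.Tensor, eps: float = 8 / 255) -> torch.Tensor:
    """One gradient step that raises the substitute's reconstruction error."""
    image = image.clone().requires_grad_(True)
    recon_loss = nn.functional.mse_loss(substitute(image.unsqueeze(0)),
                                        image.unsqueeze(0).detach())
    recon_loss.backward()
    # Ascend the reconstruction loss; the target model is never queried.
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()


if __name__ == "__main__":
    face = torch.rand(3, 64, 64)
    protected = craft_transfer_example(face)
    # `protected` would be shared instead of `face`.
    print(protected.shape, float((protected - face).abs().max()))
```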
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - The Emerging Threats of Deepfake Attacks and Countermeasures [0.0]
Deepfake technology (DT) has reached a new level of sophistication.
The paper highlights the threats that deepfakes present to businesses, politics, and judicial systems worldwide.
arXiv Detail & Related papers (2020-12-14T22:40:49Z)