Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to
Adversarial Examples
- URL: http://arxiv.org/abs/2002.12749v3
- Date: Sat, 7 Nov 2020 22:09:38 GMT
- Title: Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to
Adversarial Examples
- Authors: Shehzeen Hussain, Paarth Neekhara, Malhar Jere, Farinaz Koushanfar and
Julian McAuley
- Abstract summary: Recent advances in video manipulation techniques have made the generation of fake videos more accessible than ever before.
Manipulated videos can fuel disinformation and reduce trust in media.
Recently developed Deepfake detection methods rely on deep neural networks (DNNs) to distinguish AI-generated fake videos from real videos.
- Score: 23.695497512694068
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in video manipulation techniques have made the generation of
fake videos more accessible than ever before. Manipulated videos can fuel
disinformation and reduce trust in media. Therefore, detection of fake videos
has garnered immense interest in academia and industry. Recently developed
Deepfake detection methods rely on deep neural networks (DNNs) to distinguish
AI-generated fake videos from real videos. In this work, we demonstrate that it
is possible to bypass such detectors by adversarially modifying fake videos
synthesized using existing Deepfake generation methods. We further demonstrate
that our adversarial perturbations are robust to image and video compression
codecs, making them a real-world threat. We present pipelines in both white-box
and black-box attack scenarios that can fool DNN-based Deepfake detectors into
classifying fake videos as real.
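The white-box attack the abstract describes can be illustrated with a minimal FGSM-style sketch. Everything below is a hypothetical stand-in: a linear "detector" replaces the paper's DNN, and the frame, weights, and epsilon are toy values chosen only to show the mechanics of gradient-sign perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)

def detector_prob_fake(x, w, b):
    """Probability that frame x is fake, under a toy linear detector.

    Stand-in for a real CNN-based Deepfake detector.
    """
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_attack(x, w, b, eps):
    """One white-box FGSM step: nudge x to LOWER its 'fake' score.

    For a linear model the gradient of the fake-logit w.r.t. x is just w,
    so we step against its sign and clip back to the valid pixel range.
    The per-pixel change is bounded by eps, keeping the edit imperceptible.
    """
    grad = w                       # d(logit)/dx for the linear detector
    x_adv = x - eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy "fake" frame, biased along the detector's weights so it scores as fake.
d = 64
w = rng.normal(size=d)
b = 0.0
x_fake = np.clip(0.5 + 0.05 * np.sign(w) + 0.1 * rng.normal(size=d), 0.0, 1.0)

p_before = detector_prob_fake(x_fake, w, b)
x_adv = fgsm_attack(x_fake, w, b, eps=0.06)
p_after = detector_prob_fake(x_adv, w, b)
# p_after < p_before: the perturbed fake frame now looks "more real".
```

The paper's pipelines apply this idea per frame against real DNN detectors (with gradient estimation in the black-box case) and add robustness to compression, which this sketch omits.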
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of face presence in a video.
We employ our approach to analyze videos with multiple faces that are simultaneously present in a video.
arXiv Detail & Related papers (2024-10-10T13:10:34Z)
- Comparative Analysis of Deep-Fake Algorithms [0.0]
Deepfakes, also known as deep learning-based fake videos, have become a major concern in recent years.
These deepfake videos can be used for malicious purposes such as spreading misinformation, impersonating individuals, and creating fake news.
Deepfake detection technologies use various approaches such as facial recognition, motion analysis, and audio-visual synchronization.
arXiv Detail & Related papers (2023-09-06T18:17:47Z)
- Detecting Deepfake by Creating Spatio-Temporal Regularity Disruption [94.5031244215761]
We propose to boost the generalization of deepfake detection by distinguishing the "regularity disruption" that does not appear in real videos.
Specifically, by carefully examining the spatial and temporal properties, we propose to disrupt a real video through a Pseudo-fake Generator.
Such practice allows us to achieve deepfake detection without using fake videos and improves the generalization ability in a simple and efficient manner.
arXiv Detail & Related papers (2022-07-21T10:42:34Z)
- A Survey of Deep Fake Detection for Trial Courts [2.320417845168326]
DeepFake algorithms can create fake images and videos that humans cannot distinguish from authentic ones.
It has become essential to detect fake videos to avoid spreading false information.
This paper presents a survey of methods used to detect DeepFakes and datasets available for detecting DeepFakes.
arXiv Detail & Related papers (2022-05-31T13:50:25Z)
- Audio-Visual Person-of-Interest DeepFake Detection [77.04789677645682]
The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world.
We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity.
Our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos.
arXiv Detail & Related papers (2022-04-06T20:51:40Z)
- Watch Those Words: Video Falsification Detection Using Word-Conditioned Facial Motion [82.06128362686445]
We propose a multi-modal semantic forensic approach to handle both cheapfakes and visually persuasive deepfakes.
We leverage the idea of attribution to learn person-specific biometric patterns that distinguish a given speaker from others.
Unlike existing person-specific approaches, our method is also effective against attacks that focus on lip manipulation.
arXiv Detail & Related papers (2021-12-21T01:57:04Z)
- Adversarially robust deepfake media detection using fused convolutional neural network predictions [79.00202519223662]
Current deepfake detection systems struggle against unseen data.
We employ three different deep Convolutional Neural Network (CNN) models to classify fake and real images extracted from videos.
The proposed technique outperforms state-of-the-art models with 96.5% accuracy.
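The fusion idea summarized above (combining the outputs of three CNN classifiers) can be sketched as a toy example. The models, probabilities, and the averaging rule below are all assumptions for illustration; the paper's actual fusion scheme and thresholds may differ.

```python
import numpy as np

def fuse_predictions(probs, threshold=0.5):
    """Fuse per-model 'fake' probabilities by simple averaging.

    probs: array-like of shape (n_models, n_frames), each entry the
    probability one model assigns to a frame being fake. Averaging is
    one common prediction-level fusion rule, used here as a stand-in.
    """
    probs = np.asarray(probs, dtype=float)
    fused = probs.mean(axis=0)           # per-frame fused score
    return fused, fused >= threshold     # score and fake/real decision

# Hypothetical outputs of three CNNs on three frames.
model_probs = [
    [0.9, 0.4, 0.1],   # model A
    [0.8, 0.7, 0.2],   # model B
    [0.7, 0.6, 0.3],   # model C
]
scores, is_fake = fuse_predictions(model_probs)
# scores → [0.8, 0.5667, 0.2]; frames 1 and 2 flagged fake, frame 3 real.
```

Fusing at the prediction level lets models with different failure modes compensate for one another, which is the intuition behind the robustness claim above.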
arXiv Detail & Related papers (2021-02-11T11:28:00Z)
- Detecting Deepfake Videos Using Euler Video Magnification [1.8506048493564673]
Deepfake videos are videos manipulated using advanced machine learning techniques.
In this paper, we examine a technique for possible identification of deepfake videos.
Our approach uses features extracted from the Euler technique to train three models to classify counterfeit and unaltered videos.
arXiv Detail & Related papers (2021-01-27T17:37:23Z)
- WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection [82.42495493102805]
We introduce a new dataset WildDeepfake which consists of 7,314 face sequences extracted from 707 deepfake videos collected completely from the internet.
We conduct a systematic evaluation of a set of baseline detection networks on both existing and our WildDeepfake datasets, and show that WildDeepfake is indeed a more challenging dataset, where the detection performance can decrease drastically.
arXiv Detail & Related papers (2021-01-05T11:10:32Z)
- How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals [9.918684475252636]
We propose an approach not only to separate deep fakes from real videos, but also to discover the specific generative model behind a deep fake.
Our results indicate that our approach can detect fake videos with 97.29% accuracy, and the source model with 93.39% accuracy.
arXiv Detail & Related papers (2020-08-26T03:35:47Z)
- Deepfake Video Forensics based on Transfer Learning [0.0]
"Deepfake" can create fake images and videos that humans cannot differentiate from the genuine ones.
This paper details retraining image classification models to capture the features of each deepfake video frame.
When evaluated on Deepfake videos, this technique achieved more than 87% accuracy.
arXiv Detail & Related papers (2020-04-29T13:21:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.