Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors
- URL: http://arxiv.org/abs/2204.08612v1
- Date: Tue, 19 Apr 2022 02:24:30 GMT
- Title: Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors
- Authors: Nyee Thoang Lim, Meng Yi Kuan, Muxin Pu, Mei Kuan Lim, Chun Yong Chong
- Abstract summary: There is a dire need for deepfake detection technology to help spot deepfake media.
Current deepfake detection models are able to achieve outstanding accuracy (>90%).
This study identifies makeup application as an adversarial attack that could fool deepfake detectors.
- Score: 2.0649235321315285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deepfakes utilise Artificial Intelligence (AI) techniques to create synthetic
media where the likeness of one person is replaced with another. There are
growing concerns that deepfakes can be maliciously used to create misleading
and harmful digital content. As deepfakes become more common, there is a dire
need for deepfake detection technology to help spot deepfake media. Present
deepfake detection models are able to achieve outstanding accuracy (>90%).
However, most of them are limited to the within-dataset scenario, where the same
dataset is used for training and testing. Most models do not generalise well
enough in the cross-dataset scenario, where models are tested on unseen datasets
from another source. Furthermore, state-of-the-art deepfake detection models
rely on neural network-based classification models that are known to be
vulnerable to adversarial attacks. Motivated by the need for a robust deepfake
detection model, this study adapts metamorphic testing (MT) principles to help
identify potential factors that could influence the robustness of the examined
model, while overcoming the test oracle problem in this domain. Metamorphic
testing is specifically chosen as the testing technique as it fits our demand
to address learning-based system testing with probabilistic outcomes from
largely black-box components, based on potentially large input domains. We
performed our evaluations on the MesoInception-4 and TwoStreamNet models, two
state-of-the-art deepfake detection models. This study identified makeup
application as an adversarial attack that could fool deepfake detectors. Our
experimental results demonstrate that both the MesoInception-4 and TwoStreamNet
models degrade in performance by up to 30% when the input data is
perturbed with makeup.
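To make the methodology concrete, the sketch below illustrates the kind of metamorphic relation the abstract describes: a detector's fake-probability score should stay stable when a benign makeup-style perturbation is applied to the input face. This is a hypothetical Python illustration, not the authors' code; the detector handle, the lip_mask, and the apply_lipstick transform are all assumptions introduced here for clarity.

```python
import numpy as np

def apply_lipstick(image: np.ndarray, lip_mask: np.ndarray,
                   color=(180, 30, 60), alpha=0.4) -> np.ndarray:
    """Toy makeup transform: alpha-blend a lipstick colour into the lip region.

    image    -- HxWx3 uint8 face image
    lip_mask -- HxW boolean mask selecting the lip pixels (assumed given)
    """
    perturbed = image.astype(np.float32)          # astype returns a copy
    tint = np.asarray(color, dtype=np.float32)
    perturbed[lip_mask] = (1 - alpha) * perturbed[lip_mask] + alpha * tint
    return perturbed.clip(0, 255).astype(np.uint8)

def violates_relation(detector, image, lip_mask, tolerance=0.1) -> bool:
    """Metamorphic relation check: the fake-probability score should not
    change materially under a benign makeup perturbation."""
    source_score = detector(image)                        # P(fake) on original
    followup_score = detector(apply_lipstick(image, lip_mask))
    return abs(source_score - followup_score) > tolerance
```

Running such a check over a corpus of faces flags robustness failures without reference labels: the follow-up output is judged against the source output rather than against ground truth, which is how MT sidesteps the test oracle problem.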
Related papers
- DF40: Toward Next-Generation Deepfake Detection [62.073997142001424]
Existing works identify top-notch detection algorithms and models by adhering to the common practice: training detectors on one specific dataset and testing them on other prevalent deepfake datasets.
But can these stand-out "winners" be truly applied to tackle the myriad of realistic and diverse deepfakes lurking in the real world?
We construct a highly diverse deepfake detection dataset called DF40, which comprises 40 distinct deepfake techniques.
arXiv Detail & Related papers (2024-06-19T12:35:02Z)
- Masked Conditional Diffusion Model for Enhancing Deepfake Detection [20.018495944984355]
We propose a Masked Conditional Diffusion Model (MCDM) for enhancing deepfake detection.
It generates a variety of forged faces from a masked pristine one, encouraging the deepfake detection model to learn generic and robust representations.
arXiv Detail & Related papers (2024-02-01T12:06:55Z)
- AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors [24.78672820633581]
Deep generative models can create remarkably realistic fake images while raising concerns about misinformation and copyright infringement.
Deepfake detection techniques have been developed to distinguish between real and fake images.
We propose a novel approach called AntifakePrompt, using Vision-Language Models and prompt tuning techniques.
arXiv Detail & Related papers (2023-10-26T14:23:45Z)
- CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capability of a well-trained model using its in-distribution (ID) data.
Our method utilizes a mask to identify memorized atypical samples, and then fine-tunes the model or prunes it with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- Fairness Evaluation in Deepfake Detection Models using Metamorphic Testing [2.0649235321315285]
This work evaluates how deepfake detection models behave under anomalous inputs.
We have chosen MesoInception-4, a state-of-the-art deepfake detection model, as the target model.
We focus on revealing potential gender biases in DL and AI systems.
arXiv Detail & Related papers (2022-03-14T02:44:56Z)
- Voice-Face Homogeneity Tells Deepfake [56.334968246631725]
Existing detection approaches focus on exploring the specific artifacts in deepfake videos.
We propose to perform deepfake detection from a previously unexplored voice-face matching perspective.
Our model obtains significantly improved performance as compared to other state-of-the-art competitors.
arXiv Detail & Related papers (2022-03-04T09:08:50Z)
- Evaluating Deep Learning Models and Adversarial Attacks on Accelerometer-Based Gesture Authentication [6.961253535504979]
We use a deep convolutional generative adversarial network (DC-GAN) to create adversarial samples.
We show that our deep learning model is surprisingly robust to such an attack scenario.
arXiv Detail & Related papers (2021-10-03T00:15:50Z)
- TAR: Generalized Forensic Framework to Detect Deepfakes using Weakly Supervised Learning [17.40885531847159]
Deepfakes have become a critical social problem, and detecting them is of utmost importance.
In this work, we introduce a practical digital forensic tool to detect different types of deepfakes simultaneously.
We develop an autoencoder-based detection model with residual blocks and sequentially perform transfer learning to detect different types of deepfakes simultaneously.
arXiv Detail & Related papers (2021-05-13T07:31:08Z)
- Adversarially robust deepfake media detection using fused convolutional neural network predictions [79.00202519223662]
Current deepfake detection systems struggle against unseen data.
We employ three different deep Convolutional Neural Network (CNN) models to classify fake and real images extracted from videos.
The proposed technique outperforms state-of-the-art models with 96.5% accuracy.
arXiv Detail & Related papers (2021-02-11T11:28:00Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)