Synthesizing Black-box Anti-forensics DeepFakes with High Visual Quality
- URL: http://arxiv.org/abs/2312.10713v1
- Date: Sun, 17 Dec 2023 13:12:34 GMT
- Title: Synthesizing Black-box Anti-forensics DeepFakes with High Visual Quality
- Authors: Bing Fan, Shu Hu, Feng Ding
- Abstract summary: We propose a method to generate novel adversarial sharpening masks for launching black-box anti-forensics attacks.
We prove that the proposed method could successfully disrupt the state-of-the-art DeepFake detectors.
Compared with the images processed by existing DeepFake anti-forensics methods, the visual qualities of anti-forensics DeepFakes rendered by the proposed method are significantly refined.
- Score: 11.496745237311456
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: DeepFake, an AI technology for creating facial forgeries, has garnered global attention. Amid such circumstances, forensics researchers focus on developing defensive algorithms to counter these threats. In contrast, techniques have also been developed to enhance the aggressiveness of DeepFakes, e.g., through anti-forensics attacks that disrupt forensic detectors. However, such attacks often sacrifice image visual quality for improved undetectability. To address this issue, we propose a method to generate novel adversarial sharpening masks for launching black-box anti-forensics attacks. Unlike many existing works, with such perturbations injected, DeepFakes achieve high anti-forensics performance while exhibiting pleasant sharpening visual effects. Experimental evaluations show that the proposed method can successfully disrupt state-of-the-art DeepFake detectors. Moreover, compared with images processed by existing DeepFake anti-forensics methods, the visual quality of anti-forensics DeepFakes rendered by the proposed method is significantly refined.
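The attack idea in the abstract, injecting a perturbation that looks like ordinary image sharpening while suppressing the detector's fake score through black-box feedback, can be illustrated with a minimal sketch. This is not the authors' implementation: the `detector_score` oracle and the random-search loop below are hypothetical stand-ins for whatever mask generator and black-box access the paper actually uses.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(image, strength_map, sigma=1.0):
    """Unsharp masking: boost high-frequency detail by a per-pixel strength."""
    blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
    detail = image - blurred                          # high-frequency residual
    return np.clip(image + strength_map * detail, 0.0, 1.0)

def black_box_sharpening_attack(image, detector_score, steps=200, seed=0):
    """Random search over a smooth sharpening-strength map.

    `detector_score(img)` is a hypothetical black-box oracle returning the
    detector's fake probability; lower is better for the attacker.
    `image` is an (H, W, 3) float array in [0, 1].
    """
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    strength = np.full((h, w, 1), 0.5)                # start with mild sharpening
    best = detector_score(sharpen(image, strength))
    for _ in range(steps):
        # Propose a smooth local change to the strength map.
        proposal = strength + gaussian_filter(
            rng.normal(0.0, 0.2, size=(h, w, 1)), sigma=(8, 8, 0))
        proposal = np.clip(proposal, 0.0, 1.5)        # keep the effect a plausible sharpen
        score = detector_score(sharpen(image, proposal))
        if score < best:                              # keep proposals that fool the detector more
            strength, best = proposal, score
    return sharpen(image, strength), best
```

In the paper the sharpening mask is presumably produced by a learned generator rather than random search; the sketch only conveys how a sharpening-style perturbation can be steered by black-box detector feedback while the output still reads as a sharpened, visually pleasant image.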
Related papers
- Active Fake: DeepFake Camouflage [11.976015496109525]
Face-Swap DeepFake fabricates behaviors by swapping original faces with synthesized ones.
Existing forensic methods, primarily based on Deep Neural Networks (DNNs), effectively expose these manipulations and have become important authenticity indicators.
We introduce a new framework for creating DeepFake camouflage that generates blending inconsistencies while ensuring imperceptibility, effectiveness, and transferability.
arXiv Detail & Related papers (2024-09-05T02:46:36Z)
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and are not limited to forgery-specific artifacts, and thus generalize more strongly.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- GazeForensics: DeepFake Detection via Gaze-guided Spatial Inconsistency Learning [63.547321642941974]
We introduce GazeForensics, an innovative DeepFake detection method that utilizes gaze representation obtained from a 3D gaze estimation model.
Experiment results reveal that our proposed GazeForensics outperforms the current state-of-the-art methods.
arXiv Detail & Related papers (2023-11-13T04:48:33Z)
- On the Vulnerability of DeepFake Detectors to Attacks Generated by Denoising Diffusion Models [0.5827521884806072]
We investigate the vulnerability of single-image deepfake detectors to black-box attacks created by the newest generation of generative methods.
Our experiments are run on FaceForensics++, a widely used deepfake benchmark consisting of manipulated images.
Our findings indicate that employing just a single denoising diffusion step in the reconstruction process of a deepfake can significantly reduce the likelihood of detection (a minimal sketch of this idea follows the list below).
arXiv Detail & Related papers (2023-07-11T15:57:51Z)
- Attacking Image Splicing Detection and Localization Algorithms Using Synthetic Traces [17.408491376238008]
Recent advances in deep learning have enabled forensics researchers to develop a new class of image splicing detection and localization algorithms.
These algorithms identify spliced content by detecting localized inconsistencies in forensic traces using Siamese neural networks.
In this paper, we propose a new GAN-based anti-forensic attack that is able to fool state-of-the-art splicing detection and localization algorithms.
arXiv Detail & Related papers (2022-11-22T15:07:16Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model pursuing face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Exploring Frequency Adversarial Attacks for Face Forgery Detection [59.10415109589605]
We propose a frequency adversarial attack method against face forgery detectors.
Inspired by the idea of meta-learning, we also propose a hybrid adversarial attack that performs attacks in both the spatial and frequency domains.
arXiv Detail & Related papers (2022-03-29T15:34:13Z)
- DeepFake Detection with Inconsistent Head Poses: Reproducibility and Analysis [0.0]
We analyze an existing DeepFake detection technique based on head pose estimation.
Our results correct the current literature's perception of state-of-the-art performance for DeepFake detection.
arXiv Detail & Related papers (2021-08-28T22:56:09Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that there exists a correspondence between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the perspective of prediction confidence.
We propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Perception Matters: Exploring Imperceptible and Transferable Anti-forensics for GAN-generated Fake Face Imagery Detection [28.620523463372177]
Generative adversarial networks (GANs) can generate photo-realistic fake facial images which are perceptually indistinguishable from real face photos.
Here we explore more imperceptible and transferable anti-forensics for fake face imagery detection based on adversarial attacks.
We propose a novel adversarial attack method, better suitable for image anti-forensics, in the transformed color domain by considering visual perception.
arXiv Detail & Related papers (2020-10-29T18:54:06Z)
- Deep Spatial Gradient and Temporal Depth Learning for Face Anti-spoofing [61.82466976737915]
Depth-supervised learning has proven to be one of the most effective methods for face anti-spoofing.
We propose a new approach to detect presentation attacks from multiple frames based on two insights.
The proposed approach achieves state-of-the-art results on five benchmark datasets.
arXiv Detail & Related papers (2020-03-18T06:11:20Z)
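The single-step diffusion laundering described in the "On the Vulnerability of DeepFake Detectors to Attacks Generated by Denoising Diffusion Models" entry above can be sketched as follows. This is a hedged illustration rather than that paper's code: it assumes the `diffusers` library and the public `google/ddpm-celebahq-256` unconditional face model, noises a deepfake frame to a small timestep, and recovers a clean estimate from a single noise prediction, which tends to wash out low-level forensic traces.

```python
import torch
from diffusers import DDPMPipeline

# Assumption: an unconditional face diffusion model; the paper may use a different backbone.
pipe = DDPMPipeline.from_pretrained("google/ddpm-celebahq-256")
unet, scheduler = pipe.unet, pipe.scheduler

@torch.no_grad()
def one_step_reconstruct(image, t=50):
    """Noise a deepfake image to timestep t, then denoise it with a single prediction.

    `image` is a (1, 3, 256, 256) tensor scaled to [-1, 1].
    """
    timestep = torch.tensor([t])
    noise = torch.randn_like(image)
    noisy = scheduler.add_noise(image, noise, timestep)   # forward diffusion to step t
    eps_hat = unet(noisy, timestep).sample                # predicted noise
    # Closed-form estimate of the clean image from one noise prediction:
    # x0_hat = (x_t - sqrt(1 - alpha_bar_t) * eps_hat) / sqrt(alpha_bar_t)
    alpha_bar = scheduler.alphas_cumprod[t]
    x0_hat = (noisy - (1 - alpha_bar).sqrt() * eps_hat) / alpha_bar.sqrt()
    return x0_hat.clamp(-1, 1)
```

A full attack could iterate the scheduler's `step` method over several timesteps; the single prediction above mirrors the "just one denoising step" finding summarized in that entry.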
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.