Perception Matters: Exploring Imperceptible and Transferable
Anti-forensics for GAN-generated Fake Face Imagery Detection
- URL: http://arxiv.org/abs/2010.15886v1
- Date: Thu, 29 Oct 2020 18:54:06 GMT
- Title: Perception Matters: Exploring Imperceptible and Transferable
Anti-forensics for GAN-generated Fake Face Imagery Detection
- Authors: Yongwei Wang, Xin Ding, Li Ding, Rabab Ward, Z. Jane Wang
- Abstract summary: generative adversarial networks (GANs) can generate photo-realistic fake facial images which are perceptually indistinguishable from real face photos.
Here we explore more imperceptible and transferable anti-forensics for fake face imagery detection based on adversarial attacks.
We propose a novel adversarial attack method, better suited to image anti-forensics, that operates in a transformed color domain and takes visual perception into account.
- Score: 28.620523463372177
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, generative adversarial networks (GANs) can generate photo-realistic
fake facial images which are perceptually indistinguishable from real face
photos, promoting research on fake face detection. Though fake face forensics
can achieve high detection accuracy, their anti-forensic counterparts are less
investigated. Here we explore more imperceptible and transferable
anti-forensics for fake face imagery detection based on adversarial attacks.
Since facial and background regions are often smooth, even small perturbations
can cause noticeable perceptual impairment in fake face images, which renders
existing adversarial attacks ineffective as anti-forensic methods. Our
perturbation analysis reveals the intuitive reason for this perceptual
degradation when existing attacks are applied directly. We then propose a
novel adversarial attack method, better suited to image anti-forensics, that
operates in a transformed color domain and takes visual perception into
account. Simple yet effective, the proposed method can fool both deep-learning
and non-deep-learning based forensic detectors, achieving a higher attack
success rate and significantly improved visual quality. Specifically, when
adversaries treat imperceptibility as a constraint, the proposed anti-forensic
method improves the average attack success rate by around 30% on fake face
images over two baseline attacks. More imperceptible and more transferable,
the proposed method raises new security concerns for fake face imagery
detection. We have released our code for public use, and we hope the proposed
method can be further explored in related forensic applications as an
anti-forensic benchmark.
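The abstract describes the method only at a high level, but its central idea (perturb in a transformed color domain so the change stays perceptually small) can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration rather than the authors' released code: it takes a single FGSM-style step only on the chrominance channels of a simplified, offset-free BT.601 YCbCr decomposition, and the detector is a stand-in CNN, not an actual forensic model.

```python
import torch
import torch.nn as nn

# Offset-free BT.601 RGB -> YCbCr transform (pixel values assumed in [0, 1]).
RGB2YCBCR = torch.tensor([[ 0.2990,  0.5870,  0.1140],
                          [-0.1687, -0.3313,  0.5000],
                          [ 0.5000, -0.4187, -0.0813]])
YCBCR2RGB = torch.inverse(RGB2YCBCR)

def rgb_to_ycbcr(x):  # x: (B, 3, H, W)
    return torch.einsum('ij,bjhw->bihw', RGB2YCBCR, x)

def ycbcr_to_rgb(x):
    return torch.einsum('ij,bjhw->bihw', YCBCR2RGB, x)

# Stand-in fake/real classifier; a trained forensic detector would go here.
detector = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))

def chroma_fgsm(x_rgb, eps=2 / 255, real_class=0):
    """One FGSM-style step applied only to the Cb/Cr channels.

    Leaving the luminance (Y) channel untouched is a crude stand-in for the
    perceptual constraint discussed in the abstract.
    """
    ycc = rgb_to_ycbcr(x_rgb).detach().requires_grad_(True)
    logits = detector(ycbcr_to_rgb(ycc))
    target = torch.full((x_rgb.size(0),), real_class, dtype=torch.long)
    loss = nn.functional.cross_entropy(logits, target)  # want this small
    loss.backward()
    step = eps * ycc.grad.sign()
    step[:, 0] = 0.0                     # keep luma unchanged
    adv_ycc = (ycc - step).detach()      # descend: push toward the "real" class
    return ycbcr_to_rgb(adv_ycc).clamp(0.0, 1.0)

adv_face = chroma_fgsm(torch.rand(1, 3, 64, 64))  # dummy input for the sketch
```

Restricting the step to Cb/Cr is just one crude way to encode a perceptual prior; the authors' released code should be consulted for the actual transformed-color-domain formulation and constraints.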
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Semantic Contextualization of Face Forgery: A New Definition, Dataset, and Detection Method [77.65459419417533]
We put face forgery in a semantic context and define that computational methods that alter semantic face attributes are sources of face forgery.
We construct a large face forgery image dataset, where each image is associated with a set of labels organized in a hierarchical graph.
We propose a semantics-oriented face forgery detection method that captures label relations and prioritizes the primary task.
arXiv Detail & Related papers (2024-05-14T10:24:19Z)
- Synthesizing Black-box Anti-forensics DeepFakes with High Visual Quality [11.496745237311456]
We propose a method to generate novel adversarial sharpening masks for launching black-box anti-forensics attacks.
We prove that the proposed method could successfully disrupt the state-of-the-art DeepFake detectors.
Compared with the images processed by existing DeepFake anti-forensics methods, the visual qualities of anti-forensics DeepFakes rendered by the proposed method are significantly refined.
arXiv Detail & Related papers (2023-12-17T13:12:34Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection [58.1263969438364]
We propose adversarial head turn (AdvHeat) as the first attempt at 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat is better than conventional attacks on both the cross-detector transferability and robustness to defenses.
arXiv Detail & Related papers (2023-09-03T07:01:34Z)
- Evading Forensic Classifiers with Attribute-Conditioned Adversarial Faces [6.105361899083232]
We show that it is possible to successfully generate adversarial fake faces with a specified set of attributes.
We propose a framework to search for adversarial latent codes within the feature space of StyleGAN.
We also propose a meta-learning based optimization strategy to achieve transferable performance on unknown target models.
arXiv Detail & Related papers (2023-06-22T17:59:55Z)
- Building an Invisible Shield for Your Portrait against Deepfakes [34.65356811439098]
We propose a novel framework - Integrity Encryptor - aiming to protect portraits through a proactive strategy.
Our methodology involves covertly encoding messages that are closely associated with key facial attributes into authentic images.
The modified facial attributes serve as a means of detecting manipulated images through a comparison of the decoded messages.
arXiv Detail & Related papers (2023-05-22T10:01:28Z)
- Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems [19.259372985094235]
Malicious applications of deep learning systems pose a serious threat to individuals' privacy and reputation.
We propose a novel two-tier protection method named Information-containing Adversarial Perturbation (IAP)
We use an encoder to map a facial image and its identity message to a cross-model adversarial example which can disrupt multiple facial manipulation systems.
arXiv Detail & Related papers (2023-03-21T06:48:14Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model pursuing face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Exploring Frequency Adversarial Attacks for Face Forgery Detection [59.10415109589605]
We propose a frequency adversarial attack method against face forgery detectors.
Inspired by the idea of meta-learning, we also propose a hybrid adversarial attack that performs attacks in both the spatial and frequency domains.
arXiv Detail & Related papers (2022-03-29T15:34:13Z)
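Several entries above, like the main paper, move the perturbation out of raw pixel space; the last one (Exploring Frequency Adversarial Attacks for Face Forgery Detection) attacks in the frequency domain. As a rough, hedged sketch of that general idea (not a reproduction of that paper's attack), the snippet below takes an FGSM-style step on whole-image 2D DCT coefficients, again with a placeholder detector and assuming square inputs.

```python
import math
import torch
import torch.nn as nn

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix (its transpose inverts it)."""
    k = torch.arange(n, dtype=torch.float32)
    D = torch.cos(math.pi / n * (k[None, :] + 0.5) * k[:, None])
    D[0] /= math.sqrt(2.0)
    return D * math.sqrt(2.0 / n)

# Placeholder fake/real classifier standing in for an actual forensic detector.
detector = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))

def dct_fgsm(x, eps=0.02, real_class=0):
    """One FGSM-style step on whole-image 2D DCT coefficients (square inputs)."""
    n = x.shape[-1]
    D = dct_matrix(n)
    coeff = (D @ x @ D.T).detach().requires_grad_(True)   # per-channel 2D DCT
    logits = detector(D.T @ coeff @ D)                     # inverse DCT -> pixels
    target = torch.full((x.size(0),), real_class, dtype=torch.long)
    loss = nn.functional.cross_entropy(logits, target)
    loss.backward()
    adv_coeff = coeff - eps * coeff.grad.sign()            # step toward "real"
    return (D.T @ adv_coeff @ D).detach().clamp(0.0, 1.0)

adv_face = dct_fgsm(torch.rand(1, 3, 64, 64))  # dummy input for the sketch
```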