Black-Box Attack against GAN-Generated Image Detector with Contrastive
Perturbation
- URL: http://arxiv.org/abs/2211.03509v1
- Date: Mon, 7 Nov 2022 12:56:14 GMT
- Title: Black-Box Attack against GAN-Generated Image Detector with Contrastive
Perturbation
- Authors: Zijie Lou, Gang Cao, Man Lin
- Abstract summary: We propose a new black-box attack method against GAN-generated image detectors.
A novel contrastive learning strategy is adopted to train the encoder-decoder-network-based anti-forensic model.
The proposed attack effectively reduces the accuracy of three state-of-the-art detectors on six popular GANs.
- Score: 0.4297070083645049
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visually realistic GAN-generated facial images raise obvious concerns about
potential misuse. Many effective forensic algorithms have been developed to
detect such synthetic images in recent years. It is therefore important to assess the
vulnerability of such forensic detectors against adversarial attacks. In this
paper, we propose a new black-box attack method against GAN-generated image
detectors. A novel contrastive learning strategy is adopted to train the
encoder-decoder-network-based anti-forensic model under a contrastive loss
function. GAN images and their simulated real counterparts are constructed as
positive and negative samples, respectively. Leveraging the trained attack
model, an imperceptible contrastive perturbation can be applied to input
synthetic images to partially remove the GAN fingerprint. As such, existing
GAN-generated image detectors are expected to be deceived. Extensive
experimental results verify that the proposed attack effectively reduces the
accuracy of three state-of-the-art detectors on six popular GANs. High visual
quality of the attacked images is also achieved. The source code will be
available at https://github.com/ZXMMD/BAttGAND.
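To make the described pipeline concrete, the following is a minimal PyTorch sketch of the training idea summarized above: an encoder-decoder generator produces a bounded additive perturbation, and a contrastive loss pulls attacked GAN images toward their simulated real counterparts (positives) while pushing them away from unmodified GAN images (negatives). The module names, loss form, perturbation bound, and the added visual-quality term are illustrative assumptions, not the authors' released implementation (see the repository linked above).
```python
# Hypothetical sketch of the contrastive anti-forensic training step; all names
# (PerturbationGenerator, the feature extractor, eps, the MSE weight) are
# assumptions for illustration, not code from the BAttGAND repository.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Toy encoder-decoder that maps a GAN image to a small additive perturbation."""
    def __init__(self, channels=3, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width, channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x, eps=8.0 / 255.0):
        delta = eps * self.decoder(self.encoder(x))          # bounded perturbation
        return torch.clamp(x + delta, 0.0, 1.0)              # attacked image

def contrastive_loss(feat_attacked, feat_real, feat_gan, tau=0.1):
    """Pull attacked-image features toward simulated-real features (positives)
    and away from untouched GAN-image features (negatives)."""
    z_a = F.normalize(feat_attacked, dim=1)
    z_p = F.normalize(feat_real, dim=1)
    z_n = F.normalize(feat_gan, dim=1)
    pos = torch.exp((z_a * z_p).sum(dim=1) / tau)            # one positive per sample
    neg = torch.exp(z_a @ z_n.t() / tau).sum(dim=1)          # all GAN images as negatives
    return -torch.log(pos / (pos + neg)).mean()

# One illustrative training step with random stand-in tensors.
gan_images = torch.rand(4, 3, 64, 64)                        # batch of GAN outputs
real_like = torch.rand(4, 3, 64, 64)                         # "simulated real" counterparts
generator = PerturbationGenerator()
features = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))  # toy feature extractor
attacked = generator(gan_images)
loss = contrastive_loss(features(attacked), features(real_like), features(gan_images)) \
       + 10.0 * F.mse_loss(attacked, gan_images)             # visual-quality term (assumed)
loss.backward()
```
The exact positive/negative construction and loss in the paper may differ; the sketch only conveys the overall structure of generator, bounded perturbation, and contrastive objective over features.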
Related papers
- Vulnerabilities in AI-generated Image Detection: The Challenge of Adversarial Attacks [17.87119255294563]
We investigate the vulnerability of state-of-the-art AIGI detectors against adversarial attacks under white-box and black-box settings.
We propose a new attack containing two main parts. First, inspired by the obvious difference between real and fake images in the frequency domain, we add perturbations in the frequency domain to push the image away from its original frequency distribution.
We show that adversarial attack is truly a real threat to AIGI detectors, because FPBA can deliver successful black-box attacks across models, generators, defense methods, and even evade cross-generator detection.
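A rough, hedged illustration of the frequency-domain idea in this entry: nudge a fake image's magnitude spectrum away from its own statistics and toward an average real-image spectrum, then invert. The interpolation rule, step size, and reference statistics below are assumptions for the sketch, not FPBA's actual procedure.
```python
# Sketch only: frequency-domain perturbation via spectrum interpolation.
import numpy as np

def frequency_push(image, real_mag_mean, strength=0.1):
    """image: HxW array in [0, 1]; real_mag_mean: average magnitude spectrum of
    real images (same shape). Returns the image with its spectrum nudged toward it."""
    spec = np.fft.fft2(image)
    mag, phase = np.abs(spec), np.angle(spec)
    # Interpolate the magnitude toward real-image statistics; keep the phase so
    # the visible content is largely preserved.
    new_mag = (1.0 - strength) * mag + strength * real_mag_mean
    perturbed = np.fft.ifft2(new_mag * np.exp(1j * phase)).real
    return np.clip(perturbed, 0.0, 1.0)

# Stand-in grayscale data for a quick check.
fake = np.random.rand(64, 64)
real_reference = np.abs(np.fft.fft2(np.random.rand(64, 64)))
attacked = frequency_push(fake, real_reference)
```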
arXiv Detail & Related papers (2024-07-30T14:07:17Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! [52.0855711767075]
EvoSeed is an evolutionary strategy-based algorithmic framework for generating photo-realistic natural adversarial samples.
We employ CMA-ES to optimize the search for an initial seed vector which, when processed by the Conditional Diffusion Model, results in a natural adversarial sample that is misclassified by the target model.
Experiments show that the generated adversarial images are of high image quality, raising concerns about harmful content that bypasses safety classifiers.
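The seed-search loop in this entry can be pictured with the pycma package (an assumed tool choice); the toy generator and classifier below stand in for EvoSeed's conditional diffusion model and the attacked classifier, and the objective is only illustrative.
```python
# Sketch only: CMA-ES over an initial seed vector (pycma: `pip install cma`).
import cma
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((64, 16))       # toy "generator": 16-d seed -> 8x8 image
w_cls = rng.standard_normal(64)               # toy linear "classifier" for the true class

def generate(seed):
    return np.tanh(W @ np.asarray(seed)).reshape(8, 8)

def true_class_confidence(image):
    return 1.0 / (1.0 + np.exp(-w_cls @ image.ravel()))

# Search for a seed whose generated image receives low confidence on the true
# class (i.e. is misclassified) while the seed stays small.
es = cma.CMAEvolutionStrategy(16 * [0.0], 0.5, {"maxiter": 50, "verbose": -9})
while not es.stop():
    seeds = es.ask()
    losses = [true_class_confidence(generate(s)) + 0.01 * np.linalg.norm(s) for s in seeds]
    es.tell(seeds, losses)

adversarial_image = generate(es.result.xbest)
```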
arXiv Detail & Related papers (2024-02-07T09:39:29Z)
- Attacking Image Splicing Detection and Localization Algorithms Using Synthetic Traces [17.408491376238008]
Recent advances in deep learning have enabled forensics researchers to develop a new class of image splicing detection and localization algorithms.
These algorithms identify spliced content by detecting localized inconsistencies in forensic traces using Siamese neural networks.
In this paper, we propose a new GAN-based anti-forensic attack that is able to fool state-of-the-art splicing detection and localization algorithms.
arXiv Detail & Related papers (2022-11-22T15:07:16Z)
- Misleading Deep-Fake Detection with GAN Fingerprints [14.459389888856412]
We show that an adversary can remove indicative artifacts, the GAN fingerprint, directly from the frequency spectrum of a generated image.
Our results show that an adversary can often remove GAN fingerprints and thus evade the detection of generated images.
arXiv Detail & Related papers (2022-05-25T07:32:12Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Making GAN-Generated Images Difficult To Spot: A New Attack Against Synthetic Image Detectors [24.809185168969066]
We propose a new anti-forensic attack capable of fooling GAN-generated image detectors.
Our attack uses an adversarially trained generator to synthesize traces that these detectors associate with real images.
We show that our attack can fool eight state-of-the-art detection CNNs with synthetic images created using seven different GANs.
arXiv Detail & Related papers (2021-04-25T05:56:57Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find a consistent relationship between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the perspective of prediction confidence.
We propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
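A toy PyTorch sketch of such a two-stream detector follows: one branch consumes the image itself, the other consumes the gradient of the classifier's top score with respect to the input, as a stand-in for the confidence/gradient stream. The layer sizes, fusion, and gradient definition are assumptions for illustration, not the paper's architecture.
```python
# Sketch only: two-stream adversarial-example detector (image + input-gradient).
import torch
import torch.nn as nn

def small_branch():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 16), nn.ReLU())

class TwoStreamDetector(nn.Module):
    def __init__(self, classifier):
        super().__init__()
        self.classifier = classifier
        self.image_stream = small_branch()
        self.grad_stream = small_branch()
        self.head = nn.Linear(32, 2)                      # clean vs. adversarial

    def input_gradient(self, x):
        x = x.clone().requires_grad_(True)
        top = self.classifier(x).max(dim=1).values.sum()  # top-class score
        (grad,) = torch.autograd.grad(top, x)
        return grad.detach()

    def forward(self, x):
        feats = torch.cat([self.image_stream(x),
                           self.grad_stream(self.input_gradient(x))], dim=1)
        return self.head(feats)

clf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # stand-in classifier
logits = TwoStreamDetector(clf)(torch.rand(2, 3, 32, 32))        # (2, 2) detection logits
```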
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Exploring Adversarial Fake Images on Face Manifold [5.26916168336451]
Images synthesized by powerful generative adversarial network (GAN) based methods have raised moral and privacy concerns.
In this paper, instead of adding adversarial noise, we optimally search for adversarial points on the face manifold to generate anti-forensic fake face images.
arXiv Detail & Related papers (2021-01-09T02:08:59Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-perceptible noise to real images.
We propose a portable defense method, the online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by synthesizing another image online from scratch for an input image, instead of removing or destroying adversarial noise.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the fPAD task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)