A Transferable Anti-Forensic Attack on Forensic CNNs Using A Generative
Adversarial Network
- URL: http://arxiv.org/abs/2101.09568v1
- Date: Sat, 23 Jan 2021 19:31:59 GMT
- Title: A Transferable Anti-Forensic Attack on Forensic CNNs Using A Generative
Adversarial Network
- Authors: Xinwei Zhao, Chen Chen, Matthew C. Stamm
- Abstract summary: Convolutional neural networks (CNNs) have become widely used in multimedia forensics.
Anti-forensic attacks have been developed to fool these CNN-based forensic algorithms.
We propose a new anti-forensic attack framework designed to remove forensic traces left by a variety of manipulation operations.
- Score: 24.032025811564814
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the development of deep learning, convolutional neural networks (CNNs)
have become widely used in multimedia forensics for tasks such as detecting and
identifying image forgeries. Meanwhile, anti-forensic attacks have been
developed to fool these CNN-based forensic algorithms. Previous anti-forensic
attacks were often designed to remove forgery traces left by a single
manipulation operation as opposed to a set of manipulations. Additionally,
recent research has shown that existing anti-forensic attacks against forensic
CNNs have poor transferability, i.e. they are unable to fool other forensic
CNNs that were not explicitly used during training. In this paper, we propose a
new anti-forensic attack framework designed to remove forensic traces left by a
variety of manipulation operations. This attack is transferable, i.e. it can be
used to attack forensic CNNs that are unknown to the attacker, and it introduces
only minimal distortions that are imperceptible to human eyes. Our proposed
attack utilizes a generative adversarial network (GAN) to build a generator
that can attack color images of any size. We achieve attack transferability
through the use of a new training strategy and loss function. We conduct
extensive experiments to demonstrate that our attack can fool many state-of-the-art
forensic CNNs with varying levels of knowledge available to the attacker.
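The abstract specifies neither the generator architecture nor the new training strategy and loss function, so the following is a minimal illustrative sketch (in PyTorch, with placeholder layer sizes, loss weights, and class indices) of the general approach it describes: a fully convolutional generator, which can therefore process color images of any size, trained with a fidelity term that keeps distortions imperceptible and a classification term that pushes surrogate forensic CNNs toward an "unaltered" decision. Training against several surrogate CNNs is shown here as one plausible way to encourage transferability; it is an assumption, not necessarily the paper's actual strategy.

```python
# Illustrative sketch only (not the authors' exact architecture or loss):
# a fully convolutional generator is trained so that its output (a) stays
# close to the input manipulated image and (b) is classified as "unaltered"
# by one or more surrogate forensic CNNs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AntiForensicGenerator(nn.Module):
    """Fully convolutional, so it accepts color images of any size."""
    def __init__(self, channels=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Predict a small residual and add it to the input, keeping the
        # introduced distortion low-amplitude by construction.
        return torch.clamp(x + 0.05 * self.net(x), 0.0, 1.0)

def attack_loss(generator, surrogate_cnns, x_manipulated,
                unaltered_class=0, fidelity_weight=10.0):
    """Fidelity term keeps distortion imperceptible; classification term
    pushes every surrogate forensic CNN toward the 'unaltered' class."""
    x_attacked = generator(x_manipulated)
    fidelity = F.mse_loss(x_attacked, x_manipulated)
    target = torch.full((x_attacked.size(0),), unaltered_class,
                        dtype=torch.long, device=x_attacked.device)
    fool = sum(F.cross_entropy(cnn(x_attacked), target)
               for cnn in surrogate_cnns) / len(surrogate_cnns)
    return fidelity_weight * fidelity + fool

# Example training step with a dummy surrogate forensic classifier.
if __name__ == "__main__":
    surrogate = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                              nn.Linear(8, 2))  # classes: unaltered / manipulated
    gen = AntiForensicGenerator()
    opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
    batch = torch.rand(4, 3, 128, 128)  # stand-in for manipulated images
    loss = attack_loss(gen, [surrogate], batch)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(float(loss))
```

In this sketch the attack cost at inference time is a single forward pass through the trained generator, and the residual formulation is one simple way to keep the added distortion small; the actual paper's loss weighting and training procedure may differ.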
Related papers
- Attacking Image Splicing Detection and Localization Algorithms Using
Synthetic Traces [17.408491376238008]
Recent advances in deep learning have enabled forensics researchers to develop a new class of image splicing detection and localization algorithms.
These algorithms identify spliced content by detecting localized inconsistencies in forensic traces using Siamese neural networks.
In this paper, we propose a new GAN-based anti-forensic attack that is able to fool state-of-the-art splicing detection and localization algorithms.
arXiv Detail & Related papers (2022-11-22T15:07:16Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into failing to detect any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most of the existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- Trace and Detect Adversarial Attacks on CNNs using Feature Response Maps [0.3437656066916039]
In this work, we propose a novel method for detecting adversarial examples, preventing adversarial attacks against convolutional neural networks (CNNs).
We do so by tracking adversarial perturbations in feature responses, allowing for automatic detection using average local spatial entropy (an illustrative sketch of one way such a statistic can be computed appears after this list).
arXiv Detail & Related papers (2022-08-24T11:05:04Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- Making GAN-Generated Images Difficult To Spot: A New Attack Against Synthetic Image Detectors [24.809185168969066]
We propose a new anti-forensic attack capable of fooling GAN-generated image detectors.
Our attack uses an adversarially trained generator to synthesize traces that these detectors associate with real images.
We show that our attack can fool eight state-of-the-art detection CNNs with synthetic images created using seven different GANs.
arXiv Detail & Related papers (2021-04-25T05:56:57Z)
- Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- The Effect of Class Definitions on the Transferability of Adversarial Attacks Against Forensic CNNs [24.809185168969066]
We show that adversarial attacks against CNNs trained to identify image manipulation fail to transfer to CNNs whose only difference is in the class definitions.
This has important implications for the future design of forensic CNNs that are robust to adversarial and anti-forensic attacks.
arXiv Detail & Related papers (2021-01-26T20:59:37Z)
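As a companion to the "Trace and Detect Adversarial Attacks on CNNs using Feature Response Maps" entry above, here is a minimal sketch of how an average local spatial entropy statistic over feature response maps could be computed. The window size, histogram bin count, and the choice of feature layer are illustrative assumptions, not the paper's settings; a real detector would compute this on activations from a chosen CNN layer and threshold the value or feed it to a small classifier.

```python
# Minimal sketch (assumed details): split each feature map into local windows,
# compute a histogram-based Shannon entropy per window, and average the values
# over windows and channels.
import numpy as np

def local_spatial_entropy(feature_map, window=8, bins=16):
    """feature_map: 2-D array (H, W) of CNN activations for one channel."""
    h, w = feature_map.shape
    lo, hi = feature_map.min(), feature_map.max() + 1e-8
    entropies = []
    for i in range(0, h - window + 1, window):
        for j in range(0, w - window + 1, window):
            patch = feature_map[i:i + window, j:j + window]
            hist, _ = np.histogram(patch, bins=bins, range=(lo, hi))
            p = hist / hist.sum()
            p = p[p > 0]
            entropies.append(-(p * np.log2(p)).sum())
    return float(np.mean(entropies)) if entropies else 0.0

def average_local_spatial_entropy(feature_maps, window=8, bins=16):
    """feature_maps: 3-D array (C, H, W); returns the channel-averaged statistic."""
    return float(np.mean([local_spatial_entropy(fm, window, bins)
                          for fm in feature_maps]))

# Example with random activations standing in for a real feature response map.
if __name__ == "__main__":
    fmaps = np.random.rand(32, 64, 64).astype(np.float32)
    print(average_local_spatial_entropy(fmaps))
```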