Towards Understanding and Harnessing the Effect of Image Transformation
in Adversarial Detection
- URL: http://arxiv.org/abs/2201.01080v1
- Date: Tue, 4 Jan 2022 10:58:59 GMT
- Title: Towards Understanding and Harnessing the Effect of Image Transformation
in Adversarial Detection
- Authors: Hui Liu, Bo Zhao, Yuefeng Peng, Weidong Li, Peng Liu
- Abstract summary: Deep neural networks (DNNs) are under threat from adversarial examples.
Image transformation is one of the most effective approaches to detect adversarial examples.
We propose an improved approach by combining multiple image transformations.
- Score: 8.436194871428805
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) are under threat from adversarial examples.
Adversarial detection, which distinguishes adversarial images from benign ones, is
fundamental to robust DNN-based services. Image transformation
is one of the most effective approaches to detect adversarial examples. During
the last few years, a variety of image transformations have been studied and
discussed to design reliable adversarial detectors. In this paper, we
systematically review the recent progress on adversarial detection via image
transformations with a novel taxonomy. Then we conduct an extensive set of
experiments to test the detection performance of image transformations against
state-of-the-art adversarial attacks. Furthermore, we reveal that a single
transformation is not capable of detecting robust adversarial examples, and we
propose an improved approach that combines multiple image transformations.
The results show that the joint approach achieves significant improvements in
detection accuracy and recall. We suggest that the joint detector is a more
effective tool for detecting adversarial examples.
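As a rough illustration of the paper's central idea (not the authors' exact detector), the sketch below flags an image as adversarial when any of several image transformations flips the classifier's predicted label. The classifier, the transformation pool, and the OR decision rule are placeholder assumptions.

```python
# Minimal sketch of a joint transformation-based detector. The classifier,
# the transformation pool, and the OR decision rule are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

model = models.resnet18(weights=None).eval()  # stand-in classifier

def transformations(x):
    """A few candidate transformations of a (1, 3, H, W) tensor in [0, 1]."""
    yield TF.gaussian_blur(x, kernel_size=3)   # local smoothing
    yield torch.round(x * 7.0) / 7.0           # bit-depth reduction
    yield TF.rotate(x, angle=5.0)              # small rotation

@torch.no_grad()
def is_adversarial(x):
    """Joint detector: flag x if ANY transformation flips the predicted label."""
    label = model(x).argmax(dim=1)
    return any((model(t).argmax(dim=1) != label).item() for t in transformations(x))

x = torch.rand(1, 3, 224, 224)  # dummy input image
print(is_adversarial(x))
```

ORing transformations in this way raises recall at some cost in precision; thresholding the change in prediction confidence per transformation is a natural refinement.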
Related papers
- MMNet: Multi-Collaboration and Multi-Supervision Network for Sequential Deepfake Detection [81.59191603867586]
Sequential deepfake detection aims to identify forged facial regions with the correct sequence for recovery.
The recovery of forged images requires knowledge of the manipulation model to implement inverse transformations.
We propose Multi-Collaboration and Multi-Supervision Network (MMNet) that handles various spatial scales and sequential permutations in forged face images.
arXiv Detail & Related papers (2023-07-06T02:32:08Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining the detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Adaptive Image Transformations for Transfer-based Adversarial Attack [73.74904401540743]
We propose a novel architecture called the Adaptive Image Transformation Learner (AITL).
Our elaborately designed learner adaptively selects the most effective combination of image transformations specific to the input image.
Our method significantly improves the attack success rates on both normally trained models and defense models under various settings (a toy selection heuristic is sketched below).
arXiv Detail & Related papers (2021-11-27T08:15:44Z)
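AITL learns this selection end to end; as a far simpler stand-in, the sketch below greedily picks, per input, the pooled transformation that most increases a surrogate classifier's loss. The surrogate model, the transformation pool, and the greedy rule are all illustrative assumptions, not AITL's architecture.

```python
# Greedy stand-in for per-input transformation selection (not AITL itself):
# pick the candidate transformation that most increases the surrogate
# classifier's loss. The surrogate and the pool are assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

surrogate = models.resnet18(weights=None).eval()

POOL = {
    "blur":    lambda x: TF.gaussian_blur(x, kernel_size=3),
    "rotate":  lambda x: TF.rotate(x, angle=10.0),
    "rescale": lambda x: TF.resize(TF.resize(x, [112, 112]), [224, 224]),
}

@torch.no_grad()
def select_transformation(x, y):
    """Return the name of the pooled transformation maximizing the loss on (x, y)."""
    losses = {name: F.cross_entropy(surrogate(t(x)), y).item()
              for name, t in POOL.items()}
    return max(losses, key=losses.get)

x = torch.rand(1, 3, 224, 224)  # dummy image batch of one
y = torch.tensor([0])           # dummy ground-truth label
print(select_transformation(x, y))
```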
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find a close relationship between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the perspective of prediction confidence.
We propose a method that looks beyond image space via a two-stream architecture, in which the image stream focuses on pixel artifacts and the gradient stream handles confidence artifacts (a toy version of the two streams is sketched below).
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
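A caricature of the two-stream idea: one feature from pixel space (a high-pass residual magnitude) and one from confidence space (the softmax top-2 margin), to be fed into any binary detector. These features and the stand-in classifier are assumptions for illustration, not the paper's architecture.

```python
# Toy stand-ins for the two streams (illustrative assumptions, not the
# paper's design): the "image stream" measures high-frequency pixel
# residue, the "confidence stream" measures closeness to the boundary.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None).eval()  # stand-in classifier

def two_stream_features(x):
    # Image stream: mean magnitude of a Laplacian high-pass residual,
    # a crude proxy for pixel-level perturbation artifacts.
    lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
    lap = lap.view(1, 1, 3, 3).repeat(3, 1, 1, 1)    # one filter per channel
    residual = F.conv2d(x, lap, padding=1, groups=3)
    pixel_score = residual.abs().mean().item()

    # Confidence stream: softmax margin between the top-2 classes;
    # adversarial inputs often sit unusually close to decision boundaries.
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    top2 = probs.topk(2, dim=1).values
    margin = (top2[:, 0] - top2[:, 1]).item()
    return pixel_score, margin

x = torch.rand(1, 3, 224, 224)
print(two_stream_features(x))  # feed both features into any binary detector
```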
- Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting [71.57324258813674]
Convolutional neural networks (CNNs) have been shown to reach super-human performance in visual recognition tasks.
However, CNNs can easily be fooled by adversarial examples, i.e., maliciously crafted images that force the networks to predict an incorrect output.
This paper extensively explores the detection of adversarial examples via image transformations and proposes a novel methodology.
arXiv Detail & Related papers (2021-01-27T14:50:41Z)
- Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples (a standard error-diffusion pass is sketched below).
arXiv Detail & Related papers (2021-01-23T07:55:02Z)
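Error diffusion halftoning quantizes each pixel to black or white and pushes the quantization error onto unprocessed neighbors, destroying the fine-grained structure adversarial perturbations rely on. Below is a standard single-channel Floyd-Steinberg pass, not necessarily the paper's exact preprocessing.

```python
# Standard Floyd-Steinberg error diffusion on one grayscale channel;
# the paper's exact preprocessing pipeline may differ.
import numpy as np

def floyd_steinberg(img):
    """Binarize img (2-D float array in [0, 1]), diffusing quantization error."""
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                out[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                out[y + 1, x + 1] += err * 1 / 16
    return out

print(floyd_steinberg(np.random.rand(8, 8)))  # binary-valued output image
```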
- Unsupervised Change Detection in Satellite Images with Generative Adversarial Network [20.81970476609318]
We propose a novel change detection framework utilizing a special neural network architecture, a Generative Adversarial Network (GAN), to generate better coregistered images.
The optimized GAN model produces better coregistered images in which changes can be easily spotted; the change map is then obtained through a comparison strategy.
arXiv Detail & Related papers (2020-09-08T10:26:04Z)
- Learning Transformation-Aware Embeddings for Image Forensics [15.484408315588569]
Image Provenance Analysis aims at discovering relationships among different manipulated image versions that share content.
One of the main sub-problems for provenance analysis that has not yet been addressed directly is the edit ordering of images that share full content or are near-duplicates.
This paper introduces a novel deep learning-based approach to provide a plausible ordering to images that have been generated from a single image through transformations.
arXiv Detail & Related papers (2020-01-13T22:01:24Z)