Decamouflage: A Framework to Detect Image-Scaling Attacks on
Convolutional Neural Networks
- URL: http://arxiv.org/abs/2010.03735v1
- Date: Thu, 8 Oct 2020 02:30:55 GMT
- Title: Decamouflage: A Framework to Detect Image-Scaling Attacks on
Convolutional Neural Networks
- Authors: Bedeuro Kim, Alsharif Abuadbba, Yansong Gao, Yifeng Zheng, Muhammad
Ejaz Ahmed, Hyoungshick Kim, Surya Nepal
- Abstract summary: Image scaling functions could be adversarially abused to perform an attack called the image-scaling attack.
This work presents an image-scaling attack detection framework, termed Decamouflage.
Decamouflage consists of three independent detection methods: (1) rescaling, (2) filtering/pooling, and (3) steganalysis.
- Score: 35.30705616146299
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As an essential processing step in computer vision applications, image
resizing or scaling, more specifically downsampling, has to be applied before
feeding a normally large image into a convolutional neural network (CNN) model
because CNN models typically take small fixed-size images as inputs. However,
image scaling functions could be adversarially abused to perform a newly
revealed attack called the image-scaling attack, which can affect a wide range of
computer vision applications built upon image-scaling functions.
This work presents an image-scaling attack detection framework, termed
Decamouflage. Decamouflage consists of three independent detection methods: (1)
rescaling, (2) filtering/pooling, and (3) steganalysis. While each of these
three methods is effective standalone, they can also work in an ensemble, which
not only improves detection accuracy but also hardens the defense against
potential adaptive attacks. Decamouflage uses a pre-determined detection
threshold that is generic: as we have validated, a threshold determined from one
dataset is also applicable to other datasets. Extensive experiments show that
Decamouflage achieves detection accuracy of 99.9% and 99.8% in the white-box
(with knowledge of the attack algorithms) and black-box (without knowledge of
the attack algorithms) settings, respectively. To corroborate the efficiency of
Decamouflage, we have also measured its run-time overhead on a personal PC with
an i5 CPU and found that Decamouflage can detect image-scaling attacks in
milliseconds. Overall, Decamouflage accurately detects image-scaling attacks in
both white-box and black-box settings with acceptable run-time overhead.
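To make the rescaling method concrete, below is a minimal sketch in Python (not the authors' code) of the round-trip idea: downscale the suspect image to the CNN input size, upscale it back to the original resolution, and compare the result with the original. A benign image survives this round trip almost unchanged, whereas an image-scaling attack image reveals its hidden payload after downscaling, so the round-tripped version deviates strongly. The 224x224 input size, the MSE metric, the threshold value, and the file name "suspect.png" are illustrative assumptions, not values taken from the paper.

import numpy as np
from PIL import Image

def rescaling_score(image_path, model_size=(224, 224)):
    """Mean squared error between an image and its downscale->upscale
    round trip; higher values are more suspicious."""
    original = Image.open(image_path).convert("RGB")
    # Downscale to the CNN input size, as the vision pipeline would do.
    small = original.resize(model_size, resample=Image.BILINEAR)
    # Upscale back to the original resolution.
    roundtrip = small.resize(original.size, resample=Image.BILINEAR)
    a = np.asarray(original, dtype=np.float32)
    b = np.asarray(roundtrip, dtype=np.float32)
    return float(np.mean((a - b) ** 2))

def is_scaling_attack(image_path, threshold=1000.0):
    # The paper reports that a single generic threshold transfers across
    # datasets; the value used here is a placeholder, not the published one.
    return rescaling_score(image_path) > threshold

if __name__ == "__main__":
    print(is_scaling_attack("suspect.png"))  # hypothetical input file

The filtering/pooling and steganalysis methods would replace the round-trip comparison with, respectively, a filtered version of the image and an analysis of embedding artifacts; the sketch above covers only the first method, and the three scores can be combined in an ensemble as described in the abstract.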
Related papers
- On the Detection of Image-Scaling Attacks in Machine Learning [11.103249083138213]
Image scaling is an integral part of machine learning and computer vision systems.
Image-scaling attacks modifying the entire scaled image can be reliably detected even under an adaptive adversary.
We show that our methods provide strong detection performance even if only minor parts of the image are manipulated.
arXiv Detail & Related papers (2023-10-23T16:46:28Z) - CCDN: Checkerboard Corner Detection Network for Robust Camera
Calibration [10.614480156920935]
The method combines a checkerboard corner detection network with some post-processing techniques.
The network model is a fully convolutional network with improvements to the loss function and learning rate.
To remove false positives, we employ three post-processing techniques: a threshold related to the maximum response, non-maximum suppression, and clustering.
arXiv Detail & Related papers (2023-02-10T07:47:44Z) - Self-Supervised Masked Convolutional Transformer Block for Anomaly
Detection [122.4894940892536]
We present a novel self-supervised masked convolutional transformer block (SSMCTB) that comprises the reconstruction-based functionality at a core architectural level.
In this work, we extend our previous self-supervised predictive convolutional attentive block (SSPCAB) with a 3D masked convolutional layer, a transformer for channel-wise attention, as well as a novel self-supervised objective based on Huber loss.
arXiv Detail & Related papers (2022-09-25T04:56:10Z) - A Perturbation Resistant Transformation and Classification System for
Deep Neural Networks [0.685316573653194]
Deep convolutional neural networks accurately classify a diverse range of natural images, but may be easily deceived by carefully designed, imperceptible perturbations.
In this paper, we design a multi-pronged training, unbounded input transformation, and image ensemble system that is attack-agnostic and not easily estimated.
arXiv Detail & Related papers (2022-08-25T02:58:47Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - SpectralDefense: Detecting Adversarial Attacks on CNNs in the Fourier
Domain [10.418647759223964]
We show how analysis in the Fourier domain of input images and feature maps can be used to distinguish benign test samples from adversarial images.
We propose two novel detection methods.
arXiv Detail & Related papers (2021-03-04T12:48:28Z) - Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in the 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3d parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
arXiv Detail & Related papers (2020-08-31T17:10:48Z) - Towards Dense People Detection with Deep Learning and Depth images [9.376814409561726]
This paper proposes a DNN-based system that detects multiple people from a single depth image.
Our neural network processes a depth image and outputs a likelihood map in image coordinates.
We show this strategy to be effective, producing networks that generalize to work with scenes different from those used during training.
arXiv Detail & Related papers (2020-07-14T16:43:02Z) - Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of the CNNs and learns better features for the fPAD task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)