Efficient detection of adversarial images
- URL: http://arxiv.org/abs/2007.04564v1
- Date: Thu, 9 Jul 2020 05:35:49 GMT
- Title: Efficient detection of adversarial images
- Authors: Darpan Kumar Yadav, Kartik Mundra, Rahul Modpur, Arpan Chattopadhyay and Indra Narayan Kar
- Abstract summary: Some or all pixel values of an image are modified by an external attacker, so that the change is almost invisible to the human eye.
This paper first proposes a novel pre-processing technique that facilitates the detection of such modified images.
An adaptive version of this algorithm is proposed, in which the number of perturbations is chosen adaptively.
- Score: 2.6249027950824506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, detection of deception attacks on deep neural network (DNN)
based image classification in autonomous and cyber-physical systems is
considered. Several studies have shown the vulnerability of DNN to malicious
deception attacks. In such attacks, some or all pixel values of an image are
modified by an external attacker, so that the change is almost invisible to the
human eye but significant enough for a DNN-based classifier to misclassify it.
This paper first proposes a novel pre-processing technique that facilitates the
detection of such modified images under any DNN-based image classifier and any
attacker model. The proposed pre-processing algorithm combines principal
component analysis (PCA)-based decomposition of the image with
random-perturbation-based detection to reduce computational complexity. Next,
an adaptive version of this algorithm is proposed, in which the number of
perturbations is chosen adaptively using a doubly-threshold
policy, and the threshold values are learnt via stochastic approximation in
order to minimize the expected number of perturbations subject to constraints
on the false alarm and missed detection probabilities. Numerical experiments
show that the proposed detection scheme outperforms a competing algorithm while
achieving reasonably low computational complexity.
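The pipeline described above lends itself to a short illustration. Below is a minimal sketch of the detection idea, assuming a grayscale image as a NumPy array and a classifier exposed as a callable that returns a label; the function names, the flip-rate statistic, the noise model, and the fixed thresholds `t_low`/`t_high` are illustrative assumptions (the paper learns its thresholds via stochastic approximation to meet false-alarm and missed-detection constraints).

```python
import numpy as np

def pca_compress(image, k):
    """Rank-k PCA reconstruction of an (H, W) grayscale image, treating
    rows as observations. This concrete decomposition is an illustrative
    choice, not necessarily the paper's exact one."""
    mean = image.mean(axis=0)
    centered = image - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean + (centered @ vt[:k].T) @ vt[:k]

def detect_adversarial(image, classifier, k=20, sigma=0.05,
                       n_min=2, n_max=16, t_low=0.2, t_high=0.8):
    """Doubly-threshold detection sketch: perturb the PCA-compressed image
    with random noise and track how often the predicted label flips.
    Adversarially modified inputs tend to sit near decision boundaries,
    so their labels flip more often. Stop as soon as the flip rate
    crosses either threshold, which keeps the expected number of
    perturbations low."""
    base = pca_compress(image, k)
    base_label = classifier(base)
    flips = 0
    for n in range(1, n_max + 1):
        noisy = base + sigma * np.random.randn(*base.shape)
        flips += int(classifier(noisy) != base_label)
        if n >= n_min:
            rate = flips / n
            if rate >= t_high:
                return True    # declare the image adversarial
            if rate <= t_low:
                return False   # declare the image clean
    return flips / n_max >= t_high
```

In practice `sigma`, `k`, and the two thresholds would be calibrated on held-out clean and attacked images rather than fixed by hand as here.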
Related papers
- Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm [6.515472477685614]
The susceptibility of deep neural networks (DNNs) to adversarial attacks undermines their reliability across numerous applications.
We introduce the Enhanced Targeted DeepFool (ET DeepFool) algorithm, an evolution of DeepFool.
Our empirical investigations demonstrate the superiority of this refined approach in maintaining the integrity of images.
arXiv Detail & Related papers (2023-10-18T18:50:39Z)
- RCDN -- Robust X-Corner Detection Algorithm based on Advanced CNN Model [3.580983453285039]
We present a novel detection algorithm that maintains high sub-pixel precision on inputs subject to multiple types of interference.
The whole algorithm, adopting a coarse-to-fine strategy, contains an X-corner detection network and three post-processing techniques.
Evaluations on real and synthetic images indicate that the presented algorithm achieves higher detection rate, sub-pixel accuracy, and robustness than other commonly used methods.
arXiv Detail & Related papers (2023-07-07T10:40:41Z)
- Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) in various algorithms and frameworks.
We propose a novel purification approach, referred to as the guided diffusion model for purification (GDMP).
In comprehensive experiments across various datasets, the proposed GDMP is shown to reduce the perturbations introduced by adversarial attacks to a negligible level.
arXiv Detail & Related papers (2022-05-30T10:11:15Z)
- Residue-Based Natural Language Adversarial Attack Detection [1.4213973379473654]
This work proposes a simple sentence-embedding "residue"-based detector to identify adversarial examples.
On many tasks, it outperforms detectors ported from the image domain and recent state-of-the-art NLP-specific detectors (a minimal residue-scoring sketch appears after this list).
arXiv Detail & Related papers (2022-04-17T17:47:47Z)
- Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP).
MAP causes natural images to be misclassified with high probability after only a single gradient-ascent update step.
We show that these perturbations are not only image-agnostic but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures (a one-step adaptation sketch appears after this list).
arXiv Detail & Related papers (2021-11-19T16:01:45Z)
- Hierarchical Convolutional Neural Network with Feature Preservation and Autotuned Thresholding for Crack Detection [5.735035463793008]
Drone imagery is increasingly used in automated inspection for infrastructure surface defects.
This paper proposes a deep learning approach using hierarchical convolutional neural networks with feature preservation.
The proposed technique is then applied to identify cracks on the surfaces of roads, bridges, and pavements.
arXiv Detail & Related papers (2021-04-21T13:07:58Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed that utilizes a Bayesian optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and false-alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
- Determining Sequence of Image Processing Technique (IPT) to Detect Adversarial Attacks [4.431353523758957]
We propose an evolutionary approach to automatically determine Image Processing Techniques Sequence (IPTS) for detecting malicious inputs.
A detection framework based on a genetic algorithm (GA) is developed to find the optimal IPTS.
A set of IPTS is selected dynamically at test time and works as a filter against adversarial attacks.
arXiv Detail & Related papers (2020-07-01T08:59:14Z)
- RAIN: A Simple Approach for Robust and Accurate Image Classification Networks [156.09526491791772]
It has been shown that the majority of existing adversarial defense methods achieve robustness at the cost of prediction accuracy.
This paper proposes a novel preprocessing framework, which we term Robust and Accurate Image classificatioN (RAIN).
RAIN applies randomization over inputs to break the ties between the model forward prediction path and the backward gradient path, thus improving the model robustness.
We conduct extensive experiments on the STL10 and ImageNet datasets to verify the effectiveness of RAIN against various types of adversarial attacks (an input-randomization sketch appears after this list).
arXiv Detail & Related papers (2020-04-24T02:03:56Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem setups (a gradient-norm scoring sketch appears after this list).
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
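For the residue-based NLP detector above, the following is a minimal sketch under the assumption that the "residue" is the component of a sentence embedding lying outside the top-k principal subspace of clean training embeddings; the function names and that exact definition are assumptions, not necessarily the paper's stated construction.

```python
import numpy as np

def fit_residue_detector(train_emb, k=50):
    """Fit on clean sentence embeddings (one row per sentence): store the
    mean and the top-k principal directions of the training embeddings."""
    mean = train_emb.mean(axis=0)
    _, _, vt = np.linalg.svd(train_emb - mean, full_matrices=False)
    return mean, vt[:k]          # (mean vector, k x D basis)

def residue_score(emb, mean, basis):
    """Norm of the embedding component outside the top-k subspace; large
    residues are flagged as potentially adversarial."""
    centered = emb - mean
    projected = (centered @ basis.T) @ basis
    return np.linalg.norm(centered - projected, axis=-1)
```

A threshold on the score would then be calibrated so that a chosen fraction of clean validation sentences is retained.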
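The Meta Adversarial Perturbations entry describes perturbations that become effective after a single gradient-ascent step. Here is a hedged PyTorch sketch of that one-step adaptation; the step size, the sign update, and the L-infinity projection are all illustrative assumptions rather than the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def one_step_adapted_map(model, x, y, v, alpha=0.01, eps=8 / 255):
    """Start from a meta perturbation v and take a single gradient-ascent
    step on the classification loss for this batch, then project back to
    the epsilon ball and the valid pixel range."""
    delta = v.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    adapted = (v + alpha * delta.grad.sign()).clamp(-eps, eps)
    return (x + adapted).clamp(0, 1)
```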
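The RAIN entry attributes its robustness to input randomization that decouples the attacker's gradient path from the forward pass. The sketch below illustrates that general idea, using random resizing and padding as stand-ins for whichever randomization operations RAIN actually applies.

```python
import torch
import torch.nn.functional as F

def randomized_input(x, pad_max=8):
    """Randomize a batch of images (N, C, H, W) before classification, so
    gradients computed by an attacker no longer match the forward path."""
    n, c, h, w = x.shape
    scale = 1.0 + 0.1 * torch.rand(1).item()   # random resize factor
    x = F.interpolate(x, scale_factor=scale, mode="bilinear",
                      align_corners=False)
    pads = torch.randint(0, pad_max + 1, (4,)).tolist()
    x = F.pad(x, pads)                         # random left/right/top/bottom pad
    return F.interpolate(x, size=(h, w), mode="bilinear",
                         align_corners=False)  # back to the original size
```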
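Finally, the GraN entry describes a gradient-norm-based detector. A simplified sketch of gradient-norm scoring follows: take the loss of the network's own prediction and collect per-layer parameter-gradient norms. GraN additionally feeds such features to a small calibrated classifier, and its exact loss and layer selection are details this sketch does not reproduce.

```python
import torch
import torch.nn.functional as F

def grad_norm_features(model, x):
    """Compute the loss of the model against its own predicted labels and
    return one gradient norm per parameter tensor; unusually large norms
    are treated as evidence of adversarial or misclassified inputs."""
    model.zero_grad()
    logits = model(x)
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    return torch.stack([p.grad.norm() for p in model.parameters()
                        if p.grad is not None])
```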