A Perturbation Resistant Transformation and Classification System for
Deep Neural Networks
- URL: http://arxiv.org/abs/2208.11839v1
- Date: Thu, 25 Aug 2022 02:58:47 GMT
- Title: A Perturbation Resistant Transformation and Classification System for
Deep Neural Networks
- Authors: Nathaniel Dean, Dilip Sarkar
- Abstract summary: Deep convolutional neural networks accurately classify a diverse range of natural images, but may be easily deceived when designed, imperceptible perturbations are embedded in the images.
In this paper, we design a multi-pronged training, input transformation, and image ensemble system that is attack agnostic and not easily estimated.
- Score: 0.685316573653194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional neural networks accurately classify a diverse range of
natural images, but may be easily deceived when designed, imperceptible
perturbations are embedded in the images. In this paper, we design a
multi-pronged training, input transformation, and image ensemble system that is
attack agnostic and not easily estimated. Our system incorporates two novel
features. The first is a transformation layer that computes feature level
polynomial kernels from class-level training data samples and iteratively
updates input image copies at inference time based on their feature kernel
differences to create an ensemble of transformed inputs. The second is a
classification system that incorporates the prediction of the undefended
network with a hard vote on the ensemble of filtered images. Our evaluations on
the CIFAR10 dataset show our system improves the robustness of an undefended
network against a variety of bounded and unbounded white-box attacks under
different distance metrics, while sacrificing little accuracy on clean images.
Against adaptive full-knowledge attackers creating end-to-end attacks, our
system successfully augments the existing robustness of adversarially trained
networks, for which our methods are most effectively applied.
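The abstract names the two components but not their exact form (kernel degree, update rule, ensemble size, or how class-level reference features are stored). The sketch below is a minimal, assumption-laden illustration of how such a system could be wired together in PyTorch: `feature_net`, `class_refs`, and all hyperparameters are placeholders, not the authors' implementation.
```python
# Hedged sketch of the two components described in the abstract. The kernel
# degree, step size, noise scale, and ensemble size are assumptions.
import torch

def poly_kernel(a, b, degree=2, c=1.0):
    # Feature-level polynomial kernel between feature vectors a [1, d] and b [n, d].
    return (a @ b.t() + c) ** degree

def transformed_ensemble(x, feature_net, class_refs, n_copies=8, steps=10, lr=0.05):
    """Iteratively update copies of a single image x [1, C, H, W] so their
    feature-level kernel responses move toward the stored class-level reference
    features (class_refs: dict class_id -> [n_ref, d] tensor)."""
    ensemble = []
    for _ in range(n_copies):
        xc = (x + 0.01 * torch.randn_like(x)).detach().requires_grad_(True)
        for _ in range(steps):
            f = feature_net(xc)                                    # [1, d]
            scores = {k: poly_kernel(f, r).mean() for k, r in class_refs.items()}
            target = max(scores, key=lambda k: scores[k].item())   # best-matching class
            grad, = torch.autograd.grad(-scores[target], xc)       # increase similarity
            xc = (xc - lr * grad.sign()).detach().requires_grad_(True)
        ensemble.append(xc.detach())
    return ensemble

def hard_vote_classify(x, model, feature_net, class_refs):
    # Combine the undefended prediction with a hard vote over the transformed copies.
    votes = [model(x).argmax(dim=1)]
    for xc in transformed_ensemble(x, feature_net, class_refs):
        votes.append(model(xc).argmax(dim=1))
    return torch.stack(votes).mode(dim=0).values                   # majority class
```
Keeping the undefended network's own prediction in the voting pool, as the abstract describes, is what preserves most of the clean accuracy in this sketch.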
Related papers
- Unsupervised convolutional neural network fusion approach for change
detection in remote sensing images [1.892026266421264]
We introduce a completely unsupervised shallow convolutional neural network (USCNN) fusion approach for change detection.
Our model has three features: the entire training process is conducted in an unsupervised manner, the network architecture is shallow, and the objective function is sparse.
Experimental results on four real remote sensing datasets indicate the feasibility and effectiveness of the proposed approach.
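The entry gives only the three properties (unsupervised training, shallow architecture, sparse objective). The sketch below is a generic stand-in with those properties, not the paper's USCNN fusion: a shallow shared conv encoder over the two acquisition dates, trained without labels with an L1-sparse change-map term.
```python
# Generic sketch only; architecture and objective are illustrative guesses.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1),
)
decoder = nn.Conv2d(16, 3, 3, padding=1)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def unsupervised_step(img_t1, img_t2, lam=1e-2):
    f1, f2 = encoder(img_t1), encoder(img_t2)
    change_map = (f1 - f2).abs().mean(dim=1, keepdim=True)    # per-pixel change evidence
    recon = F.mse_loss(decoder(f1), img_t1) + F.mse_loss(decoder(f2), img_t2)
    loss = recon + lam * change_map.mean()                     # sparse change objective
    opt.zero_grad(); loss.backward(); opt.step()
    return change_map.detach()
```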
arXiv Detail & Related papers (2023-11-07T03:10:17Z) - Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
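The summary states that the AAT module learns spatial transformations that warp the original images; the AC-Former design itself is not given here, so the sketch below only shows the generic building block such a module relies on: a per-image affine matrix predicted by a small head and applied with PyTorch's `affine_grid`/`grid_sample`.
```python
# Minimal learnable-affine-warp sketch (not the actual AAT implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableAffineWarp(nn.Module):
    def __init__(self):
        super().__init__()
        self.predictor = nn.Sequential(
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(3 * 8 * 8, 6),          # 2x3 affine parameters per image
        )
        # Initialize to the identity transform.
        nn.init.zeros_(self.predictor[-1].weight)
        self.predictor[-1].bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, x):
        theta = self.predictor(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)   # warped images

warp = LearnableAffineWarp()
warped = warp(torch.randn(4, 3, 64, 64))   # same shape, differentiable warp
```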
arXiv Detail & Related papers (2023-10-22T02:27:02Z) - Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
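The entry only says that training data is diversified at the feature level. A hedged sketch of that general idea (with a fixed per-class diagonal covariance rather than the learnable augmentation distribution the paper proposes):
```python
# Sketch of feature-level augmentation; per-class diagonal covariance and a
# fixed strength are assumptions, not the paper's learned distribution.
import torch

def augment_features(features, labels, class_var, strength=0.5):
    """features: [B, d] penultimate-layer features
       labels:   [B] integer class ids
       class_var: [num_classes, d] running per-class feature variance."""
    noise = torch.randn_like(features)
    scale = (strength * class_var[labels]).sqrt()   # [B, d]
    return features + noise * scale                 # semantically plausible jitter

# Usage inside a training step (backbone and classifier head `fc` are assumed):
# logits = fc(augment_features(backbone(x), y, class_var))
```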
arXiv Detail & Related papers (2023-09-01T11:15:50Z) - Random Weights Networks Work as Loss Prior Constraint for Image
Restoration [50.80507007507757]
We present our belief that "Random Weights Networks can act as a Loss Prior Constraint for Image Restoration".
This prior can be inserted directly into existing networks without any additional training or testing computational cost.
To emphasize: our main aim is to spark renewed interest in loss-function design and rescue it from its currently neglected status.
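As a rough illustration of the stated idea, the sketch below uses a frozen, randomly initialized CNN as an extra feature-space loss term for a restoration network; the architecture, loss form, and weighting are assumptions.
```python
# Sketch: a frozen, never-trained CNN acts as a "random weights" feature prior.
import torch
import torch.nn as nn
import torch.nn.functional as F

random_net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
).eval()
for p in random_net.parameters():
    p.requires_grad_(False)          # never trained, so no extra training cost

def restoration_loss(restored, clean, lam=0.1):
    pixel = F.l1_loss(restored, clean)
    prior = F.l1_loss(random_net(restored), random_net(clean))
    return pixel + lam * prior       # plug-in term for an existing restoration model
```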
arXiv Detail & Related papers (2023-03-29T03:43:51Z) - Adversarial Sampling for Fairness Testing in Deep Neural Network [0.0]
We propose adversarial sampling to test the fairness of a deep neural network's predictions across different classes of images in a given dataset.
We train the model on the original images only, without training it on the perturbed or attacked images.
When the adversarial samples are fed to the model, it is still able to predict the original category/class of the image from which each adversarial sample was generated.
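A sketch of the testing loop this describes, using FGSM as a stand-in attack (the paper's exact sampling procedure is not specified in this summary) and per-class robust accuracy as a simple fairness signal:
```python
# Sketch only: FGSM is an assumed attack, and the fairness metric is illustrative.
import torch
import torch.nn.functional as F
from collections import defaultdict

def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def per_class_robust_accuracy(model, loader):
    hits, totals = defaultdict(int), defaultdict(int)
    for x, y in loader:
        pred = model(fgsm(model, x, y)).argmax(1)
        for yi, pi in zip(y.tolist(), pred.tolist()):
            totals[yi] += 1
            hits[yi] += int(yi == pi)
    # Large gaps between classes indicate class-dependent (unfair) robustness.
    return {c: hits[c] / totals[c] for c in totals}
```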
arXiv Detail & Related papers (2023-03-06T03:55:37Z) - ResMLP: Feedforward networks for image classification with
data-efficient training [73.26364887378597]
We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification.
We will share our code based on the Timm library and pre-trained models.
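For reference, a compact re-implementation of the ResMLP residual block (learned affine normalization, a cross-patch linear layer, and a per-patch channel MLP); the authors' released code is based on the Timm library, and the sizes below are placeholders.
```python
# Minimal ResMLP block sketch; illustrative only, not the released code.
import torch
import torch.nn as nn

class Affine(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))
    def forward(self, x):                      # x: [B, patches, dim]
        return self.alpha * x + self.beta

class ResMLPBlock(nn.Module):
    def __init__(self, dim, num_patches, hidden_ratio=4):
        super().__init__()
        self.norm1, self.norm2 = Affine(dim), Affine(dim)
        self.token_mix = nn.Linear(num_patches, num_patches)   # cross-patch mixing
        self.channel_mix = nn.Sequential(                      # per-patch MLP
            nn.Linear(dim, dim * hidden_ratio), nn.GELU(),
            nn.Linear(dim * hidden_ratio, dim),
        )
    def forward(self, x):
        x = x + self.token_mix(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        x = x + self.channel_mix(self.norm2(x))
        return x

out = ResMLPBlock(dim=384, num_patches=196)(torch.randn(2, 196, 384))  # [2, 196, 384]
```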
arXiv Detail & Related papers (2021-05-07T17:31:44Z) - Deep Features for training Support Vector Machine [16.795405355504077]
This paper develops a generic computer vision system based on features extracted from trained CNNs.
Multiple learned features are combined into a single structure to work on different image classification tasks.
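A sketch of the generic pipeline this describes: features taken from a pretrained CNN are fed to an SVM. The backbone, layer, and SVM settings below are illustrative choices, not the paper's.
```python
# Sketch: penultimate-layer features of a pretrained ResNet feed an SVM.
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()          # expose 512-d features
backbone.eval()

@torch.no_grad()
def extract(loader):
    feats, labels = [], []
    for x, y in loader:
        feats.append(backbone(x))
        labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# X_train, y_train = extract(train_loader)
# svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
# print(svm.score(*extract(test_loader)))
```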
arXiv Detail & Related papers (2021-04-08T03:13:09Z) - Learning degraded image classification with restoration data fidelity [0.0]
We investigate the influence of degradation types and levels on four widely-used classification networks.
We propose a novel method leveraging a fidelity map to calibrate the image features obtained by pre-trained networks.
Our results reveal that the proposed method is a promising solution to mitigate the effect caused by image degradation.
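The summary does not say how the fidelity map is produced or fused, so the sketch below assumes a small head that predicts a per-feature reliability score used to gate the frozen backbone features of a degraded image.
```python
# Sketch only: element-wise gating is an assumed fusion rule, not the paper's.
import torch
import torch.nn as nn

class FidelityCalibrated(nn.Module):
    def __init__(self, frozen_backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = frozen_backbone.eval()
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        self.fidelity_head = nn.Sequential(        # predicts per-feature reliability
            nn.Linear(feat_dim, feat_dim), nn.Sigmoid(),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, degraded_img):
        f = self.backbone(degraded_img)            # [B, feat_dim] frozen features
        fidelity = self.fidelity_head(f)           # [B, feat_dim] in (0, 1)
        return self.classifier(f * fidelity)       # calibrated features
```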
arXiv Detail & Related papers (2021-01-23T23:47:03Z) - Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
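A sketch of the stated idea of perturbing feature statistics rather than pixels: one signed-gradient step on per-channel mean/std multipliers, bounded by a budget. The released AdvBN layer may differ substantially; `head` (the rest of the network) and the budget are assumptions.
```python
# Sketch, not the released AdvBN layer: worst-case per-channel statistics.
import torch
import torch.nn.functional as F

def adv_feature_stats(features, head, labels, eps=0.1):
    """features: [B, C, H, W] intermediate activations; `head` maps them to logits."""
    B, C = features.shape[:2]
    mu = features.mean(dim=(2, 3), keepdim=True)
    sigma = features.std(dim=(2, 3), keepdim=True) + 1e-5
    normed = (features - mu) / sigma

    # Search for worst-case statistic multipliers on a detached copy.
    d_mu = torch.zeros(B, C, 1, 1, device=features.device, requires_grad=True)
    d_sigma = torch.zeros(B, C, 1, 1, device=features.device, requires_grad=True)
    probe = normed.detach() * sigma.detach() * (1 + d_sigma) + mu.detach() * (1 + d_mu)
    loss = F.cross_entropy(head(probe), labels)
    g_mu, g_sigma = torch.autograd.grad(loss, (d_mu, d_sigma))
    d_mu, d_sigma = eps * g_mu.sign(), eps * g_sigma.sign()

    # Re-apply the worst-case statistics to the live features for training.
    return normed * sigma * (1 + d_sigma) + mu * (1 + d_mu)
```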
arXiv Detail & Related papers (2020-09-18T17:52:34Z) - Learning to Learn Parameterized Classification Networks for Scalable
Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not exhibit predictable recognition behavior with respect to changes in input resolution.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
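A toy sketch of the meta-learner idea: a small hyper-network maps the input scale to the weights of one convolution, so the same layer adapts to different resolutions. The actual conditioning, the set of generated layers, and the on-the-fly distillation are not reproduced here.
```python
# Sketch: scale-conditioned convolution weights from a tiny hyper-network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleConditionedConv(nn.Module):
    def __init__(self, in_ch=3, out_ch=16, k=3):
        super().__init__()
        self.shape = (out_ch, in_ch, k, k)
        n_weights = out_ch * in_ch * k * k
        self.meta = nn.Sequential(                 # meta learner: scale -> weights
            nn.Linear(1, 64), nn.ReLU(),
            nn.Linear(64, n_weights + out_ch),
        )

    def forward(self, x):
        scale = torch.tensor([[x.shape[-1] / 224.0]], device=x.device)
        params = self.meta(scale).squeeze(0)
        w = params[: -self.shape[0]].view(self.shape)
        b = params[-self.shape[0]:]
        return F.conv2d(x, w, b, padding=1)

layer = ScaleConditionedConv()
for size in (128, 224, 320):                       # different input resolutions
    y = layer(torch.randn(2, 3, size, size))       # weights adapt to the scale
```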
arXiv Detail & Related papers (2020-07-13T04:27:25Z) - Verification of Deep Convolutional Neural Networks Using ImageStars [10.44732293654293]
Convolutional Neural Networks (CNN) have redefined the state-of-the-art in many real-world applications.
CNNs are vulnerable to adversarial attacks, where slight changes to their inputs may lead to sharp changes in their output.
We describe a set-based framework that successfully deals with real-world CNNs, such as VGG16 and VGG19, that have high accuracy on ImageNet.
arXiv Detail & Related papers (2020-04-12T00:37:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.