Defense against adversarial attacks on deep convolutional neural
networks through nonlocal denoising
- URL: http://arxiv.org/abs/2206.12685v1
- Date: Sat, 25 Jun 2022 16:11:25 GMT
- Title: Defense against adversarial attacks on deep convolutional neural
networks through nonlocal denoising
- Authors: Sandhya Aneja and Nagender Aneja and Pg Emeroylariffion Abas and Abdul
Ghani Naim
- Abstract summary: A nonlocal denoising method with different luminance values has been used to generate adversarial examples.
Under perturbation, the method provided absolute accuracy improvements of up to 9.3% in the MNIST data set.
We have shown that transfer learning is disadvantageous for adversarial machine learning.
- Score: 1.3484794751207887
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite substantial advances in network architecture performance, the
susceptibility to adversarial attacks makes deep learning challenging to
implement in safety-critical applications. This paper proposes a data-centric
approach to addressing this problem. A nonlocal denoising method with different
luminance values has been used to generate adversarial examples from the
Modified National Institute of Standards and Technology database (MNIST) and
Canadian Institute for Advanced Research (CIFAR-10) data sets. Under
perturbation, the method provided absolute accuracy improvements of up to 9.3%
in the MNIST data set and 13% in the CIFAR-10 data set. Training using
transformed images with higher luminance values increases the robustness of the
classifier. We have shown that transfer learning is disadvantageous for
adversarial machine learning. The results indicate that simple adversarial
examples can improve resilience and make deep learning easier to apply in
various applications.
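As a rough illustration of the data-centric approach described in the abstract, the sketch below applies OpenCV's non-local means denoiser at several filter strengths to MNIST-style grayscale images and keeps the transformed copies alongside the originals for training. The specific strengths, window sizes, and training pipeline are not given in the abstract, so every concrete value here is an assumption rather than the authors' configuration.

```python
# Hedged sketch: build non-locally denoised variants of grayscale images at
# several filter strengths and keep them next to the originals for training.
# The strengths, window sizes, and the reading of a swept h parameter as the
# "different luminance values" are assumptions, not the authors' exact setup.
import cv2
import numpy as np

def nonlocal_denoise_variants(image_u8, strengths=(5, 10, 15, 20)):
    """Return non-local means denoised copies of an HxW uint8 image,
    one per filter strength (larger h removes more luminance noise)."""
    return [
        cv2.fastNlMeansDenoising(image_u8, None, h, 7, 21)
        for h in strengths
    ]

def augment_with_denoised(images_u8):
    """Expand a batch of HxW uint8 images with their denoised variants so a
    classifier can be trained on both original and transformed inputs."""
    out = []
    for img in images_u8:
        out.append(img)
        out.extend(nonlocal_denoise_variants(img))
    return np.stack(out)

# Toy usage: one random 28x28 "MNIST-like" image becomes 5 training images.
dummy = (np.random.rand(28, 28) * 255).astype(np.uint8)
print(augment_with_denoised(dummy[None]).shape)  # (5, 28, 28)
```

Training the classifier on such an expanded set, including the more strongly denoised copies, is one plausible reading of the abstract's observation that transformed images with higher luminance values increase robustness.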
Related papers
- Towards Robust Out-of-Distribution Generalization: Data Augmentation and Neural Architecture Search Approaches [4.577842191730992]
We study ways toward robust OoD generalization for deep learning.
We first propose a novel and effective approach to disentangle the spurious correlation between features that are not essential for recognition.
We then study the problem of strengthening neural architecture search in OoD scenarios.
arXiv Detail & Related papers (2024-10-25T20:50:32Z)
- Nonlinear Transformations Against Unlearnable Datasets [4.876873339297269]
Automated scraping stands out as a common method for collecting data for deep learning models without the authorization of data owners.
Recent studies have begun to tackle the privacy concerns associated with this data collection method.
The data generated by those approaches, called "unlearnable" examples, are designed so that deep learning models cannot learn from them.
arXiv Detail & Related papers (2024-06-05T03:00:47Z)
- Improving Robustness to Model Inversion Attacks via Sparse Coding Architectures [4.962316236417777]
Recent model inversion attack algorithms permit adversaries to reconstruct a neural network's private and potentially sensitive training data by repeatedly querying the network.
We develop a novel network architecture that leverages sparse-coding layers to obtain superior robustness to this class of attacks.
arXiv Detail & Related papers (2024-03-21T18:26:23Z)
- SIRST-5K: Exploring Massive Negatives Synthesis with Self-supervised Learning for Robust Infrared Small Target Detection [53.19618419772467]
Single-frame infrared small target (SIRST) detection aims to recognize small targets against cluttered backgrounds.
With the development of Transformers, the scale of SIRST models keeps increasing.
With a rich diversity of infrared small target data, our algorithm significantly improves the model performance and convergence speed.
arXiv Detail & Related papers (2024-03-08T16:14:54Z)
- Leveraging Neural Radiance Fields for Uncertainty-Aware Visual Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, much of the rendered data is polluted by artifacts or contains only minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z)
- Evaluating Membership Inference Through Adversarial Robustness [6.983991370116041]
We propose an enhanced methodology for membership inference attacks based on adversarial robustness.
We evaluate our proposed method on three datasets: Fashion-MNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2022-05-14T06:48:47Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are the most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
- Mitigating the Impact of Adversarial Attacks in Very Deep Networks [10.555822166916705]
Deep Neural Network (DNN) models have vulnerabilities related to security concerns.
Data poisoning-enabled perturbation attacks are complex adversarial attacks that inject false data into models.
We propose an attack-agnostic-based defense method for mitigating their influence.
arXiv Detail & Related papers (2020-12-08T21:25:44Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by the Expectation-Maximization algorithm, an alternating back-propagation training algorithm is introduced to train the network and noise parameters in turn.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
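The Learn2Perturb entry above describes alternating optimization of network weights and injected-noise parameters. Below is a minimal, hedged PyTorch sketch of that general idea; the module layout, where the noise is injected, the learning rates, and the loss used in each phase are simplifications assumed for illustration and do not reproduce the paper's EM-inspired procedure.

```python
# Hedged sketch: learnable feature-perturbation scales trained in alternation
# with the network weights. Shapes, learning rates, and objectives are
# illustrative assumptions, not the Learn2Perturb authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyConv(nn.Module):
    """Conv block that adds zero-mean Gaussian noise with a learnable
    per-channel scale to its output features during training."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.noise_scale = nn.Parameter(torch.full((out_ch, 1, 1), 0.1))

    def forward(self, x):
        h = F.relu(self.conv(x))
        if self.training:
            h = h + self.noise_scale * torch.randn_like(h)
        return h

model = nn.Sequential(NoisyConv(1, 16), nn.Flatten(), nn.Linear(16 * 28 * 28, 10))
noise_params = [p for n, p in model.named_parameters() if "noise_scale" in n]
weight_params = [p for n, p in model.named_parameters() if "noise_scale" not in n]
opt_weights = torch.optim.SGD(weight_params, lr=0.01)
opt_noise = torch.optim.SGD(noise_params, lr=0.001)

def alternating_step(x, y, step):
    """Even steps: update weights to minimize the loss. Odd steps: update
    noise scales to increase the loss, so the injected perturbations keep
    challenging the network. The real method uses its own objectives and
    regularizers; this shared cross-entropy form is a simplification."""
    if step % 2 == 0:
        opt, loss = opt_weights, F.cross_entropy(model(x), y)
    else:
        opt, loss = opt_noise, -F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage on random data.
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
for step in range(4):
    alternating_step(x, y, step)
```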