Towards Robust Deep Learning with Ensemble Networks and Noisy Layers
- URL: http://arxiv.org/abs/2007.01507v2
- Date: Wed, 6 Jan 2021 18:05:57 GMT
- Title: Towards Robust Deep Learning with Ensemble Networks and Noisy Layers
- Authors: Yuting Liang, Reza Samavi
- Abstract summary: We provide an approach for deep learning that protects against adversarial examples in image classification-type networks.
The approach relies on two mechanisms: 1) a mechanism that increases robustness at the expense of accuracy, and 2) a mechanism that improves accuracy but does not always increase robustness.
- Score: 2.2843885788439793
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we provide an approach for deep learning that protects against adversarial examples in image classification-type networks. The approach relies on two mechanisms: 1) a mechanism that increases robustness at the expense of accuracy, and 2) a mechanism that improves accuracy but does not always increase robustness. We show that an approach combining the two mechanisms can provide protection against adversarial examples while retaining accuracy. We formulate potential attacks on our approach with experimental results to demonstrate its effectiveness. We also provide a robustness guarantee for our approach along with an interpretation for the guarantee.
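A minimal sketch of how the two mechanisms could fit together, assuming Gaussian noise injection for the noisy layers and softmax averaging for the ensemble; the architecture, noise scale, and ensemble size are illustrative placeholders, not the authors' exact design:
```python
import torch
import torch.nn as nn

class NoisyLayer(nn.Module):
    # Mechanism 1: inject zero-mean Gaussian noise on every forward pass,
    # trading some clean accuracy for robustness.
    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        return x + self.sigma * torch.randn_like(x)

class NoisyClassifier(nn.Module):
    # A deliberately tiny CNN carrying one noisy layer.
    def __init__(self, num_classes=10, sigma=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            NoisyLayer(sigma),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def ensemble_predict(models, x):
    # Mechanism 2: average softmax outputs across independently trained
    # noisy models to recover accuracy lost to the noise.
    probs = torch.stack([m(x).softmax(dim=-1) for m in models]).mean(dim=0)
    return probs.argmax(dim=-1)

models = [NoisyClassifier() for _ in range(5)]
x = torch.randn(8, 3, 32, 32)  # a batch of CIFAR-sized images
print(ensemble_predict(models, x))  # 8 predicted class indices
```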
Related papers
- Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume [17.198794644483026]
We propose a new metric termed adversarial hypervolume, assessing the robustness of deep learning models comprehensively over a range of perturbation intensities.
We adopt a novel training algorithm that enhances adversarial robustness uniformly across various perturbation intensities.
This research contributes a new measure of robustness and establishes a standard for benchmarking and assessing the resilience of current and future defensive models against adversarial threats.
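One plausible reading of such a metric, assuming a hypothetical eps-bounded `attack` helper (e.g. PGD) and trapezoidal integration of the robust-accuracy curve; the paper's exact definition may differ:
```python
import numpy as np
import torch

def robustness_hypervolume(model, attack, x, y, epsilons):
    # Measure robust accuracy at each perturbation budget, then integrate
    # the accuracy-vs-epsilon curve; a larger area means a more uniformly
    # robust model. `attack` is a placeholder for any eps-bounded attack.
    accs = []
    for eps in epsilons:
        x_adv = attack(model, x, y, eps)
        with torch.no_grad():
            accs.append((model(x_adv).argmax(dim=-1) == y).float().mean().item())
    return np.trapz(accs, epsilons)
```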
arXiv Detail & Related papers (2024-03-08T07:03:18Z)
- Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv Detail & Related papers (2024-01-03T13:58:35Z)
- PGN: A perturbation generation network against deep reinforcement learning [8.546103661706391]
We propose a novel generative model for creating effective adversarial examples to attack the agent.
Considering the specificity of deep reinforcement learning, we propose the action consistency ratio as a measure of stealthiness.
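A straightforward way such a ratio could be computed, assuming a policy network over discrete actions; this is a hypothetical sketch and the paper's exact definition may differ:
```python
import torch

def action_consistency_ratio(policy, states, perturbed_states):
    # Fraction of states on which the perturbation leaves the chosen action
    # unchanged; a high ratio means behavior changes are hard to spot
    # state-by-state, i.e. the attack is stealthier by this measure.
    with torch.no_grad():
        clean_actions = policy(states).argmax(dim=-1)
        adv_actions = policy(perturbed_states).argmax(dim=-1)
    return (clean_actions == adv_actions).float().mean().item()
```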
arXiv Detail & Related papers (2023-12-20T10:40:41Z)
- Boosting Adversarial Robustness using Feature Level Stochastic Smoothing [46.86097477465267]
Adversarial defenses have led to a significant improvement in the robustness of Deep Neural Networks.
In this work, we propose a generic method for introducing stochasticity in the network predictions.
We also utilize this smoothing to reject low-confidence predictions.
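A sketch of what prediction with stochastic smoothing plus a reject option could look like, assuming the model injects noise internally so that repeated passes differ; the sample count and threshold below are illustrative:
```python
import torch

def smoothed_predict(model, x, n_samples=16, reject_below=0.5):
    # Average class probabilities over several stochastic forward passes,
    # then abstain on inputs where the smoothed confidence stays low.
    with torch.no_grad():
        probs = torch.stack(
            [model(x).softmax(dim=-1) for _ in range(n_samples)]
        ).mean(dim=0)
    conf, pred = probs.max(dim=-1)
    pred[conf < reject_below] = -1  # -1 marks rejected low-confidence inputs
    return pred
```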
arXiv Detail & Related papers (2023-06-10T15:11:24Z)
- Adversarial Purification with the Manifold Hypothesis [14.085013765853226]
We develop an adversarial purification method with this framework.
Our approach can provide adversarial robustness even if attackers are aware of the existence of the defense.
arXiv Detail & Related papers (2022-10-26T01:00:57Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
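A hedged sketch of a self-similarity test of this flavor: embed a sample and a filtered copy of it, and flag the sample when the two embeddings disagree; the Gaussian-blur filter and threshold are illustrative choices, not necessarily the paper's:
```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def flag_adversarial(embed, x, threshold=0.9):
    # Clean samples tend to keep similar embeddings under mild filtering,
    # while adversarial perturbations are less stable to it.
    x_filtered = TF.gaussian_blur(x, kernel_size=5)
    sim = F.cosine_similarity(embed(x), embed(x_filtered), dim=-1)
    return sim < threshold  # True where the sample looks adversarial
```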
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA), which is trained to automatically align features of arbitrary attacking strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Adversarial Robustness under Long-Tailed Distribution [93.50792075460336]
Adversarial robustness has attracted extensive studies recently by revealing the vulnerability and intrinsic characteristics of deep networks.
In this work we investigate the adversarial vulnerability as well as defense under long-tailed distributions.
We propose a clean yet effective framework, RoBal, which consists of two dedicated modules: a scale-invariant classifier and data re-balancing.
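The "scale-invariant" module suggests a cosine-style classification head, where logits depend only on the angle between features and class weights; a generic sketch, not necessarily RoBal's exact formulation:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    # Normalizing both features and class weights removes their norms from
    # the logits, which helps under long-tailed class distributions where
    # head-class weight norms otherwise dominate.
    def __init__(self, in_dim, num_classes, scale=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, in_dim))
        self.scale = scale  # temperature; illustrative value

    def forward(self, feats):
        return self.scale * F.linear(F.normalize(feats, dim=-1),
                                     F.normalize(self.weight, dim=-1))
```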
arXiv Detail & Related papers (2021-04-06T17:53:08Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Hidden Cost of Randomized Smoothing [72.93630656906599]
In this paper, we point out the side effects of current randomized smoothing.
Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
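For context, the standard randomized-smoothing prediction rule the paper analyzes classifies many Gaussian-noised copies of an input and takes a majority vote; a bare-bones version (sigma and sample count illustrative):
```python
import torch

def smoothed_class(model, x, sigma=0.25, n=100):
    # Classify n noisy copies of a single input x and return the modal class;
    # the shrinking decision boundaries discussed above are a property of
    # exactly this voting classifier.
    noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)
    with torch.no_grad():
        votes = model(noisy).argmax(dim=-1)
    return votes.mode().values.item()
```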
arXiv Detail & Related papers (2020-03-02T23:37:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.