Robust Ensemble Model Training via Random Layer Sampling Against
Adversarial Attack
- URL: http://arxiv.org/abs/2005.10757v2
- Date: Wed, 27 Jan 2021 13:20:57 GMT
- Title: Robust Ensemble Model Training via Random Layer Sampling Against
Adversarial Attack
- Authors: Hakmin Lee, Hong Joo Lee, Seong Tae Kim, Yong Man Ro
- Abstract summary: We propose an ensemble model training framework with random layer sampling to improve the robustness of deep neural networks.
In the proposed training framework, we generate various sampled models through random layer sampling and update the weights of each sampled model.
After the ensemble models are trained, they can hide gradients efficiently and evade gradient-based attacks.
- Score: 38.1887818626171
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have achieved substantial success in several
computer vision areas, yet they remain vulnerable to adversarial examples that
are imperceptible to humans. This is an important issue for security and
medical applications. In this paper, we propose an ensemble model training
framework with random layer sampling to improve the robustness of deep neural
networks. In the proposed training framework, we generate various sampled
models through random layer sampling and update the weights of each sampled
model. After the ensemble models are trained, the random layer sampling method
can hide gradients efficiently and thus evade gradient-based attacks. To
evaluate the proposed method, comprehensive and comparative experiments have
been conducted on three datasets. Experimental results show that the proposed
method improves adversarial robustness.
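The sketch below is a minimal, hypothetical illustration of how random layer sampling could be realized, assuming that each layer position keeps several candidate copies and that every training step assembles a model by sampling one copy per position. The RandomLayerSampledMLP class, its layer sizes, and the number of candidates are illustrative choices, not the authors' implementation.

```python
# Hypothetical sketch of random layer sampling (RLS) ensemble training.
# Assumptions (not taken from the paper's code): each layer position keeps
# `n_candidates` parallel copies; every step samples one copy per position,
# runs a forward/backward pass, and updates only the sampled path.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomLayerSampledMLP(nn.Module):
    def __init__(self, dims=(784, 256, 256, 10), n_candidates=3):
        super().__init__()
        # For each layer position, store several candidate layers.
        self.positions = nn.ModuleList(
            nn.ModuleList(nn.Linear(dims[i], dims[i + 1]) for _ in range(n_candidates))
            for i in range(len(dims) - 1)
        )

    def sample_path(self):
        # Pick one candidate index per layer position.
        return [random.randrange(len(cands)) for cands in self.positions]

    def forward(self, x, path=None):
        path = path if path is not None else self.sample_path()
        for pos, (cands, idx) in enumerate(zip(self.positions, path)):
            x = cands[idx](x)
            if pos < len(self.positions) - 1:
                x = F.relu(x)
        return x

model = RandomLayerSampledMLP()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# One illustrative training step on random data.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
logits = model(x)            # forward pass through a randomly sampled path
loss = F.cross_entropy(logits, y)
opt.zero_grad()
loss.backward()              # only the sampled candidates receive gradients
opt.step()
```

Re-sampling a fresh path for every query at test time makes the gradient observed by a white-box attacker stochastic, which is one way to read the gradient-hiding effect described in the abstract.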
Related papers
- Improving Adversarial Robustness for 3D Point Cloud Recognition at Test-Time through Purified Self-Training [9.072521170921712]
3D point cloud deep learning models are vulnerable to adversarial attacks.
Adversarial purification employs a generative model to mitigate the impact of adversarial attacks.
We propose a test-time purified self-training strategy to achieve this objective.
arXiv Detail & Related papers (2024-09-23T11:46:38Z) - Importance Sampling for Stochastic Gradient Descent in Deep Neural
Networks [0.0]
Importance sampling for training deep neural networks has been widely studied.
This paper reviews the challenges inherent to this research area.
We propose a metric allowing the assessment of the quality of a given sampling scheme.
arXiv Detail & Related papers (2023-03-29T08:35:11Z) - Robust Binary Models by Pruning Randomly-initialized Networks [57.03100916030444]
We propose ways to obtain robust models against adversarial attacks from randomly-initialized binary networks.
We learn the structure of the robust model by pruning a randomly-initialized binary network.
Our method confirms the strong lottery ticket hypothesis in the presence of adversarial attacks.
arXiv Detail & Related papers (2022-02-03T00:05:08Z) - Adversarial Examples Detection with Bayesian Neural Network [57.185482121807716]
We propose a new framework to detect adversarial examples motivated by the observations that random components can improve the smoothness of predictors.
We propose a novel Bayesian adversarial example detector, short for BATer, to improve the performance of adversarial example detection.
arXiv Detail & Related papers (2021-05-18T15:51:24Z) - Jo-SRC: A Contrastive Approach for Combating Noisy Labels [58.867237220886885]
We propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency).
Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution.
arXiv Detail & Related papers (2021-03-24T07:26:07Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to strengthen the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Improving Adversarial Robustness by Enforcing Local and Global
Compactness [19.8818435601131]
Adversarial training is the most successful method that consistently resists a wide range of attacks.
We propose the Adversary Divergence Reduction Network which enforces local/global compactness and the clustering assumption.
The experimental results demonstrate that augmenting adversarial training with our proposed components can further improve the robustness of the network.
arXiv Detail & Related papers (2020-07-10T00:43:06Z) - Dropout Strikes Back: Improved Uncertainty Estimation via Diversity
Sampling [3.077929914199468]
We show that modifying the sampling distributions for dropout layers in neural networks improves the quality of uncertainty estimation.
Our main idea consists of two steps: computing data-driven correlations between neurons and generating samples that include maximally diverse neurons (a rough sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-03-06T15:20:04Z) - Regularizers for Single-step Adversarial Training [49.65499307547198]
We propose three types of regularizers that help to learn robust models using single-step adversarial training methods.
The regularizers mitigate the effect of gradient masking by harnessing properties that differentiate a robust model from a pseudo-robust one.
arXiv Detail & Related papers (2020-02-03T09:21:04Z) - Unseen Face Presentation Attack Detection Using Class-Specific Sparse
One-Class Multiple Kernel Fusion Regression [15.000818334408802]
The paper addresses face presentation attack detection in the challenging conditions of an unseen attack scenario.
A pure one-class face presentation attack detection approach based on kernel regression is developed.
arXiv Detail & Related papers (2019-12-31T11:53:20Z)
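As referenced in the "Dropout Strikes Back" entry above, the sketch below is a rough, hypothetical illustration of diversity sampling for dropout: masks are built from data-driven neuron correlations so that the retained neurons are as uncorrelated as possible. The greedy max-min selection, the diverse_dropout_mask helper, and all sizes are assumptions made for illustration; the paper's own sampling procedure may differ.

```python
# Illustrative sketch (not the paper's implementation): build dropout masks from
# data-driven neuron correlations so that retained neurons are maximally diverse,
# then use several such masks for MC-dropout-style uncertainty estimates.
import numpy as np

def diverse_dropout_mask(activations, keep=0.5, rng=None):
    """activations: (n_samples, n_neurons) hidden-layer outputs on held-out data."""
    rng = np.random.default_rng() if rng is None else rng
    n_neurons = activations.shape[1]
    corr = np.corrcoef(activations, rowvar=False)      # neuron-neuron correlation
    dissim = 1.0 - np.abs(np.nan_to_num(corr))         # dissimilarity in [0, 1]
    n_keep = max(1, int(keep * n_neurons))
    selected = [int(rng.integers(n_neurons))]          # random seed neuron
    while len(selected) < n_keep:
        # Greedy max-min: add the neuron least correlated with those already
        # kept, a simple stand-in for diversity sampling.
        min_dist = dissim[:, selected].min(axis=1)
        min_dist[selected] = -1.0
        selected.append(int(min_dist.argmax()))
    mask = np.zeros(n_neurons)
    mask[selected] = 1.0 / keep                        # inverted-dropout scaling
    return mask

# Usage: pretend activations from a 64-unit layer over 512 validation samples.
acts = np.random.randn(512, 64)
masks = [diverse_dropout_mask(acts, keep=0.5, rng=np.random.default_rng(i)) for i in range(10)]
# Applying each mask to the layer's output and averaging the resulting predictions
# gives an ensemble whose spread can serve as an uncertainty estimate.
```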
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.