Group-wise Inhibition based Feature Regularization for Robust
Classification
- URL: http://arxiv.org/abs/2103.02152v1
- Date: Wed, 3 Mar 2021 03:19:32 GMT
- Title: Group-wise Inhibition based Feature Regularization for Robust
Classification
- Authors: Haozhe Liu, Haoqian Wu, Weicheng Xie, Feng Liu and Linlin Shen
- Abstract summary: The vanilla convolutional neural network (CNN) is vulnerable to images with small variations.
We propose to dynamically suppress significant activation values of vanilla CNN by group-wise inhibition.
We also show that the proposed regularization method complements other defense paradigms, such as adversarial training.
- Score: 21.637459331646088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The vanilla convolutional neural network (CNN) is vulnerable to images with
small variations (e.g., corrupted and adversarial samples). One possible reason
is that the CNN pays more attention to the most discriminative regions but
ignores auxiliary features, leading to a lack of feature diversity. In our
method, we propose to dynamically suppress the significant activation values of
a vanilla CNN by group-wise inhibition, rather than fixing or randomly handling
them during training. Feature maps with different activation distributions are
then processed separately, owing to the independence of features. Under the
proposed regularization, the vanilla CNN is guided to hierarchically learn
richer discriminative features for robust classification. The proposed method
achieves a significant robustness gain of over 15% compared with the
state-of-the-art. We also show that the proposed regularization complements
other defense paradigms, such as adversarial training, to further improve
robustness.
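A minimal PyTorch sketch of how such group-wise inhibition could look, based only on the abstract (this is not the authors' released code; `num_groups` and `inhibit_scale` are hypothetical hyperparameters):

```python
import torch
import torch.nn as nn

class GroupwiseInhibition(nn.Module):
    """Suppress the most strongly activated channel group during training.

    A sketch based only on the abstract above, not the authors' code;
    `num_groups` and `inhibit_scale` are hypothetical hyperparameters.
    """

    def __init__(self, num_groups: int = 4, inhibit_scale: float = 0.1):
        super().__init__()
        self.num_groups = num_groups
        self.inhibit_scale = inhibit_scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:                 # identity at inference time
            return x
        n, c, h, w = x.shape
        g = self.num_groups
        assert c % g == 0, "channel count must be divisible by num_groups"
        grouped = x.view(n, g, c // g, h, w)
        # mean absolute activation per group -> (n, g)
        strength = grouped.abs().mean(dim=(2, 3, 4))
        # dynamically pick and inhibit the most significant group per sample
        top = strength.argmax(dim=1)
        scale = x.new_ones(n, g)
        scale[torch.arange(n, device=x.device), top] = self.inhibit_scale
        return (grouped * scale.view(n, g, 1, 1, 1)).view(n, c, h, w)
```

Placed after an intermediate convolutional block, such a layer re-selects the dominant group on every forward pass, matching the abstract's dynamic suppression (as opposed to fixing or randomly dropping activations).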
Related papers
- Rethinking Weak-to-Strong Augmentation in Source-Free Domain Adaptive Object Detection [38.596886094105216]
Source-Free domain adaptive Object Detection (SFOD) aims to transfer a detector (pre-trained on the source domain) to new unlabelled target domains.
This paper introduces a novel Weak-to-Strong Contrastive Learning (WSCoL) approach.
arXiv Detail & Related papers (2024-10-07T23:32:06Z)
- MOREL: Enhancing Adversarial Robustness through Multi-Objective Representation Learning [1.534667887016089]
Deep neural networks (DNNs) are vulnerable to slight adversarial perturbations.
We show that strong feature representation learning during training can significantly enhance the original model's robustness.
We propose MOREL, a multi-objective feature representation learning approach that encourages classification models to produce similar features for inputs within the same class despite perturbations.
arXiv Detail & Related papers (2024-10-02T16:05:03Z)
- Clustering Effect of (Linearized) Adversarial Robust Models [60.25668525218051]
We propose a novel understanding of adversarial robustness and apply it to further tasks, including domain adaptation and robustness boosting.
Experimental evaluations demonstrate the rationality and superiority of our proposed clustering strategy.
arXiv Detail & Related papers (2021-11-25T05:51:03Z)
- An Orthogonal Classifier for Improving the Adversarial Robustness of Neural Networks [21.13588742648554]
Recent efforts have shown that imposing certain modifications on the classification layer can improve the robustness of neural networks.
We explicitly construct a dense orthogonal weight matrix whose entries have the same magnitude, leading to a novel robust classifier (one such construction is sketched after this list).
Our method is efficient and competitive with many state-of-the-art defensive approaches.
arXiv Detail & Related papers (2021-05-19T13:12:14Z)
- CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection [186.34889055196925]
We investigate the adversarial robustness of CNNs from the perspective of channel-wise activations.
We observe that adversarial training (AT) robustifies CNNs by aligning the channel-wise activations of adversarial data with those of their natural counterparts.
We introduce a novel mechanism, i.e., Channel-wise Importance-based Feature Selection (CIFS).
arXiv Detail & Related papers (2021-02-10T08:16:43Z)
- DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial Estimation [109.11580756757611]
Deep ensembles perform better than a single network thanks to the diversity among their members.
Recent approaches regularize predictions to increase diversity; however, they also drastically decrease individual members' performances.
We introduce a novel training criterion called DICE: it increases diversity by reducing spurious correlations among features.
arXiv Detail & Related papers (2021-01-14T10:53:26Z)
- DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles [20.46399318111058]
Adversarial attacks can mislead CNN models with small perturbations, which can effectively transfer between different models trained on the same dataset.
We propose DVERGE, which isolates the adversarial vulnerability in each sub-model by distilling non-robust features.
The novel diversity metric and training procedure enable DVERGE to achieve higher robustness against transfer attacks.
arXiv Detail & Related papers (2020-09-30T14:57:35Z)
- ACDC: Weight Sharing in Atom-Coefficient Decomposed Convolution [57.635467829558664]
We introduce a structural regularization across the convolutional kernels in a CNN (the decomposition is sketched after this list).
We show that CNNs maintain performance with a dramatic reduction in parameters and computation.
arXiv Detail & Related papers (2020-09-04T20:41:47Z)
- Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder [11.701729403940798]
We propose an attack-agnostic defence framework to enhance the intrinsic robustness of neural networks.
Our framework applies to all block-based convolutional neural networks (CNNs).
arXiv Detail & Related papers (2020-05-06T01:40:26Z)
- Regularizing Class-wise Predictions via Self-knowledge Distillation [80.76254453115766]
We propose a new regularization method that penalizes the predictive distribution between similar samples.
This results in regularizing the dark knowledge (i.e., the knowledge on wrong predictions) of a single network.
Our experimental results on various image classification tasks demonstrate that this simple yet powerful method can significantly improve generalization (the regularizer is sketched after this list).
arXiv Detail & Related papers (2020-03-31T06:03:51Z)
- Hold me tight! Influence of discriminative features on deep network boundaries [63.627760598441796]
We propose a new perspective that relates dataset features to the distance of samples to the decision boundary.
This enables us to carefully tweak the position of the training samples and measure the induced changes on the boundaries of CNNs trained on large-scale vision datasets.
arXiv Detail & Related papers (2020-02-15T09:29:36Z)
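For the orthogonal-classifier entry, one standard way to obtain a dense orthogonal matrix with equal-magnitude entries is a normalized Hadamard matrix. The sketch below is our illustration, not the paper's code; it assumes the feature dimension is a power of two, is at least the number of classes, and that the layer is kept fixed.

```python
import torch
import torch.nn as nn

def hadamard(n: int) -> torch.Tensor:
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    assert n > 0 and n & (n - 1) == 0, "n must be a power of 2"
    h = torch.ones(1, 1)
    while h.shape[0] < n:
        h = torch.cat([torch.cat([h, h], dim=1),
                       torch.cat([h, -h], dim=1)], dim=0)
    return h

class OrthogonalClassifier(nn.Module):
    """Fixed dense classification layer: rows are orthonormal and every
    entry has magnitude 1/sqrt(feat_dim). Requires feat_dim >= num_classes."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        w = hadamard(feat_dim)[:num_classes] / feat_dim ** 0.5
        self.register_buffer("weight", w)     # kept fixed, not trained

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return features @ self.weight.t()     # (batch, num_classes) logits
```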
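The ACDC entry describes sharing weights across convolutional kernels through an atom-coefficient decomposition. A minimal sketch of that idea follows; the dictionary size, initialization, and class name are our illustrative choices, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AtomCoeffConv2d(nn.Module):
    """Every kernel is a linear combination of a small shared dictionary
    of spatial atoms, so parameters scale with the dictionary size."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3, num_atoms: int = 8):
        super().__init__()
        self.atoms = nn.Parameter(torch.randn(num_atoms, k, k))          # shared
        self.coeff = nn.Parameter(torch.randn(out_ch, in_ch, num_atoms) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # assemble the kernels on the fly: (out, in, k, k)
        w = torch.einsum('oin,nkl->oikl', self.coeff, self.atoms)
        return F.conv2d(x, w, padding=w.shape[-1] // 2)
```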
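Finally, the class-wise self-knowledge-distillation entry can be read as a KL penalty between the predictive distributions of two different samples of the same class. The sketch below assumes the usual distillation conventions (detached teacher side, temperature scaling); the temperature value and loss weight are illustrative.

```python
import torch
import torch.nn.functional as F

def class_wise_reg(logits_a: torch.Tensor, logits_b: torch.Tensor,
                   temperature: float = 4.0) -> torch.Tensor:
    """KL between predictions of two same-class batches; `logits_b`
    is detached so it acts as a soft teacher for `logits_a`."""
    teacher = F.softmax(logits_b.detach() / temperature, dim=1)
    student = F.log_softmax(logits_a / temperature, dim=1)
    return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2

# Hypothetical usage: logits_b comes from different images of the same
# classes as logits_a, and the weight 1.0 is an illustrative choice.
# loss = F.cross_entropy(logits_a, labels) + 1.0 * class_wise_reg(logits_a, logits_b)
```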