Improving the Robustness of Deep Convolutional Neural Networks Through
Feature Learning
- URL: http://arxiv.org/abs/2303.06425v1
- Date: Sat, 11 Mar 2023 15:22:29 GMT
- Title: Improving the Robustness of Deep Convolutional Neural Networks Through
Feature Learning
- Authors: Jin Ding, Jie-Chao Zhao, Yong-Zhi Sun, Ping Tan, Ji-En Ma, You-Tong
Fang
- Abstract summary: Deep convolutional neural network (DCNN for short) models are vulnerable to examples with small perturbations.
Adversarial training (AT for short) is a widely used approach to enhance the robustness of DCNN models by data augmentation.
This paper proposes a shallow binary feature module (SBFM for short) which can be integrated into any popular backbone.
- Score: 23.5067878531607
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep convolutional neural network (DCNN for short) models are vulnerable to
examples with small perturbations. Adversarial training (AT for short) is a
widely used approach to enhance the robustness of DCNN models by data
augmentation. In AT, the DCNN models are trained with clean examples and
adversarial examples (AE for short) which are generated using a specific attack
method, aiming to gain ability to defend themselves when facing the unseen AEs.
However, in practice, the trained DCNN models are often fooled by AEs
generated by novel attack methods. This naturally raises a question: can a
DCNN model learn features that are insensitive to small perturbations and
thereby defend itself no matter what attack method is presented? To
answer this question, this paper makes an initial effort by proposing a
shallow binary feature module (SBFM for short), which can be integrated into
any popular backbone. The SBFM comprises two types of layers, i.e., a Sobel
layer and a threshold layer. The Sobel layer holds four parallel fixed kernels
whose feature maps represent the horizontal, vertical, and two diagonal edge
features. The threshold layer turns the edge features learnt by the Sobel
layer into binary features, which are then fed, together with the features
learnt by the backbone, into the fully connected layers for classification. We integrate SBFM into
VGG16 and ResNet34, respectively, and conduct experiments on multiple datasets.
Experimental results demonstrate that, under the FGSM attack with
$\epsilon=8/255$, the SBFM-integrated models achieve on average 35\% higher
accuracy than the original ones, and reach an average classification accuracy
of 75\% on the CIFAR-10 and TinyImageNet datasets. The work in this paper
shows that it is promising to enhance the robustness of DCNN models through
feature learning.
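To make the module concrete, here is a minimal PyTorch sketch of the SBFM idea as the abstract describes it: a Sobel layer with four fixed parallel kernels, a threshold layer that binarizes the edge responses, and a head that concatenates the binary features with the backbone features for classification. The class names, kernel values, threshold, straight-through gradient, and pooling size are all illustrative assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelLayer(nn.Module):
    """Four fixed 3x3 edge kernels: horizontal, vertical, and two diagonals."""
    def __init__(self):
        super().__init__()
        h  = torch.tensor([[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]])
        v  = h.t().contiguous()
        d1 = torch.tensor([[0., 1., 2.], [-1., 0., 1.], [-2., -1., 0.]])
        d2 = torch.tensor([[2., 1., 0.], [1., 0., -1.], [0., -1., -2.]])
        # Frozen weights of shape (4, 1, 3, 3): this layer is not trained.
        self.register_buffer("weight", torch.stack([h, v, d1, d2]).unsqueeze(1))

    def forward(self, x):                    # x: (B, C, H, W)
        gray = x.mean(dim=1, keepdim=True)   # collapse channels before filtering
        return F.conv2d(gray, self.weight, padding=1)  # (B, 4, H, W)

class ThresholdLayer(nn.Module):
    """Binarize edge responses; a straight-through estimator is assumed here
    so gradients can still pass (the paper may handle this differently)."""
    def __init__(self, tau: float = 0.5):
        super().__init__()
        self.tau = tau

    def forward(self, x):
        hard = (x.abs() > self.tau).float()       # forward pass: binary features
        soft = torch.sigmoid(x.abs() - self.tau)  # backward pass: smooth surrogate
        return hard + soft - soft.detach()

class SBFMClassifier(nn.Module):
    """Backbone features concatenated with SBFM binary features before the FC head."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                # e.g. a VGG16 or ResNet34 trunk
        self.sobel, self.threshold = SobelLayer(), ThresholdLayer()
        self.pool = nn.AdaptiveAvgPool2d(8)     # fixed-size edge descriptor (assumed)
        self.fc = nn.Linear(feat_dim + 4 * 8 * 8, num_classes)

    def forward(self, x):
        edges = self.pool(self.threshold(self.sobel(x))).flatten(1)
        return self.fc(torch.cat([self.backbone(x), edges], dim=1))
```

A matching attack helper for the reported evaluation setting (FGSM with $\epsilon=8/255$, inputs assumed in [0, 1]) could look like this:

```python
def fgsm_attack(model, x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```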
Related papers
- MOREL: Enhancing Adversarial Robustness through Multi-Objective Representation Learning [1.534667887016089]
Deep neural networks (DNNs) are vulnerable to slight adversarial perturbations.
We show that strong feature representation learning during training can significantly enhance the original model's robustness.
We propose MOREL, a multi-objective feature representation learning approach, encouraging classification models to produce similar features for inputs within the same class, despite perturbations.
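As a rough illustration of the multi-objective idea (not MOREL's actual formulation), one can combine cross-entropy with a term that pulls the features of clean and perturbed views of the same inputs together; the cosine distance, the weight alpha, and the assumption that the model returns (features, logits) are all illustrative choices:

```python
import torch.nn.functional as F

def morel_style_loss(model, x_clean, x_adv, y, alpha=1.0):
    feat_c, logits_c = model(x_clean)  # model assumed to return (features, logits)
    feat_a, logits_a = model(x_adv)
    ce = F.cross_entropy(logits_c, y) + F.cross_entropy(logits_a, y)
    # Alignment proxy: clean and perturbed views of each input should map
    # to similar feature vectors despite the perturbation.
    align = 1.0 - F.cosine_similarity(feat_c, feat_a, dim=1).mean()
    return ce + alpha * align
```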
arXiv Detail & Related papers (2024-10-02T16:05:03Z)
- Edge Detectors Can Make Deep Convolutional Neural Networks More Robust [25.871767605100636]
This paper first employs edge detectors as layer kernels and designs a binary edge feature branch (BEFB) to learn binary edge features.
The accuracy of the BEFB integrated models is better than the original ones on all datasets when facing FGSM, PGD, and C&W attacks.
The work in this paper shows for the first time that it is feasible to enhance the robustness of DCNNs by combining shape-like features and texture features.
arXiv Detail & Related papers (2024-02-26T10:54:26Z) - A model for multi-attack classification to improve intrusion detection
performance using deep learning approaches [0.0]
The objective here is to create a reliable intrusion detection mechanism to help identify malicious attacks.
A deep learning based solution framework is developed, consisting of three approaches.
The first approach is a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) trained with seven optimizers, namely adamax, SGD, adagrad, adam, RMSprop, nadam, and adadelta.
The models self-learn the features and classify the attack classes in a multi-attack classification setting.
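The summary is specific enough for a small illustrative sketch: an LSTM classifier with the optimizer selected from the seven listed. The feature dimension, hidden size, and class count below are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, n_features=41, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # classify from the last time step

# The seven optimizers named in the summary, selectable by key.
OPTIMIZERS = {
    "adamax": torch.optim.Adamax, "sgd": torch.optim.SGD,
    "adagrad": torch.optim.Adagrad, "adam": torch.optim.Adam,
    "rmsprop": torch.optim.RMSprop, "nadam": torch.optim.NAdam,
    "adadelta": torch.optim.Adadelta,
}
model = LSTMClassifier()
optimizer = OPTIMIZERS["adamax"](model.parameters(), lr=1e-3)
```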
arXiv Detail & Related papers (2023-10-25T05:38:44Z)
- Layer-wise Linear Mode Connectivity [52.6945036534469]
Averaging neural network parameters is an intuitive method for fusing the knowledge of two independent models.
It is most prominently used in federated learning.
We analyse the performance of the models that result from averaging single layers, or groups of layers.
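A minimal sketch of the operation being analysed: averaging a chosen group of layers between two trained models of the same architecture. The 50/50 weighting and the name-prefix selection are illustrative assumptions.

```python
import torch

@torch.no_grad()
def average_layers(model_a, model_b, prefixes):
    """Average parameters whose names start with one of `prefixes`;
    keep all other parameters from model_a."""
    merged = {k: v.clone() for k, v in model_a.state_dict().items()}
    state_b = model_b.state_dict()
    for name in merged:
        if any(name.startswith(p) for p in prefixes):
            merged[name] = 0.5 * (merged[name] + state_b[name])
    return merged  # apply with model_a.load_state_dict(merged)
```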
arXiv Detail & Related papers (2023-07-13T09:39:10Z)
- A Gradient Boosting Approach for Training Convolutional and Deep Neural Networks [0.0]
We introduce two procedures for training Convolutional Neural Networks (CNNs) and Deep Neural Networks (DNNs) based on Gradient Boosting (GB).
The presented models show superior performance in terms of classification accuracy with respect to standard CNN and Deep-NN with the same architectures.
arXiv Detail & Related papers (2023-02-22T12:17:32Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose, yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- FOSTER: Feature Boosting and Compression for Class-Incremental Learning [52.603520403933985]
Deep neural networks suffer from catastrophic forgetting when learning new categories.
We propose a novel two-stage learning paradigm FOSTER, empowering the model to learn new categories adaptively.
arXiv Detail & Related papers (2022-04-10T11:38:33Z)
- Eigen-CAM: Class Activation Map using Principal Components [1.2691047660244335]
This paper builds on previous ideas to cope with the increasing demand for interpretable, robust, and transparent models.
The proposed Eigen-CAM computes and visualizes the principal components of the learned features/representations from the convolutional layers.
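A compact sketch of that computation, assuming PyTorch: project the activations of a chosen convolutional layer onto their first principal component. The centering step and min-max normalization are assumptions rather than the paper's exact recipe.

```python
import torch

@torch.no_grad()
def eigen_cam(activations: torch.Tensor) -> torch.Tensor:
    """activations: (C, H, W) from a chosen conv layer -> (H, W) saliency map."""
    c, h, w = activations.shape
    flat = activations.reshape(c, -1).t()          # (H*W, C): pixels as rows
    flat = flat - flat.mean(dim=0, keepdim=True)   # center before PCA
    _, _, vh = torch.linalg.svd(flat, full_matrices=False)
    cam = (flat @ vh[0]).reshape(h, w)             # projection on 1st component
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```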
arXiv Detail & Related papers (2020-08-01T17:14:13Z)
- When Residual Learning Meets Dense Aggregation: Rethinking the Aggregation of Deep Neural Networks [57.0502745301132]
We propose Micro-Dense Nets, a novel architecture with global residual learning and local micro-dense aggregations.
Our micro-dense block can be integrated with neural architecture search based models to boost their performance.
arXiv Detail & Related papers (2020-04-19T08:34:52Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product Belief Propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.