Understanding the Decision Boundary of Deep Neural Networks: An
Empirical Study
- URL: http://arxiv.org/abs/2002.01810v1
- Date: Wed, 5 Feb 2020 14:34:22 GMT
- Title: Understanding the Decision Boundary of Deep Neural Networks: An
Empirical Study
- Authors: David Mickisch, Felix Assion, Florens Greßner, Wiebke Günther, Mariele Motta
- Abstract summary: We study the minimum distance of data points to the decision boundary and how this margin evolves over the training of a deep neural network.
We observe that the decision boundary moves closer to natural images over training.
On the other hand, adversarial training appears to have the potential to prevent this undesired convergence of the decision boundary.
- Score: 0.4499833362998487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite achieving remarkable performance on many image classification tasks,
state-of-the-art machine learning (ML) classifiers remain vulnerable to small
input perturbations. In particular, the existence of adversarial examples raises
concerns about the deployment of ML models in safety- and security-critical
environments, like autonomous driving and disease detection. Over the last few
years, numerous defense methods have been published with the goal of improving
adversarial as well as corruption robustness. However, the proposed measures
succeeded only to a very limited extent. This limited progress is partly due to
the lack of understanding of the decision boundary and decision regions of deep
neural networks. Therefore, we study the minimum distance of data points to the
decision boundary and how this margin evolves over the training of a deep
neural network. By conducting experiments on MNIST, FASHION-MNIST, and
CIFAR-10, we observe that the decision boundary moves closer to natural images
over training. This phenomenon persists even in the late epochs of training,
where the classifier already achieves low training and test error rates. On the
other hand, adversarial training appears to have the potential to
prevent this undesired convergence of the decision boundary.
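The central quantity studied here is the minimum distance of a data point to the decision boundary. As a rough illustration of how such a margin can be estimated, and not the authors' measurement protocol, the sketch below upper-bounds the L2 margin by binary-searching the scale of a single gradient-sign direction until the predicted class flips. All names (margin_upper_bound, model, x, label, eps_max, steps) are assumptions for this sketch; model is any trained PyTorch classifier taking batched image tensors.

import torch
import torch.nn.functional as F

def margin_upper_bound(model, x, label, eps_max=10.0, steps=25):
    """Crude upper bound on the L2 distance from a single input x (C x H x W)
    to the decision boundary: binary-search the scale of one fixed
    gradient-sign direction until the predicted class flips. Sketch only."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), torch.tensor([label]))
    loss.backward()
    direction = x.grad.sign()                      # fixed perturbation direction
    with torch.no_grad():
        lo, hi = 0.0, eps_max
        for _ in range(steps):                     # shrink the bracket around the flip point
            mid = 0.5 * (lo + hi)
            pred = model((x + mid * direction).unsqueeze(0)).argmax(dim=1).item()
            if pred == label:
                lo = mid                           # still correctly classified: boundary is farther out
            else:
                hi = mid                           # class already flipped: boundary is closer
        still_correct = model((x + hi * direction).unsqueeze(0)).argmax(dim=1).item() == label
    if still_correct:
        return float("inf")                        # no class flip found along this direction within eps_max
    return hi * direction.norm().item()            # L2 length of the flipping perturbation

Tracking such an estimate for a fixed set of training and test images after each epoch gives a coarse view of whether the boundary drifts toward the data, which is the trend the paper reports on MNIST, FASHION-MNIST, and CIFAR-10.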
Related papers
- Understanding Human Activity with Uncertainty Measure for Novelty in Graph Convolutional Networks [2.223052975765005]
We introduce the Temporal Fusion Graph Convolutional Network.
It aims to rectify the inadequate boundary estimation of individual actions within an activity stream.
It also mitigates the issue of over-segmentation in the temporal dimension.
arXiv Detail & Related papers (2024-10-10T13:44:18Z)
- Computability of Classification and Deep Learning: From Theoretical Limits to Practical Feasibility through Quantization [53.15874572081944]
We study computability in the deep learning framework from two perspectives.
We show algorithmic limitations in training deep neural networks even in cases where the underlying problem is well-behaved.
Finally, we show that in quantized versions of classification and deep network training, computability restrictions do not arise or can be overcome to a certain degree.
arXiv Detail & Related papers (2024-08-12T15:02:26Z)
- Layer-Aware Analysis of Catastrophic Overfitting: Revealing the Pseudo-Robust Shortcut Dependency [61.394997313144394]
Catastrophic overfitting (CO) presents a significant challenge in single-step adversarial training (AT).
We show that during CO the earlier layers of the network are more susceptible, experiencing earlier and greater distortion, while the later layers show relative insensitivity.
Our proposed method, Layer-Aware Adversarial Weight Perturbation (LAP), can effectively prevent CO and further enhance robustness.
arXiv Detail & Related papers (2024-05-25T14:56:30Z)
- Improving classifier decision boundaries using nearest neighbors [1.8592384822257952]
We show that neural networks are not learning optimal decision boundaries.
We employ various self-trained and pre-trained convolutional neural networks to show that our approach improves (i) resistance to label noise, (ii) robustness against adversarial attacks, (iii) classification accuracy, and to some degree even (iv) interpretability.
arXiv Detail & Related papers (2023-10-05T22:11:52Z)
- On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error [65.51757376525798]
The existence of adversarial perturbations has opened an interesting research line on provable robustness.
However, no provable results have been presented to estimate and bound the error committed by these estimates.
This paper proposes two lightweight strategies to find the minimal adversarial perturbation.
The obtained results show that the proposed strategies approximate the theoretical distance for samples close to the classification boundary, leading to provable guarantees against any adversarial attack.
arXiv Detail & Related papers (2022-01-04T16:40:03Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep neural networks has been their fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach that learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Improving adversarial robustness of deep neural networks by using semantic information [17.887586209038968]
Adversarial training is the main method for improving adversarial robustness and the first line of defense against adversarial attacks (a generic sketch of one such training step is given after this list).
This paper provides a new perspective on adversarial robustness, shifting the focus from the network as a whole to the critical region close to the decision boundary of a given class.
Experimental results on the MNIST and CIFAR-10 datasets show that this approach greatly improves adversarial robustness even using a very small dataset from the training data.
arXiv Detail & Related papers (2020-08-18T10:23:57Z)
- Hold me tight! Influence of discriminative features on deep network boundaries [63.627760598441796]
We propose a new perspective that relates dataset features to the distance of samples to the decision boundary.
This enables us to carefully tweak the position of the training samples and measure the induced changes on the boundaries of CNNs trained on large-scale vision datasets.
arXiv Detail & Related papers (2020-02-15T09:29:36Z)
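Both the abstract above and several of the related papers point to adversarial training as the primary defense and as the mechanism that appears to keep the decision boundary away from the data. As a generic illustration, and not the specific procedure of any paper listed here, one L-infinity PGD adversarial-training step might look like the following sketch; the function name, model, optimizer, the (images, labels) batch, and the hyperparameters eps, alpha, and pgd_steps are all assumptions.

import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels,
                              eps=8 / 255, alpha=2 / 255, pgd_steps=7):
    """One generic L-infinity PGD adversarial-training step: craft adversarial
    examples for the batch, then update the model on them. Sketch only."""
    model.eval()
    adv = images.clone().detach()
    adv = (adv + torch.empty_like(adv).uniform_(-eps, eps)).clamp(0.0, 1.0)  # random start in the eps-ball
    for _ in range(pgd_steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + alpha * grad.sign()                     # ascend the loss
        adv = torch.min(torch.max(adv, images - eps), images + eps)  # project back onto the eps-ball
        adv = adv.clamp(0.0, 1.0)                                    # keep a valid pixel range
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv.detach()), labels)              # train on the adversarial batch
    loss.backward()
    optimizer.step()
    return loss.item()

Comparing margin estimates of a standardly trained network with those of a network trained this way, epoch by epoch, is the kind of comparison behind the observation that adversarial training can prevent the decision boundary from converging toward natural images.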