Boosting Adversarial Robustness using Feature Level Stochastic Smoothing
- URL: http://arxiv.org/abs/2306.06462v1
- Date: Sat, 10 Jun 2023 15:11:24 GMT
- Title: Boosting Adversarial Robustness using Feature Level Stochastic Smoothing
- Authors: Sravanti Addepalli, Samyak Jain, Gaurang Sriramanan, R. Venkatesh Babu
- Abstract summary: Advances in adversarial defenses have led to a significant improvement in the robustness of Deep Neural Networks.
In this work, we propose a generic method for introducing stochasticity in the network predictions.
We also utilize this for smoothing decision boundaries and rejecting low confidence predictions.
- Score: 46.86097477465267
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in adversarial defenses have led to a significant improvement in the
robustness of Deep Neural Networks. However, the robust accuracy of present
state-of-the-art defenses is far from the requirements in critical applications
such as robotics and autonomous navigation systems. Further, in practical use
cases, network prediction alone might not suffice, and assignment of a
confidence value for the prediction can prove crucial. In this work, we propose
a generic method for introducing stochasticity in the network predictions, and
utilize this for smoothing decision boundaries and rejecting low confidence
predictions, thereby boosting the robustness on accepted samples. The proposed
Feature Level Stochastic Smoothing based classification also results in a boost
in robustness without rejection over existing adversarial training methods.
Finally, we combine the proposed method with adversarial detection methods, to
achieve the benefits of both approaches.
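As a rough illustration of the idea (not the authors' exact algorithm), the sketch below injects Gaussian noise into the penultimate features of a generic PyTorch classifier, averages the softmax outputs over several noisy samples, and rejects inputs whose smoothed confidence falls below a threshold. The split into `feature_extractor` and `classifier_head`, the noise scale, and the threshold are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def smoothed_predict(feature_extractor, classifier_head, x,
                     num_samples=8, sigma=0.1, reject_threshold=0.7):
    # Extract deterministic features once, then inject Gaussian noise at the
    # feature level and average the resulting softmax outputs. Hyperparameter
    # values are illustrative, not taken from the paper.
    features = feature_extractor(x)                      # (batch, feat_dim)
    probs = torch.zeros(x.size(0), classifier_head.out_features,
                        device=x.device)
    for _ in range(num_samples):
        noisy = features + sigma * torch.randn_like(features)
        probs += F.softmax(classifier_head(noisy), dim=1)
    probs /= num_samples                                 # smoothed distribution
    confidence, prediction = probs.max(dim=1)
    accept = confidence >= reject_threshold              # flag low-confidence inputs
    return prediction, confidence, accept
```

In this setup, rejected samples would be deferred to a fallback (for instance, human review), so robust accuracy is reported on the accepted subset.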
Related papers
- Integrating uncertainty quantification into randomized smoothing based robustness guarantees [18.572496359670797]
Deep neural networks are vulnerable to adversarial attacks, which can cause hazardous incorrect predictions in safety-critical applications.
Certified robustness via randomized smoothing gives a probabilistic guarantee that the smoothed classifier's predictions will not change within an $\ell$-ball around a given input.
Uncertainty-based rejection is a technique often applied in practice to defend models against adversarial attacks.
We demonstrate that the novel framework allows for a systematic evaluation of different network architectures and uncertainty measures.
arXiv Detail & Related papers (2024-10-27T13:07:43Z)
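For context, the randomized smoothing construction cited above is commonly formalized (following Cohen et al., 2019) as a smoothed classifier $g$ built from a base classifier $f$ under Gaussian noise, with a certified $\ell_2$ radius $R$:

```latex
g(x) = \arg\max_{c}\; \mathbb{P}_{\varepsilon \sim \mathcal{N}(0,\,\sigma^{2} I)}\bigl[ f(x+\varepsilon) = c \bigr],
\qquad
R = \frac{\sigma}{2}\Bigl(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)\Bigr)
```

where $p_A$ and $p_B$ bound the probabilities of the two most likely classes under the noise and $\Phi^{-1}$ is the inverse standard Gaussian CDF.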
- Certified Human Trajectory Prediction [66.1736456453465]
Trajectory prediction plays an essential role in autonomous vehicles.
We propose a certification approach tailored for the task of trajectory prediction.
We address the inherent challenges associated with trajectory prediction, including unbounded outputs and multi-modality.
arXiv Detail & Related papers (2024-03-20T17:41:35Z)
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Dynamic ensemble selection based on Deep Neural Network Uncertainty Estimation for Adversarial Robustness [7.158144011836533]
This work explores dynamic attributes at the model level through dynamic ensemble selection.
In the training phase, a Dirichlet distribution is applied as the prior of the sub-models' predictive distributions, and a diversity constraint in parameter space is introduced.
In the test phase, sub-models are dynamically selected for the final prediction based on the rank of their uncertainty values.
arXiv Detail & Related papers (2023-08-01T07:41:41Z)
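The test-phase selection described above can be pictured with a minimal sketch: score each sub-model by a predictive-uncertainty measure and average only the most certain ones. Entropy is used here as a stand-in for the paper's Dirichlet-based uncertainty, and `sub_models` and `k` are hypothetical names.

```python
import torch

def dynamic_ensemble_predict(sub_models, x, k=3):
    # Stack each sub-model's softmax output: (num_models, batch, classes).
    probs = torch.stack([m(x).softmax(dim=1) for m in sub_models])
    # Mean predictive entropy per sub-model over the batch: (num_models,).
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=2).mean(dim=1)
    # Keep the k sub-models with the lowest uncertainty and average them.
    chosen = entropy.argsort()[:k]
    return probs[chosen].mean(dim=0)                     # (batch, classes)
```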
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge that limits the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results on image classification demonstrate the effectiveness of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- Uncertainty Surrogates for Deep Learning [17.868995105624023]
We introduce a novel way of estimating prediction uncertainty in deep networks through the use of uncertainty surrogates.
These surrogates are features of the penultimate layer of a deep network that are forced to match predefined patterns.
We show how our approach can be used for estimating uncertainty in prediction and out-of-distribution detection.
arXiv Detail & Related papers (2021-04-16T14:50:28Z)
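A hedged sketch of the surrogate idea above: if the penultimate features are trained to match predefined per-class patterns, the deviation from the best-matching pattern can serve as an uncertainty score. The pattern matrix and the cosine-distance choice below are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def surrogate_uncertainty(penultimate, class_patterns):
    # penultimate: (batch, dim); class_patterns: (num_classes, dim), the
    # hypothetical predefined target patterns for the penultimate layer.
    sims = F.cosine_similarity(penultimate.unsqueeze(1),
                               class_patterns.unsqueeze(0), dim=2)
    # Low similarity to every pattern => high uncertainty.
    return 1.0 - sims.max(dim=1).values                  # (batch,)
```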
- The Benefit of the Doubt: Uncertainty Aware Sensing for Edge Computing Platforms [10.86298377998459]
We propose an efficient framework for predictive uncertainty estimation in NNs deployed on embedded edge systems.
The framework is built from the ground up to provide predictive uncertainty based only on one forward pass.
Our approach not only obtains robust and accurate uncertainty estimations but also outperforms state-of-the-art methods in terms of systems performance.
arXiv Detail & Related papers (2021-02-11T11:44:32Z)
- Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration [2.8935588665357077]
We propose an efficient yet general modelling approach for obtaining well-calibrated, trustworthy probabilities for samples obtained after a domain shift.
We introduce a new training strategy combining an entropy-encouraging loss term with an adversarial calibration loss term and demonstrate that this results in well-calibrated and technically trustworthy predictions.
arXiv Detail & Related papers (2020-12-20T13:39:29Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- A general framework for defining and optimizing robustness [74.67016173858497]
We propose a rigorous and flexible framework for defining different types of robustness properties for classifiers.
Our concept is based on the postulate that the robustness of a classifier should be considered a property independent of accuracy.
We develop a very general robustness framework that is applicable to any type of classification model.
arXiv Detail & Related papers (2020-06-19T13:24:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.