Blurs Make Results Clearer: Spatial Smoothings to Improve Accuracy,
Uncertainty, and Robustness
- URL: http://arxiv.org/abs/2105.12639v1
- Date: Wed, 26 May 2021 15:58:11 GMT
- Title: Blurs Make Results Clearer: Spatial Smoothings to Improve Accuracy,
Uncertainty, and Robustness
- Authors: Namuk Park, Songkuk Kim
- Abstract summary: Bayesian neural networks (BNNs) have shown success in the areas of uncertainty estimation and robustness.
We propose spatial smoothing, a method that ensembles neighboring feature map points of CNNs.
By simply adding a few blur layers to the models, we empirically show that spatial smoothing improves the accuracy, uncertainty estimation, and robustness of BNNs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian neural networks (BNNs) have shown success in the areas of
uncertainty estimation and robustness. However, a crucial challenge prohibits
their use in practice: Bayesian NNs require a large number of predictions to
produce reliable results, leading to a significant increase in computational
cost. To alleviate this issue, we propose spatial smoothing, a method that
ensembles neighboring feature map points of CNNs. By simply adding a few blur
layers to the models, we empirically show that spatial smoothing improves the
accuracy, uncertainty estimation, and robustness of BNNs across a whole range
of ensemble sizes. In particular, BNNs incorporating spatial smoothing achieve
high predictive performance with merely a handful of ensembles. Moreover, this
method can also be applied to canonical deterministic neural networks to
improve their performance. Several lines of evidence suggest that the
improvements can be attributed to the smoothing and flattening of the loss
landscape. In addition, we provide a fundamental explanation for prior
techniques, namely global average pooling, pre-activation, and ReLU6, by
treating them as special cases of spatial smoothing. These techniques not only
enhance accuracy but also improve uncertainty estimation and robustness by
smoothing the loss landscape in the same manner as spatial smoothing. The code
is available at https://github.com/xxxnell/spatial-smoothing.
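To make the mechanism concrete, here is a minimal sketch of what such a blur layer could look like in PyTorch. The 3x3 smoothing kernel and the placement between stages are illustrative assumptions; the repository linked above may use a different configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Blur(nn.Module):
    """Fixed depthwise blur layer: averages each feature map point with its
    neighbors. A sketch only; the 3x3 tap kernel below is an assumption, not
    necessarily the exact kernel used in the paper's repository."""
    def __init__(self, channels: int):
        super().__init__()
        k = torch.tensor([1.0, 2.0, 1.0])
        k2d = torch.outer(k, k)
        k2d = k2d / k2d.sum()  # normalize so the layer is a weighted average
        # One copy of the kernel per channel, applied depthwise.
        self.register_buffer("weight", k2d.expand(channels, 1, 3, 3).clone())
        self.channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # "Same" padding keeps the spatial resolution unchanged.
        return F.conv2d(x, self.weight, padding=1, groups=self.channels)

# Usage: interleave blur layers between the stages of an existing CNN, e.g.
# model = nn.Sequential(stage1, Blur(64), stage2, Blur(128), stage3, Blur(256))
```

In this view, global average pooling is the limiting case in which the neighborhood grows to cover the entire feature map, consistent with the abstract's reading of it as a special case of spatial smoothing.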
Related papers
- Quantification of Uncertainties in Probabilistic Deep Neural Network by Implementing Boosting of Variational Inference [0.38366697175402226]
Boosted Bayesian Neural Networks (BBNN) is a novel approach that enhances neural network weight distribution approximations.
BBNN achieves 5% higher accuracy compared to conventional neural networks.
arXiv Detail & Related papers (2025-03-18T05:11:21Z)
- Explainable Bayesian deep learning through input-skip Latent Binary Bayesian Neural Networks [11.815986153374967]
This article advances LBBNNs by enabling covariates to skip to any succeeding layer or be excluded.
The input-skip LBBNN approach reduces network density significantly compared to standard LBBNNs, achieving over 99% reduction for small networks and over 99.9% for larger ones.
For example, on MNIST, we reached 97% accuracy and good calibration with just 935 weights, a state-of-the-art result for neural network compression.
arXiv Detail & Related papers (2025-03-13T15:59:03Z)
- ZOBNN: Zero-Overhead Dependable Design of Binary Neural Networks with Deliberately Quantized Parameters [0.0]
In this paper, we introduce a third advantage of very low-precision neural networks: improved fault-tolerance.
We investigate the impact of memory faults on state-of-the-art binary neural networks (BNNs) through comprehensive analysis.
We propose a technique to improve BNN dependability by restricting the range of float parameters through a novel, deliberate uniform quantization (see the sketch after this list).
arXiv Detail & Related papers (2024-07-06T05:31:11Z)
- Achieving Constraints in Neural Networks: A Stochastic Augmented Lagrangian Approach [49.1574468325115]
Regularizing Deep Neural Networks (DNNs) is essential for improving generalizability and preventing overfitting.
We propose a novel approach to DNN regularization by framing the training process as a constrained optimization problem.
We employ the Stochastic Augmented Lagrangian (SAL) method to achieve a more flexible and efficient regularization mechanism.
arXiv Detail & Related papers (2023-10-25T13:55:35Z)
- UPNet: Uncertainty-based Picking Deep Learning Network for Robust First Break Picking [6.380128763476294]
First break (FB) picking is a crucial step in determining subsurface velocity models.
Deep neural networks (DNNs) have been proposed to accelerate this process.
We introduce uncertainty quantification into the FB picking task and propose a novel uncertainty-based deep learning network called UPNet.
arXiv Detail & Related papers (2023-05-23T08:13:09Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
arXiv Detail & Related papers (2022-07-08T18:17:06Z)
- A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness [33.09831377640498]
We study approaches to improving the uncertainty properties of a single network, based on a single, deterministic representation.
We propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs (see the sketch after this list).
On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration, and out-of-domain detection.
arXiv Detail & Related papers (2022-05-01T05:46:13Z)
- Rethinking Feature Uncertainty in Stochastic Neural Networks for Adversarial Robustness [12.330036598899218]
A randomness technique named Stochastic Neural Networks (SNNs) has been proposed recently.
MFDV-SNN achieves a significant improvement over existing methods, which indicates that it is a simple but effective method to improve model robustness.
arXiv Detail & Related papers (2022-01-01T08:46:06Z)
- DEBOSH: Deep Bayesian Shape Optimization [48.80431740983095]
We propose a novel uncertainty-based method tailored to shape optimization.
It enables effective Bayesian optimization (BO) and increases the quality of the resulting shapes beyond that of state-of-the-art approaches.
arXiv Detail & Related papers (2021-09-28T11:01:42Z)
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
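For the ZOBNN entry above, the sketch below shows the general idea of restricting float parameters to a bounded, uniformly quantized range. The bounds and level count are hypothetical values for illustration; the paper's actual quantization scheme may differ.

```python
import torch

def restrict_and_quantize(t: torch.Tensor, lo: float = -1.0, hi: float = 1.0,
                          levels: int = 256) -> torch.Tensor:
    """Clamp a parameter tensor into [lo, hi] and snap it to a uniform grid.

    Bounding the representable range limits how far a corrupted value can
    stray, which is the fault-tolerance intuition described in the entry.
    """
    t = t.clamp(lo, hi)
    step = (hi - lo) / (levels - 1)
    return torch.round((t - lo) / step) * step + lo
```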
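For the SNGP entry above, the sketch below illustrates the two ingredients the summary names: spectral normalization on hidden layers to preserve input distances, and a Gaussian-process-style output head approximated with random Fourier features. Layer sizes, the RBF kernel approximation, and the plain linear readout are assumptions for illustration, not the authors' exact implementation (which also maintains a Laplace approximation of the output covariance).

```python
import math
import torch
import torch.nn as nn

class RandomFeatureGP(nn.Module):
    """GP output head approximated with random Fourier features (sketch)."""
    def __init__(self, in_dim: int, num_classes: int, num_features: int = 1024):
        super().__init__()
        # Fixed random projection: sqrt(2/D) * cos(Wx + b) approximates an RBF kernel.
        self.register_buffer("W", torch.randn(num_features, in_dim))
        self.register_buffer("b", 2 * math.pi * torch.rand(num_features))
        self.beta = nn.Linear(num_features, num_classes)  # learnable output weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        phi = math.sqrt(2.0 / self.W.shape[0]) * torch.cos(x @ self.W.t() + self.b)
        return self.beta(phi)

def sngp_mlp(in_dim: int, hidden: int, num_classes: int) -> nn.Sequential:
    # Spectral normalization bounds each layer's Lipschitz constant, so
    # distances in input space are approximately preserved in feature space.
    return nn.Sequential(
        nn.utils.spectral_norm(nn.Linear(in_dim, hidden)), nn.ReLU(),
        nn.utils.spectral_norm(nn.Linear(hidden, hidden)), nn.ReLU(),
        RandomFeatureGP(hidden, num_classes),
    )
```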