Target Detection on Hyperspectral Images Using MCMC and VI Trained
Bayesian Neural Networks
- URL: http://arxiv.org/abs/2308.06293v1
- Date: Fri, 11 Aug 2023 01:35:54 GMT
- Title: Target Detection on Hyperspectral Images Using MCMC and VI Trained
Bayesian Neural Networks
- Authors: Daniel Ries, Jason Adams, Joshua Zollweg
- Abstract summary: Bayesian neural networks (BNN) provide uncertainty quantification (UQ) for NN predictions and estimates.
We apply and compare MCMC- and VI-trained BNN in the context of target detection in hyperspectral imagery (HSI).
Both models are trained using out-of-the-box tools on a high-fidelity HSI target detection scene.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks (NN) have become nearly ubiquitous in image
classification, but in their standard form they produce point estimates, with no measure of
confidence. Bayesian neural networks (BNN) provide uncertainty quantification
(UQ) for NN predictions and estimates through the posterior distribution. As NN
are applied in more high-consequence applications, UQ is becoming a
requirement. BNN provide a solution to this problem by giving not only accurate
predictions and estimates but also an interval that contains reasonable values
with a desired probability. Despite their positive attributes, BNN are
notoriously difficult and time-consuming to train. Traditional Bayesian methods
use Markov Chain Monte Carlo (MCMC), but this is often brushed aside as being
too slow. The most common method is variational inference (VI) due to its fast
computation, but there are multiple concerns with its efficacy. We apply and
compare MCMC- and VI-trained BNN in the context of target detection in
hyperspectral imagery (HSI), where materials of interest can be identified by
their unique spectral signature. This is a challenging field due to the
numerous perturbing effects that practical collection of HSI has on measured spectra.
Both models are trained using out-of-the-box tools on a high-fidelity HSI
target detection scene. Both MCMC- and VI-trained BNN perform well overall at
target detection on a simulated HSI scene. This paper provides an example of
how to utilize the benefits of UQ, but also to increase awareness that
different training methods can give different results for the same model. If
sufficient computational resources are available, the best approach, rather than
the fastest or most efficient, should be used, especially for high-consequence
problems.
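As a concrete illustration of the comparison described above, here is a minimal
sketch of a one-hidden-layer BNN for per-pixel target detection, trained once
with MCMC (NUTS) and once with VI, using NumPyro. The architecture, priors, and
data shapes are illustrative assumptions, not the authors' actual model or scene.

```python
# Minimal sketch (not the authors' code): a one-hidden-layer BNN for
# per-pixel target detection, trained with MCMC (NUTS) and with VI (SVI).
import jax.numpy as jnp
import jax.random as random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS, SVI, Trace_ELBO, autoguide

def bnn(X, y=None, hidden=16):
    n_bands = X.shape[1]
    w1 = numpyro.sample("w1", dist.Normal(0.0, 1.0).expand([n_bands, hidden]).to_event(2))
    b1 = numpyro.sample("b1", dist.Normal(0.0, 1.0).expand([hidden]).to_event(1))
    w2 = numpyro.sample("w2", dist.Normal(0.0, 1.0).expand([hidden]).to_event(1))
    b2 = numpyro.sample("b2", dist.Normal(0.0, 1.0))
    logits = jnp.tanh(X @ w1 + b1) @ w2 + b2
    numpyro.sample("y", dist.Bernoulli(logits=logits), obs=y)

# Toy stand-in for HSI pixels: 200 pixels x 30 spectral bands.
X = random.normal(random.PRNGKey(0), (200, 30))
y = (X[:, 0] > 0).astype(jnp.int32)

# MCMC: asymptotically exact posterior samples, but slower.
mcmc = MCMC(NUTS(bnn), num_warmup=500, num_samples=500)
mcmc.run(random.PRNGKey(1), X, y)

# VI: fast mean-field Gaussian approximation to the same posterior.
guide = autoguide.AutoNormal(bnn)
svi = SVI(bnn, guide, numpyro.optim.Adam(0.01), Trace_ELBO())
svi_result = svi.run(random.PRNGKey(2), 2000, X, y)
```

Posterior predictive probabilities from the two fits can then be compared pixel
by pixel; disagreement between the resulting credible intervals is exactly the
kind of training-method discrepancy the paper warns about.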
Related papers
- Single-shot Bayesian approximation for neural networks (arXiv, 2023-08-24)
Deep neural networks (NNs) are known for their high prediction performance.
However, NNs are prone to yielding unreliable predictions in completely new situations, without indicating their uncertainty.
We present a single-shot MC dropout approximation that preserves the advantages of BNNs while being as fast as NNs.
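For context, plain MC dropout estimates uncertainty by keeping dropout active at
test time and averaging many stochastic forward passes; the single-shot method
approximates the resulting mean and variance analytically in a single pass. A
hedged PyTorch sketch of the multi-pass baseline it replaces (model and shapes
are hypothetical):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(30, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 2))

def mc_dropout_predict(model, x, n_samples=50):
    model.train()  # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(-1) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)  # predictive mean and uncertainty

mean, std = mc_dropout_predict(model, torch.randn(8, 30))
```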
- Random-Set Neural Networks (RS-NN) (arXiv, 2023-07-11)
We propose a novel Random-Set Neural Network (RS-NN) for classification.
RS-NN predicts belief functions rather than probability vectors over a set of classes.
It encodes the 'epistemic' uncertainty induced in machine learning by limited training sets.
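A toy illustration of the belief-function output (the hard-coded masses stand in
for what the network would predict; this is not the RS-NN architecture):

```python
# Mass is assigned to *sets* of classes, not single classes.
classes = ("target", "background", "shadow")
masses = {
    frozenset({"target"}): 0.5,
    frozenset({"target", "shadow"}): 0.3,  # ambiguity between two classes
    frozenset(classes): 0.2,               # total ignorance
}

def belief(A):        # evidence that *necessarily* supports A
    return sum(m for S, m in masses.items() if S <= frozenset(A))

def plausibility(A):  # evidence that *possibly* supports A
    return sum(m for S, m in masses.items() if S & frozenset(A))

print(belief({"target"}), plausibility({"target"}))  # belief 0.5, plausibility ~1.0
```

The gap between belief and plausibility of a class is a direct readout of the
epistemic uncertainty this line of work targets.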
- Masked Bayesian Neural Networks: Theoretical Guarantee and its Posterior Inference (arXiv, 2023-05-24)
We propose a new node-sparse BNN model which has good theoretical properties and is computationally feasible.
We prove that the posterior concentration rate to the true model is near minimax optimal and adaptive to the smoothness of the true model.
In addition, we develop a novel MCMC algorithm which makes the Bayesian inference of the node-sparse BNN model feasible in practice.
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks (arXiv, 2022-11-29)
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
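The core of interval bound propagation, which QA-IBP extends to quantized
arithmetic, is pushing an input box through each layer; for an affine layer the
box splits into a center and a radius term. A minimal NumPy sketch of plain
(non-quantized) IBP with made-up shapes:

```python
import numpy as np

def ibp_affine(l, u, W, b):
    # propagate the box [l, u] through x -> W @ x + b
    mu, r = (l + u) / 2.0, (u - l) / 2.0
    mu_out = W @ mu + b
    r_out = np.abs(W) @ r
    return mu_out - r_out, mu_out + r_out

def ibp_relu(l, u):
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# input box: an epsilon-ball around x in the infinity norm
x, eps = np.random.randn(30), 0.01
l, u = x - eps, x + eps
W1, b1 = np.random.randn(64, 30), np.zeros(64)
l, u = ibp_relu(*ibp_affine(l, u, W1, b1))
```

QA-IBP additionally accounts for the rounding that quantization introduces
inside these bounds; certification then checks that the lower bound of the true
class's output stays above the upper bounds of all other classes.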
- A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness (arXiv, 2022-05-01)
We study approaches to improve the uncertainty properties of a single network, based on a single, deterministic representation.
We propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs.
On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection.
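SNGP's two ingredients are (i) spectral normalization of the hidden layers,
which keeps the mapping roughly distance-preserving, and (ii) a Gaussian-process
output layer, commonly approximated with random Fourier features. A simplified
PyTorch sketch (layer sizes are assumptions, and the full method also fits a
Laplace covariance for the output layer, omitted here):

```python
import math
import torch
import torch.nn as nn

class RFFHead(nn.Module):
    """Random-Fourier-feature approximation of an RBF-kernel GP output layer."""
    def __init__(self, d_in, n_features=256, n_classes=2):
        super().__init__()
        self.register_buffer("W", torch.randn(n_features, d_in))
        self.register_buffer("b", 2 * math.pi * torch.rand(n_features))
        self.beta = nn.Linear(n_features, n_classes)

    def forward(self, h):
        phi = math.sqrt(2.0 / self.W.shape[0]) * torch.cos(h @ self.W.T + self.b)
        return self.beta(phi)

# Spectral normalization bounds how much each layer can distort distances.
backbone = nn.Sequential(
    nn.utils.spectral_norm(nn.Linear(30, 64)), nn.ReLU(),
    nn.utils.spectral_norm(nn.Linear(64, 64)), nn.ReLU(),
)
model = nn.Sequential(backbone, RFFHead(64))
logits = model(torch.randn(8, 30))
```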
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks (arXiv, 2022-04-01)
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
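An implicit layer defines its output as the fixed point of an equation such as
z = tanh(Wz + Ux + b), so reachability asks how a box of inputs maps to a box of
fixed points. A toy NumPy sketch of a monotone interval iteration (all matrices
are made up, and the small norm of W keeps the iteration contractive):

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.05 * rng.standard_normal((16, 16))  # small norm => contraction
U, b = rng.standard_normal((16, 8)), np.zeros(16)
Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
Up, Un = np.maximum(U, 0), np.minimum(U, 0)

xl, xu = -np.ones(8), np.ones(8)          # input box
zl, zu = np.zeros(16), np.zeros(16)
for _ in range(100):                      # iterate the interval map to its fixed point
    zl, zu = (np.tanh(Wp @ zl + Wn @ zu + Up @ xl + Un @ xu + b),
              np.tanh(Wp @ zu + Wn @ zl + Up @ xu + Un @ xl + b))
# [zl, zu] now over-approximates the reachable set of fixed points
```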
- Rethinking Nearest Neighbors for Visual Classification (arXiv, 2021-12-15)
k-NN is a lazy learning method that aggregates the distances between a test image and its top-k neighbors in a training set.
We adopt k-NN with pre-trained visual representations produced by either supervised or self-supervised methods in two steps.
Via extensive experiments on a wide range of classification tasks, our study reveals the generality and flexibility of k-NN integration.
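The recipe is simple: embed every image with a frozen pre-trained backbone, then
classify by voting among nearest neighbors in embedding space. A scikit-learn
sketch in which random 512-d vectors stand in for real backbone embeddings:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Stand-ins for embeddings from a frozen pre-trained backbone.
train_feats = np.random.randn(1000, 512)
train_labels = np.random.randint(0, 10, 1000)
test_feats = np.random.randn(5, 512)

knn = KNeighborsClassifier(n_neighbors=20, metric="cosine")
knn.fit(train_feats, train_labels)
print(knn.predict(test_feats))
```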
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (arXiv, 2021-02-17)
We present a novel guided learning paradigm that distills real-valued networks into binary networks over the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
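The guided calibration is, at its core, a distillation loss: the binary
(student) network is trained to match the real-valued (teacher) network's
prediction distribution. A generic sketch of such a KL-based loss in PyTorch
(the binarization machinery of actual 1-bit training is omitted):

```python
import torch
import torch.nn.functional as F

def distribution_calibration_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened prediction distributions
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

loss = distribution_calibration_loss(torch.randn(8, 10), torch.randn(8, 10))
```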
- Encoding the latent posterior of Bayesian Neural Networks for uncertainty quantification (arXiv, 2020-12-04)
We aim for efficient deep BNNs amenable to complex computer vision architectures.
We achieve this by leveraging variational autoencoders (VAEs) to learn the interaction and the latent distribution of the parameters at each network layer.
Our approach, Latent-Posterior BNN (LP-BNN), is compatible with the recent BatchEnsemble method, leading to highly efficient (in terms of computation and memory during both training and testing) ensembles.
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples (arXiv, 2020-04-20)
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
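A simplified sketch of the gradient-norm idea: score an input by the norm of the
loss gradient taken at the network's own prediction, since misclassified and
adversarial inputs tend to produce unusually large gradients. Details of the
actual GraN features differ, and the model here is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_norm_score(model, x):
    """Gradient norm of the loss at the model's own prediction for one input."""
    model.zero_grad()
    logits = model(x.unsqueeze(0))
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    sq = sum(p.grad.pow(2).sum() for p in model.parameters() if p.grad is not None)
    return sq.sqrt().item()

model = nn.Sequential(nn.Linear(30, 64), nn.ReLU(), nn.Linear(64, 2))
score = grad_norm_score(model, torch.randn(30))
```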
- Widening and Squeezing: Towards Accurate and Efficient QNNs (arXiv, 2020-02-03)
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of full-precision networks.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
This list is automatically generated from the titles and abstracts of the papers on this site.