URSABench: Comprehensive Benchmarking of Approximate Bayesian Inference
Methods for Deep Neural Networks
- URL: http://arxiv.org/abs/2007.04466v1
- Date: Wed, 8 Jul 2020 22:51:28 GMT
- Title: URSABench: Comprehensive Benchmarking of Approximate Bayesian Inference
Methods for Deep Neural Networks
- Authors: Meet P. Vadera, Adam D. Cobb, Brian Jalaian, Benjamin M. Marlin
- Abstract summary: Deep learning methods continue to improve in predictive accuracy on a wide range of application domains, but significant issues remain with their ability to quantify uncertainty and their robustness.
Recent advances in approximate Bayesian inference hold significant promise for addressing these concerns.
We describe initial work on the development of URSABench, an open-source suite of benchmarking tools for comprehensive assessment of approximate Bayesian inference methods.
- Score: 15.521736934292354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep learning methods continue to improve in predictive accuracy on a
wide range of application domains, significant issues remain with other aspects
of their performance including their ability to quantify uncertainty and their
robustness. Recent advances in approximate Bayesian inference hold significant
promise for addressing these concerns, but the computational scalability of
these methods can be problematic when applied to large-scale models. In this
paper, we describe initial work on the development of URSABench (the Uncertainty,
Robustness, Scalability, and Accuracy Benchmark), an open-source suite of
benchmarking tools for comprehensive assessment of approximate Bayesian
inference methods with a focus on deep learning-based classification tasks.
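
As a rough illustration of what assessing an approximate Bayesian inference method on a classification task can involve, the sketch below scores posterior predictive samples (e.g., drawn via MC dropout, SGHMC, or a deep ensemble) on accuracy, negative log-likelihood, and mean predictive entropy. This is a minimal sketch under assumed array shapes and a hypothetical helper name; it is not URSABench's actual API.

# Minimal sketch; shapes and names are illustrative assumptions, not URSABench's API.
import numpy as np

def evaluate_posterior_samples(probs, labels):
    """probs: (S, N, C) class probabilities from S approximate-posterior samples;
    labels: (N,) integer class labels."""
    mean_probs = probs.mean(axis=0)                                   # posterior predictive, (N, C)
    preds = mean_probs.argmax(axis=1)
    accuracy = (preds == labels).mean()
    nll = -np.log(mean_probs[np.arange(len(labels)), labels] + 1e-12).mean()
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)  # per-input predictive uncertainty
    return {"accuracy": accuracy, "nll": nll, "mean_entropy": entropy.mean()}

# Toy usage with random draws standing in for samples from an approximate posterior:
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=(30, 100))                    # 30 samples, 100 inputs, 10 classes
labels = rng.integers(0, 10, size=100)
print(evaluate_posterior_samples(probs, labels))

Wall-clock cost per posterior sample and the degradation of such metrics under distribution shift would correspond to the scalability and robustness axes of the benchmark, respectively.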
Related papers
- Bayesian Neural Scaling Law Extrapolation with Prior-Data Fitted Networks [100.13335639780415]
Scaling laws often follow a power law, and prior work has proposed several variants of power-law functions to predict scaling behavior at larger scales.
Existing methods mostly rely on point estimation and do not quantify uncertainty, which is crucial for real-world applications.
In this work, we explore a Bayesian framework based on Prior-data Fitted Networks (PFNs) for neural scaling law extrapolation.
arXiv Detail & Related papers (2025-05-29T03:19:17Z) - Enhancing Classification with Semi-Supervised Deep Learning Using Distance-Based Sample Weights [0.0]
This work proposes a semi-supervised framework that prioritizes training samples based on their proximity to test data.
Experiments on twelve benchmark datasets demonstrate significant improvements across key metrics, including accuracy, precision, and recall.
This framework provides a robust and practical solution for semi-supervised learning, with potential applications in domains such as healthcare and security.
arXiv Detail & Related papers (2025-05-20T13:29:04Z) - In-Context Parametric Inference: Point or Distribution Estimators? [66.22308335324239]
Our experiments indicate that amortized point estimators generally outperform posterior inference, though the latter remains competitive in some low-dimensional problems.
arXiv Detail & Related papers (2025-02-17T10:00:24Z) - A Review of Bayesian Uncertainty Quantification in Deep Probabilistic Image Segmentation [0.0]
Advancements in image segmentation play an integral role within the greater scope of Deep Learning-based computer vision.
Uncertainty quantification has been extensively studied within this context, enabling expression of model ignorance (epistemic uncertainty) or data ambiguity (aleatoric uncertainty) to prevent uninformed decision making.
This work provides a comprehensive overview of probabilistic segmentation by discussing fundamental concepts in uncertainty that govern advancements in the field and the application to various tasks.
arXiv Detail & Related papers (2024-11-25T13:26:09Z) - Efficient Nearest Neighbor based Uncertainty Estimation for Natural Language Processing Tasks [26.336947440529713]
Trustworthiness in model predictions is crucial for safety-critical applications in the real world.
Deep neural networks often suffer from issues in uncertainty estimation, such as miscalibration.
We propose $k$-Nearest Neighbor Uncertainty Estimation ($k$NN-UE), which uses not only the distances from the neighbors, but also the ratio of labels among the neighbors (an illustrative sketch appears after this list).
arXiv Detail & Related papers (2024-07-02T10:33:31Z) - Deep Learning-Based Object Pose Estimation: A Comprehensive Survey [73.74933379151419]
We discuss the recent advances in deep learning-based object pose estimation.
Our survey also covers multiple input data modalities, degrees-of-freedom of output poses, object properties, and downstream tasks.
arXiv Detail & Related papers (2024-05-13T14:44:22Z) - Tractable Function-Space Variational Inference in Bayesian Neural
Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z) - Uncertainty Estimation by Fisher Information-based Evidential Deep
Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($mathcalI$-EDL)
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z) - BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen
Neural Networks [50.15201777970128]
We propose BayesCap that learns a Bayesian identity mapping for the frozen model, allowing uncertainty estimation.
BayesCap is a memory-efficient method that can be trained on a small fraction of the original dataset.
We show the efficacy of our method on a wide variety of tasks with a diverse set of architectures.
arXiv Detail & Related papers (2022-07-14T12:50:09Z) - Quantitative performance evaluation of Bayesian neural networks [0.0]
Despite the growing literature about uncertainty in deep learning, the quality of the uncertainty estimates remains an open question.
In this work, we attempt to assess the performance of several algorithms on sampling and regression tasks.
arXiv Detail & Related papers (2022-06-08T06:56:50Z) - (De-)Randomized Smoothing for Decision Stump Ensembles [5.161531917413708]
Tree-based models are used in many high-stakes application domains such as finance and medicine.
We propose deterministic smoothing for decision stump ensembles.
We obtain deterministic robustness certificates, even jointly over numerical and categorical features.
arXiv Detail & Related papers (2022-05-27T11:23:50Z) - Evaluating Predictive Distributions: Does Bayesian Deep Learning Work? [45.290773422944866]
Posterior predictive distributions quantify uncertainties ignored by point estimates.
This paper introduces The Neural Testbed, which provides tools for the systematic evaluation of agents that generate such predictions.
arXiv Detail & Related papers (2021-10-09T18:54:02Z) - $\beta$-Cores: Robust Large-Scale Bayesian Data Summarization in the
Presence of Outliers [14.918826474979587]
The quality of classic Bayesian inference depends critically on whether observations conform with the assumed data generating model.
We propose a variational inference method that, in a principled way, can simultaneously scale to large datasets and remain robust in the presence of outliers.
We illustrate the applicability of our approach in diverse simulated and real datasets, and various statistical models.
arXiv Detail & Related papers (2020-08-31T13:47:12Z) - Provable tradeoffs in adversarially robust classification [96.48180210364893]
We develop and leverage new tools, including recent breakthroughs from probability theory on robust isoperimetry.
Our results reveal fundamental tradeoffs between standard and robust accuracy that grow when data is imbalanced.
arXiv Detail & Related papers (2020-06-09T09:58:19Z) - Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep
Learning [70.72363097550483]
In this study, we focus on in-domain uncertainty for image classification.
To provide more insight in this study, we introduce the deep ensemble equivalent score (DEE).
arXiv Detail & Related papers (2020-02-15T23:28:19Z)