Handling Epistemic and Aleatory Uncertainties in Probabilistic Circuits
- URL: http://arxiv.org/abs/2102.10865v1
- Date: Mon, 22 Feb 2021 10:03:15 GMT
- Title: Handling Epistemic and Aleatory Uncertainties in Probabilistic Circuits
- Authors: Federico Cerutti, Lance M. Kaplan, Angelika Kimmig, Murat Sensoy
- Abstract summary: We propose an approach that overcomes the independence assumption behind most approaches to a large class of probabilistic reasoning, including Bayesian networks and several probabilistic logics.
We provide an algorithm for Bayesian learning from sparse, albeit complete, observations.
Each leaf of such circuits is labelled with a beta-distributed random variable that provides us with an elegant framework for representing uncertain probabilities.
- Score: 18.740781076082044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When collaborating with an AI system, we need to assess when to trust its
recommendations. If we mistakenly trust it in regions where it is likely to
err, catastrophic failures may occur, hence the need for Bayesian approaches
for probabilistic reasoning in order to determine the confidence (or epistemic
uncertainty) in the probabilities in light of the training data. We propose an
approach to overcome the independence assumption behind most of the approaches
dealing with a large class of probabilistic reasoning that includes Bayesian
networks as well as several instances of probabilistic logic. We provide an
algorithm for Bayesian learning from sparse, albeit complete, observations, and
for deriving inferences and their confidences keeping track of the dependencies
between variables when they are manipulated within the unifying computational
formalism provided by probabilistic circuits. Each leaf of such circuits is
labelled with a beta-distributed random variable that provides us with an
elegant framework for representing uncertain probabilities. We achieve better
estimation of epistemic uncertainty than state-of-the-art approaches, including
highly engineered ones, while being able to handle general circuits and with
just a modest increase in the computational effort compared to using point
probabilities.
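The beta-labelled leaves described in the abstract admit a simple moment-propagation view. The sketch below is a hypothetical illustration of the setting only, not the authors' algorithm: it naively assumes the leaves are independent at every gate, which is exactly the assumption the paper sets out to remove, and all function names are invented for illustration.

```python
def beta_moments(alpha, beta):
    """Mean and variance of a Beta(alpha, beta) random variable."""
    mean = alpha / (alpha + beta)
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return mean, var

def product_node(children):
    """Product of independent children: E[XY] = E[X]E[Y] and
    E[(XY)^2] = E[X^2]E[Y^2], so the variance follows from the moments."""
    mean, second = 1.0, 1.0
    for m, v in children:
        mean *= m
        second *= v + m * m
    return mean, second - mean * mean

def sum_node(weighted_children):
    """Weighted sum with fixed weights: under independence,
    means and variances combine linearly (variances with w^2)."""
    mean = sum(w * m for w, (m, v) in weighted_children)
    var = sum(w * w * v for w, (m, v) in weighted_children)
    return mean, var

# Leaves learned Bayesian-style from sparse but complete counts:
# Beta(1 + successes, 1 + failures), i.e. a uniform prior plus data.
a = beta_moments(1 + 3, 1 + 1)   # 3 successes, 1 failure
b = beta_moments(1 + 1, 1 + 4)   # 1 success, 4 failures
mix = sum_node([(0.6, product_node([a, b])), (0.4, a)])
print(mix)  # (mean, variance) of the circuit's output probability
```

The variance that reaches the root is a crude stand-in for epistemic uncertainty: sparse counts give wide betas at the leaves, and the width survives through the circuit.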
Related papers
- Bayesian meta learning for trustworthy uncertainty quantification [3.683202928838613]
We propose, Trust-Bayes, a novel optimization framework for Bayesian meta learning.
We characterize the lower bounds of the probabilities of the ground truth being captured by the specified intervals.
We analyze the sample complexity with respect to the feasible probability for trustworthy uncertainty quantification.
arXiv Detail & Related papers (2024-07-27T15:56:12Z)
- BIRD: A Trustworthy Bayesian Inference Framework for Large Language Models [52.46248487458641]
Predictive models often need to work with incomplete information in real-world tasks.
Current large language models (LLM) are insufficient for such accurate estimations.
We propose BIRD, a novel probabilistic inference framework.
arXiv Detail & Related papers (2024-04-18T20:17:23Z)
- Distributionally Robust Skeleton Learning of Discrete Bayesian Networks [9.46389554092506]
We consider the problem of learning the exact skeleton of general discrete Bayesian networks from potentially corrupted data.
We propose to optimize the most adverse risk over a family of distributions within bounded Wasserstein distance or KL divergence to the empirical distribution.
We present efficient algorithms and show the proposed methods are closely related to the standard regularized regression approach.
arXiv Detail & Related papers (2023-11-10T15:33:19Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Understanding Approximation for Bayesian Inference in Neural Networks [7.081604594416339]
I explore approximate inference in Bayesian neural networks.
The expected utility of the approximate posterior can measure inference quality.
Continual and active learning set-ups pose challenges that have nothing to do with posterior quality.
arXiv Detail & Related papers (2022-11-11T11:31:13Z)
- Deep Probability Estimation [14.659180336823354]
We investigate probability estimation from high-dimensional data using deep neural networks.
We evaluate existing methods on the synthetic data as well as on three real-world probability estimation tasks.
arXiv Detail & Related papers (2021-11-21T03:55:50Z)
- A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification [1.90365714903665]
This hands-on introduction is aimed at a reader interested in the practical implementation of distribution-free UQ.
We will include many explanatory illustrations, examples, and code samples in Python, with PyTorch syntax.
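The distribution-free procedure this tutorial covers can be illustrated with a split conformal regression sketch. The code below is a generic, hypothetical illustration, not taken from the paper; the function name and data are invented.

```python
import math

def conformal_interval(cal_pred, cal_true, test_pred, alpha=0.1):
    """Split conformal prediction interval for regression:
    test_pred +/- the finite-sample (1 - alpha) quantile of the
    absolute residuals on a held-out calibration set."""
    scores = sorted(abs(p - t) for p, t in zip(cal_pred, cal_true))
    n = len(scores)
    # Conservative finite-sample quantile index: ceil((n+1)(1-alpha)) - 1
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    q = scores[k]
    return test_pred - q, test_pred + q
```

Under exchangeability of calibration and test points, the interval covers the true value with probability at least 1 - alpha, regardless of the underlying model or data distribution.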
arXiv Detail & Related papers (2021-07-15T17:59:50Z)
- Tractable Inference in Credal Sentential Decision Diagrams [116.6516175350871]
Probabilistic sentential decision diagrams are logic circuits where the inputs of disjunctive gates are annotated by probability values.
We develop the credal sentential decision diagrams, a generalisation of their probabilistic counterpart that allows for replacing the local probabilities with credal sets of mass functions.
For a first empirical validation, we consider a simple application based on noisy seven-segment display images.
arXiv Detail & Related papers (2020-08-19T16:04:34Z)
- Evaluating probabilistic classifiers: Reliability diagrams and score decompositions revisited [68.8204255655161]
We introduce the CORP approach, which generates provably statistically Consistent, Optimally binned, and Reproducible reliability diagrams in an automated way.
CORP is based on non-parametric isotonic regression and implemented via the pool-adjacent-violators (PAV) algorithm.
arXiv Detail & Related papers (2020-08-07T08:22:26Z)
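The pool-adjacent-violators algorithm mentioned in the last entry above can be sketched in a few lines. This is a generic PAV implementation for intuition, not the CORP authors' code: fitting a non-decreasing curve to binary outcomes ordered by forecast probability yields the recalibrated reliability curve.

```python
def pav(y):
    """Isotonic (non-decreasing) least-squares fit to the sequence y,
    obtained by repeatedly pooling adjacent violating blocks."""
    # Each block holds [mean, weight] for a pooled run of observations.
    blocks = [[v, 1] for v in y]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:      # violation: pool the pair
            m1, w1 = blocks[i]
            m2, w2 = blocks[i + 1]
            blocks[i] = [(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2]
            del blocks[i + 1]
            if i > 0:
                i -= 1                           # re-check the previous pair
        else:
            i += 1
    # Expand pooled blocks back to one fitted value per observation.
    fit = []
    for mean, weight in blocks:
        fit.extend([mean] * weight)
    return fit

# Binary outcomes ordered by increasing forecast probability:
print(pav([0, 1, 0, 0, 1, 1]))
```

Plotting the fitted values against the sorted forecasts gives a reliability diagram whose bins are chosen by the data rather than by hand, which is what makes the result reproducible.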
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.