Uncertain Evidence in Probabilistic Models and Stochastic Simulators
- URL: http://arxiv.org/abs/2210.12236v1
- Date: Fri, 21 Oct 2022 20:32:59 GMT
- Title: Uncertain Evidence in Probabilistic Models and Stochastic Simulators
- Authors: Andreas Munk, Alexander Mead and Frank Wood
- Abstract summary: We consider the problem of performing Bayesian inference in probabilistic models where observations are accompanied by uncertainty, referred to as `uncertain evidence'.
We explore how to interpret uncertain evidence, and by extension the importance of proper interpretation as it pertains to inference about latent variables.
We devise concrete guidelines on how to account for uncertain evidence and we provide new insights, particularly regarding consistency.
- Score: 80.40110074847527
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of performing Bayesian inference in probabilistic
models where observations are accompanied by uncertainty, referred to as
`uncertain evidence'. In many real-world scenarios, such uncertainty stems from
measurement errors associated with observable quantities in probabilistic
models. We explore how to interpret uncertain evidence, and by extension the
importance of proper interpretation as it pertains to inference about latent
variables. We consider a recently-proposed method `stochastic evidence' as well
as revisit two older methods: Jeffrey's rule and virtual evidence. We devise
concrete guidelines on how to account for uncertain evidence and we provide new
insights, particularly regarding consistency. To showcase the impact of
different interpretations of the same uncertain evidence, we carry out
experiments in which we compare inference results associated with each
interpretation.
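The contrast between interpretations can be made concrete in a toy discrete model. The sketch below is our own illustration (not code from the paper): it takes the same numbers q(x) and reads them once as the target marginal under Jeffrey's rule and once as a virtual-evidence likelihood, which generally yields different posteriors over the latent variable z.

```python
import numpy as np

# Toy model: latent z in {0,1}, observable x in {0,1}.
prior_z = np.array([0.5, 0.5])           # p(z)
lik = np.array([[0.9, 0.1],              # p(x|z=0)
                [0.2, 0.8]])             # p(x|z=1)

p_x = prior_z @ lik                              # marginal p(x)
p_z_given_x = (prior_z[:, None] * lik) / p_x     # p(z|x), columns indexed by x

q_x = np.array([0.3, 0.7])  # the "uncertain evidence" about x

# Jeffrey's rule: q_x is the desired post-update marginal over x,
# so p'(z) = sum_x p(z|x) q(x).
p_z_jeffrey = p_z_given_x @ q_x

# Virtual evidence: the same numbers read as a likelihood l(x) of a
# virtual observation, so p'(z) is proportional to sum_x p(z,x) l(x).
joint = prior_z[:, None] * lik * q_x
p_z_virtual = joint.sum(axis=1)
p_z_virtual /= p_z_virtual.sum()

print(p_z_jeffrey, p_z_virtual)  # the two interpretations disagree
```

Both updates are valid probability distributions, but they encode different assumptions about what the uncertain evidence means, which is exactly the interpretation question the paper examines.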
Related papers
- Detecting and Measuring Confounding Using Causal Mechanism Shifts [31.625339624279686]
The assumption of causal sufficiency is both unrealistic and empirically untestable.
Existing methods make strong parametric assumptions about the underlying causal generative process to guarantee the identifiability of confounding variables.
We propose a comprehensive approach for detecting and measuring confounding.
  (arXiv, 2024-09-26)
- Identifying Drivers of Predictive Aleatoric Uncertainty [2.5311562666866494]
We present a simple approach to explain predictive aleatoric uncertainties.
We estimate uncertainty as predictive variance by adapting a neural network with a Gaussian output distribution.
We quantify our findings with a nuanced benchmark analysis that includes real-world datasets.
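The core idea of predicting variance with a Gaussian output distribution can be sketched in a few lines. This minimal example is our own assumption of what such a model looks like (a linear mean head and a linear log-variance head trained on the Gaussian negative log-likelihood), not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heteroscedastic data: noise level grows with |x|.
x = rng.uniform(-1.0, 1.0, size=2000)
y = 2.0 * x + rng.normal(0.0, 0.1 + 0.5 * np.abs(x))

# Minimal "Gaussian-output model": linear mean head, linear
# log-variance head in |x|, trained by full-batch gradient descent
# on the Gaussian NLL = 0.5 * (log var + (y - mu)^2 / var) + const.
w_mu = b_mu = w_lv = b_lv = 0.0
lr = 0.02
for _ in range(10000):
    mu = w_mu * x + b_mu
    var = np.exp(w_lv * np.abs(x) + b_lv)
    r = y - mu
    g_mu = -r / var                     # dNLL/dmu
    g_lv = 0.5 * (1.0 - r**2 / var)     # dNLL/dlogvar
    w_mu -= lr * np.mean(g_mu * x)
    b_mu -= lr * np.mean(g_mu)
    w_lv -= lr * np.mean(g_lv * np.abs(x))
    b_lv -= lr * np.mean(g_lv)

# Predicted aleatoric std should increase with |x|.
def std_at(a):
    return np.exp(0.5 * (w_lv * a + b_lv))

print(w_mu, std_at(0.1), std_at(0.9))
```

The fitted log-variance head recovers that the noise is larger far from the origin, which is the kind of "driver of aleatoric uncertainty" the paper aims to explain.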
  (arXiv, 2023-12-12)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
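The abstention idea can be illustrated by a simple variance threshold. Note this sketch is only the thresholding skeleton under assumed names; the paper's actual procedure is a hypothesis test on the conditional variance that also accounts for the variance predictor's uncertainty.

```python
import numpy as np

def predict_or_abstain(mean_pred, var_pred, var_threshold):
    """Return predictions, abstaining (NaN) wherever the estimated
    conditional variance exceeds the threshold."""
    mean_pred = np.asarray(mean_pred, dtype=float)
    accept = np.asarray(var_pred) <= var_threshold
    return np.where(accept, mean_pred, np.nan), accept

# Abstain on the middle point, whose variance estimate is too high.
preds, accept = predict_or_abstain([1.0, 2.0, 3.0],
                                   [0.05, 0.60, 0.10],
                                   var_threshold=0.25)
```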
  (arXiv, 2023-09-28)
- Flexible Visual Recognition by Evidential Modeling of Confusion and Ignorance [25.675733490127964]
In real-world scenarios, typical visual recognition systems fail for two major reasons: misclassification between known classes and excusable misbehavior on unknown-class images.
To address these deficiencies, a flexible visual recognition system should dynamically predict multiple classes when it is unconfident between choices, and reject making a prediction when the input is entirely outside the training distribution.
In this paper, we propose to model these two sources of uncertainty explicitly with the theory of Subjective Logic.
  (arXiv, 2023-09-14)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Experiments on multiple datasets offer compelling support for our theoretical claims.
  (arXiv, 2023-06-09)
- Claim-Dissector: An Interpretable Fact-Checking System with Joint Re-ranking and Veracity Prediction [4.082750656756811]
We present Claim-Dissector: a novel latent variable model for fact-checking and analysis.
We disentangle the per-evidence relevance probability and its contribution to the final veracity probability in an interpretable way.
Despite its interpretable nature, our system achieves results competitive with the state of the art on the FEVER dataset.
  (arXiv, 2022-07-28)
- Excess risk analysis for epistemic uncertainty with application to variational inference [110.4676591819618]
We present a novel EU analysis in the frequentist setting, where data is generated from an unknown distribution.
We show a relation between the generalization ability and the widely used EU measurements, such as the variance and entropy of the predictive distribution.
We propose a new variational inference method that directly controls prediction and EU-evaluation performance based on PAC-Bayesian theory.
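The "variance and entropy of the predictive distribution" mentioned above have a standard ensemble-based instantiation, sketched below as our own illustration (not the paper's method): total predictive entropy decomposes into an expected per-member entropy (aleatoric) plus a mutual-information term (epistemic).

```python
import numpy as np

# Predictive distributions from an ensemble (rows: members, cols: classes).
probs = np.array([[0.7, 0.2, 0.1],
                  [0.4, 0.4, 0.2],
                  [0.6, 0.3, 0.1]])

mean_p = probs.mean(axis=0)                                # averaged predictive
total = -np.sum(mean_p * np.log(mean_p))                   # predictive entropy
aleatoric = (-np.sum(probs * np.log(probs), axis=1)).mean()  # expected entropy
epistemic = total - aleatoric                              # mutual information

print(total, aleatoric, epistemic)
```

Disagreement between ensemble members shows up entirely in the epistemic term, which is why such quantities serve as EU measurements.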
  (arXiv, 2022-06-02)
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is inferring the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
  (arXiv, 2022-01-27)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
  (arXiv, 2021-11-22)
- Getting a CLUE: A Method for Explaining Uncertainty Estimates [30.367995696223726]
We propose a novel method for interpreting uncertainty estimates from differentiable probabilistic models.
Our method, Counterfactual Latent Uncertainty Explanations (CLUE), indicates how to change an input, while keeping it on the data manifold.
  (arXiv, 2020-06-11)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.