Uncertainty Disentanglement with Non-stationary Heteroscedastic Gaussian Processes for Active Learning
- URL: http://arxiv.org/abs/2210.10964v1
- Date: Thu, 20 Oct 2022 02:18:19 GMT
- Title: Uncertainty Disentanglement with Non-stationary Heteroscedastic Gaussian Processes for Active Learning
- Authors: Zeel B Patel, Nipun Batra, Kevin Murphy
- Abstract summary: We propose a Non-stationary Heteroscedastic Gaussian process model which can be learned with gradient-based techniques.
We demonstrate the interpretability of the proposed model by separating the overall uncertainty into aleatoric (irreducible) and epistemic (model) uncertainty.
- Score: 10.757942829334057
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gaussian processes are Bayesian non-parametric models used in many areas. In this work, we propose a Non-stationary Heteroscedastic Gaussian process model which can be learned with gradient-based techniques. We demonstrate the interpretability of the proposed model by separating the overall uncertainty into aleatoric (irreducible) and epistemic (model) uncertainty. We illustrate the usefulness of the derived epistemic uncertainty on active learning problems. We demonstrate the efficacy of our model with various ablations on multiple datasets.
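As a concrete illustration of the decomposition above, here is a minimal NumPy sketch, not the authors' implementation: a GP regression model with input-dependent noise whose predictive variance splits into an epistemic term (from the kernel) and an aleatoric term (from the noise function), with the epistemic term selecting the next active-learning query. The stationary RBF kernel, the hand-picked noise_var function, and the fixed hyperparameters are all illustrative assumptions; the paper's model additionally makes the kernel non-stationary and fits everything by gradient-based optimization.

```python
import numpy as np

def rbf(X1, X2, lengthscale=0.3, variance=1.0):
    """Stationary RBF kernel k(x, x') = s^2 * exp(-||x - x'||^2 / (2 * l^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-d2 / (2 * lengthscale ** 2))

def noise_var(X):
    """Hypothetical input-dependent (heteroscedastic) noise variance."""
    return 0.05 + 0.5 * X[:, 0] ** 2

def gp_posterior(X_train, y_train, X_test):
    K = rbf(X_train, X_train) + np.diag(noise_var(X_train))
    K_s = rbf(X_train, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    # Epistemic (model) uncertainty: posterior variance of the latent function;
    # it shrinks wherever training data accumulate.
    epistemic = np.diag(rbf(X_test, X_test)) - np.sum(v ** 2, axis=0)
    # Aleatoric (irreducible) uncertainty: the noise level at the test inputs.
    aleatoric = noise_var(X_test)
    return mean, epistemic, aleatoric

rng = np.random.default_rng(0)
X_pool = np.linspace(-1, 1, 200)[:, None]
X_train = rng.uniform(-1, 1, size=(5, 1))
y_train = np.sin(3 * X_train[:, 0]) + rng.normal(0.0, np.sqrt(noise_var(X_train)))
mean, epi, ale = gp_posterior(X_train, y_train, X_pool)
x_next = X_pool[np.argmax(epi)]  # query where the model, not the noise, is uncertain
print("next query point:", x_next)
```

Ranking pool points by the epistemic term alone keeps the learner from repeatedly sampling regions that are merely noisy, which is the practical payoff of disentangling the two uncertainties.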
Related papers
- Learning Latent Space Dynamics with Model-Form Uncertainties: A Stochastic Reduced-Order Modeling Approach [0.0]
This paper presents a probabilistic approach to represent and quantify model-form uncertainties in the reduced-order modeling of complex systems.
The proposed method captures these uncertainties by expanding the approximation space through the randomization of the projection matrix.
The efficacy of the approach is assessed on canonical problems in fluid mechanics by identifying and quantifying the impact of model-form uncertainties on the inferred operators.
arXiv Detail & Related papers (2024-08-30T19:25:28Z)
- DiffHybrid-UQ: Uncertainty Quantification for Differentiable Hybrid Neural Modeling [4.76185521514135]
We introduce a novel method, DiffHybrid-UQ, for effective and efficient uncertainty propagation and estimation in hybrid neural differentiable models.
Specifically, our approach effectively discerns and quantifies both aleatoric uncertainties, arising from data noise, and epistemic uncertainties, resulting from model-form discrepancies and data sparsity.
arXiv Detail & Related papers (2023-12-30T07:40:47Z)
- Assessing the overall and partial causal well-specification of nonlinear additive noise models [4.13592995550836]
We aim to identify predictor variables for which we can infer the causal effect even in cases of such misspecifications.
We propose an algorithm for finite sample data, discuss its properties, and illustrate its performance on simulated and real data.
arXiv Detail & Related papers (2023-10-25T09:44:16Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one allows one to account not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor; a toy sketch of such an abstention rule follows this entry.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
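The following is a hypothetical sketch of such an abstention rule, not the paper's actual procedure: the conditional variance at a query point is estimated from squared residuals of the k nearest neighbours, and the decision accounts for the estimator's own uncertainty through a one-sided normal-approximation test. The function name, the kNN variance estimator, and the tolerance sigma0_sq are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def should_abstain(x, X, residuals, sigma0_sq=0.1, k=20, alpha=0.05):
    """Abstain at x if we cannot rule out that the conditional variance
    exceeds the tolerance sigma0_sq (one-sided test at level alpha)."""
    idx = np.argsort(np.abs(X - x))[:k]   # k nearest neighbours (1-D inputs)
    sq = residuals[idx] ** 2
    var_hat = sq.mean()                   # plug-in estimate of sigma^2(x)
    se = sq.std(ddof=1) / np.sqrt(k)      # uncertainty of that estimate
    z = (var_hat - sigma0_sq) / se        # test H0: sigma^2(x) <= sigma0_sq
    return z > norm.ppf(1 - alpha)        # rejecting H0 -> abstain

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, 500)
residuals = rng.normal(0.0, 0.1 + 0.4 * np.abs(X))  # noisier near the edges
print(should_abstain(0.0, X, residuals), should_abstain(0.9, X, residuals))
```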
- Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing such processes; a toy sketch of latent-dimension counting with a VAE follows this entry.
arXiv Detail & Related papers (2023-07-21T14:25:06Z)
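Purely as a hypothetical sketch (the architecture, toy data, and dimensions below are assumptions, not the paper's setup), one way such minimality shows up in practice is that the per-dimension KL term of a VAE collapses toward zero for latent dimensions the process does not need:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=50, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.dec(z)
        # Per-dimension KL(q(z|x) || N(0, I)): near-zero entries mark unused latents.
        kl = 0.5 * (mu ** 2 + logvar.exp() - 1 - logvar)
        return recon, kl

# Toy data: trajectories of a process governed by one hidden parameter.
theta = torch.rand(256, 1)                      # the single true parameter
t = torch.linspace(0, 1, 50)
x = torch.exp(-theta * 5 * t)                   # one trajectory per row

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    recon, kl = model(x)
    loss = ((recon - x) ** 2).sum(-1).mean() + kl.sum(-1).mean()
    loss.backward()
    opt.step()
print("per-dimension KL:", model(x)[1].mean(0))  # ideally ~1 clearly active dim
```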
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Evaluating Sensitivity to the Stick-Breaking Prior in Bayesian Nonparametrics [85.31247588089686]
We show that variational Bayesian methods can yield sensitivities with respect to parametric and nonparametric aspects of Bayesian models.
We provide both theoretical and empirical support for our variational approach to Bayesian sensitivity analysis.
arXiv Detail & Related papers (2021-07-08T03:40:18Z)
- Compositional Modeling of Nonlinear Dynamical Systems with ODE-based Random Features [0.0]
We present a novel, domain-agnostic approach to modeling nonlinear dynamical systems.
We use compositions of physics-informed random features, derived from ordinary differential equations.
We find that our approach achieves comparable performance to a number of other probabilistic models on benchmark regression tasks.
arXiv Detail & Related papers (2021-06-10T17:55:13Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Nonlinear Independent Component Analysis for Continuous-Time Signals [85.59763606620938]
We study the classical problem of recovering a multidimensional source process from observations of mixtures of this process.
We show that this recovery is possible for many popular models of processes (up to order and monotone scaling of their coordinates) if the mixture is given by a sufficiently differentiable, invertible function.
arXiv Detail & Related papers (2021-02-04T20:28:44Z)
- Bayesian differential programming for robust systems identification under uncertainty [14.169588600819546]
This paper presents a machine learning framework for Bayesian systems identification from noisy, sparse and irregular observations of nonlinear dynamical systems.
The proposed method takes advantage of recent developments in differentiable programming to propagate gradient information through ordinary differential equation solvers.
The use of sparsity-promoting priors enables the discovery of interpretable and parsimonious representations for the underlying latent dynamics; a toy differentiable-ODE-solver sketch follows this entry.
arXiv Detail & Related papers (2020-04-15T00:51:14Z)
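As a hedged illustration of the differentiable-programming ingredient only, and not the paper's Bayesian framework, the sketch below pushes gradients through a hand-rolled explicit Euler solver to fit a single ODE parameter by point estimation; the dynamics, observation indices, and optimizer settings are illustrative assumptions.

```python
import torch

def euler_solve(theta, x0, dt=0.01, steps=100):
    """Explicit Euler integration of dx/dt = -theta * x, kept differentiable."""
    xs = [x0]
    x = x0
    for _ in range(steps):
        x = x + dt * (-theta * x)
        xs.append(x)
    return torch.stack(xs)

t_obs = torch.tensor([10, 40, 80])                  # sparse observation indices
y_obs = torch.exp(-2.0 * 0.01 * t_obs.float()) + 0.01 * torch.randn(3)

theta = torch.tensor(0.5, requires_grad=True)       # initial guess (true value: 2.0)
opt = torch.optim.Adam([theta], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    traj = euler_solve(theta, torch.tensor(1.0))
    loss = ((traj[t_obs] - y_obs) ** 2).mean()      # gradients flow through the solver
    loss.backward()
    opt.step()
print("recovered theta:", theta.item())
```

Replacing the squared-error fit with a log-posterior that includes sparsity-promoting priors would move this toy in the direction the paper describes.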