L-C2ST: Local Diagnostics for Posterior Approximations in
Simulation-Based Inference
- URL: http://arxiv.org/abs/2306.03580v2
- Date: Mon, 9 Oct 2023 21:57:48 GMT
- Title: L-C2ST: Local Diagnostics for Posterior Approximations in
Simulation-Based Inference
- Authors: Julia Linhart, Alexandre Gramfort, Pedro L. C. Rodrigues
- Abstract summary: L-C2ST allows for a local evaluation of the posterior estimator at any given observation.
It offers theoretically grounded and easy-to-interpret diagnostics.
On standard SBI benchmarks, L-C2ST provides comparable results to C2ST and outperforms alternative local approaches.
- Score: 63.22081662149488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many recent works in simulation-based inference (SBI) rely on deep generative
models to approximate complex, high-dimensional posterior distributions.
However, evaluating whether or not these approximations can be trusted remains
a challenge. Most approaches evaluate the posterior estimator only in
expectation over the observation space. This limits their interpretability and
is not sufficient to identify for which observations the approximation can be
trusted or should be improved. Building upon the well-known classifier
two-sample test (C2ST), we introduce L-C2ST, a new method that allows for a
local evaluation of the posterior estimator at any given observation. It offers
theoretically grounded and easy to interpret -- e.g. graphical -- diagnostics,
and unlike C2ST, does not require access to samples from the true posterior. In
the case of normalizing flow-based posterior estimators, L-C2ST can be
specialized to offer better statistical power, while being computationally more
efficient. On standard SBI benchmarks, L-C2ST provides comparable results to
C2ST and outperforms alternative local approaches such as coverage tests based
on highest predictive density (HPD). We further highlight the importance of
local evaluation and the benefit of interpretability of L-C2ST on a challenging
application from computational neuroscience.
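To make the method concrete, here is a minimal sketch of the L-C2ST recipe in Python with scikit-learn. It is illustrative rather than the authors' reference implementation: `prior`, `simulator`, and `posterior_sampler` are hypothetical placeholders for problem-specific components, and the MSE-style statistic is only one of the variants discussed in the paper.

```python
# Minimal sketch of the L-C2ST idea (illustrative, not the paper's code).
# Class 0: pairs (theta, x) from the true joint, theta ~ prior, x ~ simulator.
# Class 1: pairs (theta_hat, x) with theta_hat ~ q(. | x), the estimator.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_lc2st(prior, simulator, posterior_sampler, n_cal=10_000):
    theta = prior(n_cal)                          # (n_cal, d_theta)
    x = simulator(theta)                          # (n_cal, d_x)
    theta_hat = posterior_sampler(x)              # one draw of q(.|x_i) per x_i
    features = np.vstack([np.hstack([theta, x]),
                          np.hstack([theta_hat, x])])
    labels = np.concatenate([np.zeros(n_cal), np.ones(n_cal)])
    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
    clf.fit(features, labels)
    return clf

def local_statistic(clf, posterior_sampler_at, x0, n_eval=1_000):
    # MSE-style statistic at x0: near 0 when q(.|x0) matches the true
    # posterior, since the classifier then cannot beat chance (prob. 1/2).
    theta_hat = posterior_sampler_at(x0, n_eval)  # (n_eval, d_theta)
    feats = np.hstack([theta_hat, np.tile(x0, (n_eval, 1))])
    probs = clf.predict_proba(feats)[:, 1]
    return float(np.mean((probs - 0.5) ** 2))
```

In practice one would calibrate this statistic against a null distribution (e.g. obtained under label permutation, as is standard for C2ST) before declaring the approximation trustworthy at a given observation.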
Related papers
- Quantifying Emergence in Large Language Models [31.608080868988825]
We propose a quantitative method for estimating the emergence of LLMs.
Inspired by emergentism in dynamics, we quantify the strength of emergence by comparing the entropy reduction of the macroscopic (semantic) level with that of the microscopic (token) level.
Our method demonstrates consistent behavior across a suite of LMs under both in-context learning (ICL) and natural-sentence settings.
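As a purely illustrative reading of this summary (the paper's exact metric may differ), the macro-versus-micro entropy comparison could be computed as follows:

```python
# Illustrative only: compare entropy reduction at a macroscopic (semantic)
# level with that at a microscopic (token) level; a larger macro-level drop
# is read as stronger emergence. All inputs are probability vectors.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                                  # ignore zero-mass outcomes
    return -np.sum(p * np.log2(p))

def emergence_strength(macro_before, macro_after, micro_before, micro_after):
    delta_macro = entropy(macro_before) - entropy(macro_after)
    delta_micro = entropy(micro_before) - entropy(micro_after)
    return delta_macro - delta_micro              # > 0: semantics organize faster
```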
arXiv Detail & Related papers (2024-05-21T09:12:20Z)
- Preconditioned Neural Posterior Estimation for Likelihood-free Inference [5.651060979874024]
We show in this paper that neural posterior estimation (NPE) methods are not guaranteed to be highly accurate, even on low-dimensional problems.
We propose preconditioned NPE (PNPE) and its sequential version (PSNPE), which use a short run of approximate Bayesian computation (ABC) to effectively eliminate regions of parameter space that produce large discrepancies between simulations and data.
We present comprehensive empirical evidence that this melding of neural and statistical SBI methods improves performance over a range of examples.
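A hedged sketch of the preconditioning step as described above; `prior_sample`, `simulator`, and the Euclidean discrepancy are placeholder choices, and the paper's acceptance rule may differ:

```python
# Hedged sketch of rejection-ABC preconditioning: keep only prior draws
# whose simulations land near the observed data; the survivors then define
# the region on which the neural posterior estimator is trained.
import numpy as np

def abc_precondition(prior_sample, simulator, x_obs, n_sim=50_000, keep_frac=0.01):
    theta = prior_sample(n_sim)                   # (n_sim, d_theta)
    x = simulator(theta)                          # (n_sim, d_x)
    dist = np.linalg.norm(x - x_obs, axis=1)      # placeholder discrepancy
    eps = np.quantile(dist, keep_frac)            # adaptive tolerance
    return theta[dist <= eps]                     # plausible parameters only
```

The accepted draws would then seed the training set (or a truncated proposal) for the neural estimator; the sequential variant repeats this refinement.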
arXiv Detail & Related papers (2024-04-21T07:05:38Z)
- Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC maps the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
It provides consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error, we enable end-to-end backpropagation.
The method is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference.
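One way such a relaxation could look, as an assumption-laden sketch rather than the authors' exact formulation: soften the hard "true parameter lies inside the alpha-credible region" indicator with sigmoids so the coverage error admits gradients and can be added to the usual NPE loss. `log_q` is a hypothetical handle to the estimator's log-density.

```python
# Assumption-laden sketch, not the paper's exact objective: a sigmoid
# relaxation of HPD coverage. theta is inside the alpha-HPD region iff the
# fraction of posterior samples with higher density than theta is <= alpha.
import torch

def soft_coverage_penalty(log_q, theta_true, theta_samples, alpha, temp=0.1):
    lq_true = log_q(theta_true)                       # (batch,)
    lq_samp = log_q(theta_samples)                    # (batch, n_samples)
    # soft fraction of samples whose density exceeds the true parameter's
    soft_rank = torch.sigmoid((lq_samp - lq_true.unsqueeze(1)) / temp).mean(dim=1)
    soft_inside = torch.sigmoid((alpha - soft_rank) / temp)  # relaxed indicator
    return (soft_inside.mean() - alpha) ** 2          # squared coverage error
```

Such a penalty would be weighted and added to the standard NPE training loss.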
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Online Bootstrap Inference with Nonconvex Stochastic Gradient Descent Estimator [0.0]
In this paper, we investigate the theoretical properties of stochastic gradient descent (SGD) for statistical inference in the context of nonconvex problems, which may contain multiple local minima.
We propose two bootstrap-based inferential procedures for this setting.
arXiv Detail & Related papers (2023-06-03T22:08:10Z)
- Neural Posterior Estimation with Differentiable Simulators [58.720142291102135]
We present a new method to perform Neural Posterior Estimation (NPE) with a differentiable simulator.
We demonstrate how gradient information helps constrain the shape of the posterior and improves sample-efficiency.
arXiv Detail & Related papers (2022-07-12T16:08:04Z)
- Training Discrete Deep Generative Models via Gapped Straight-Through Estimator [72.71398034617607]
We propose a Gapped Straight-Through (GST) estimator to reduce the variance without incurring resampling overhead.
This estimator is inspired by the essential properties of Straight-Through Gumbel-Softmax.
Experiments demonstrate that the proposed GST estimator enjoys better performance compared to strong baselines on two discrete deep generative modeling tasks.
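For context, the standard Straight-Through Gumbel-Softmax estimator that GST builds on looks roughly as follows; GST itself modifies this construction, so this sketch is background rather than the proposed method:

```python
# Standard Straight-Through Gumbel-Softmax (the estimator GST improves on),
# sketched for context; this is not the GST estimator itself.
import torch

def st_gumbel_softmax(logits, tau=1.0):
    u = torch.rand_like(logits).clamp_min(1e-9)       # avoid log(0)
    gumbel = -torch.log(-torch.log(u))                # Gumbel(0, 1) noise
    y_soft = torch.softmax((logits + gumbel) / tau, dim=-1)
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(logits).scatter_(-1, index, 1.0)
    # Forward pass uses the hard one-hot sample; gradients flow through y_soft.
    return y_hard + (y_soft - y_soft.detach())
```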
arXiv Detail & Related papers (2022-06-15T01:46:05Z)
- Machine Learning-Based Estimation and Goodness-of-Fit for Large-Scale Confirmatory Item Factor Analysis [0.0]
We investigate novel parameter estimation and goodness-of-fit (GOF) assessment methods for large-scale item factor analysis (IFA).
For parameter estimation, we extend Urban and Bauer's (2021) deep learning algorithm for exploratory IFA to the confirmatory setting.
For GOF assessment, we explore new simulation-based tests and indices.
arXiv Detail & Related papers (2021-09-20T12:53:01Z)
- Counterfactual Maximum Likelihood Estimation for Training Deep Networks [83.44219640437657]
Deep learning models are prone to learning spurious correlations that should not be used as predictive clues.
We propose a causality-based training framework to reduce the spurious correlations caused by observable confounders.
We conduct experiments on two real-world tasks: Natural Language Inference (NLI) and Image Captioning.
arXiv Detail & Related papers (2021-06-07T17:47:16Z)
- Computational Efficient Approximations of the Concordance Probability in a Big Data Setting [0.0]
We propose two estimation methods that calculate the concordance probability in a fast and accurate way.
Experiments on two real-life data sets confirm the conclusions of the artificial simulations.
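For reference, the quantity being approximated is the classical concordance probability (C-index): the chance that a randomly chosen comparable pair is ranked in the right order by the model. A naive O(n^2) baseline is sketched below; the paper's contribution is approximating this faster at scale.

```python
# The concordance probability (C-index): the chance that a random pair with
# different outcomes is scored in the right order. Naive O(n^2) baseline;
# the paper proposes fast, accurate approximations for large n.
import numpy as np

def concordance_naive(y, scores):
    y, scores = np.asarray(y), np.asarray(scores)
    concordant, comparable = 0.0, 0
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:                   # comparable (ordered) pair
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1.0
                elif scores[i] == scores[j]:
                    concordant += 0.5         # ties count half
    return concordant / comparable
```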
arXiv Detail & Related papers (2021-05-21T15:09:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.