Fast and Credible Likelihood-Free Cosmology with Truncated Marginal
Neural Ratio Estimation
- URL: http://arxiv.org/abs/2111.08030v1
- Date: Mon, 15 Nov 2021 19:00:09 GMT
- Title: Fast and Credible Likelihood-Free Cosmology with Truncated Marginal
Neural Ratio Estimation
- Authors: Alex Cole, Benjamin Kurt Miller, Samuel J. Witte, Maxwell X. Cai,
Meiert W. Grootes, Francesco Nattino, Christoph Weniger
- Abstract summary: Truncated Marginal Neural Ratio Estimation (TMNRE) is a new approach in so-called simulation-based inference.
We show that TMNRE can achieve converged posteriors using orders of magnitude fewer simulator calls than conventional Markov Chain Monte Carlo.
TMNRE promises to become a powerful tool for cosmological data analysis, particularly in the context of extended cosmologies.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sampling-based inference techniques are central to modern cosmological data
analysis; these methods, however, scale poorly with dimensionality and
typically require approximate or intractable likelihoods. In this paper we
describe how Truncated Marginal Neural Ratio Estimation (TMNRE) (a new approach
in so-called simulation-based inference) naturally evades these issues,
improving the $(i)$ efficiency, $(ii)$ scalability, and $(iii)$ trustworthiness
of the inferred posteriors. Using measurements of the Cosmic Microwave
Background (CMB), we show that TMNRE can achieve converged posteriors using
orders of magnitude fewer simulator calls than conventional Markov Chain Monte
Carlo (MCMC) methods. Remarkably, the required number of samples is effectively
independent of the number of nuisance parameters. In addition, a property
called \emph{local amortization} allows the performance of rigorous statistical
consistency checks that are not accessible to sampling-based methods. TMNRE
promises to become a powerful tool for cosmological data analysis, particularly
in the context of extended cosmologies, where the timescale required for
conventional sampling-based inference methods to converge can greatly exceed
that of simple cosmological models such as $\Lambda$CDM. To perform these
computations, we use an implementation of TMNRE via the open-source code
\texttt{swyft}.
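The core idea behind neural ratio estimation can be sketched in a few lines: train a binary classifier to distinguish joint pairs $(\theta, x)$ from pairs with shuffled parameters, and read the likelihood-to-evidence ratio $p(x|\theta)/p(x)$ off its output. The toy simulator, feature map, and training loop below are illustrative assumptions for a 1-D problem, not the \texttt{swyft} implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulator: the observation x is the parameter theta plus Gaussian noise.
def simulate(theta):
    return theta + rng.normal(0.0, 0.5, size=theta.shape)

# Joint pairs (theta, x) versus marginal pairs (theta, shuffled x).
n = 4000
theta = rng.uniform(-2.0, 2.0, size=n)
x = simulate(theta)
x_shuffled = rng.permutation(x)

# Quadratic features for a tiny logistic-regression classifier d(theta, x).
def features(t, o):
    return np.stack([t, o, t * o, t**2, o**2, np.ones_like(t)], axis=1)

X = np.concatenate([features(theta, x), features(theta, x_shuffled)])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = joint, 0 = marginal

# Full-batch gradient descent on the binary cross-entropy loss.
w = np.zeros(X.shape[1])
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30.0, 30.0)))
    w -= 0.1 * X.T @ (p - y) / len(y)

# The trained classifier's logit estimates the log likelihood-to-evidence
# ratio: log d/(1-d) ~ log p(x|theta) - log p(x).
def log_ratio(t, o):
    f = features(np.atleast_1d(float(t)), np.atleast_1d(float(o)))
    return (f @ w)[0]

# A matched pair should score higher than a mismatched one.
print(log_ratio(1.0, 1.0) > log_ratio(-1.5, 1.0))
```

Because the classifier targets the ratio for the parameter (or low-dimensional marginal) of interest directly, the truncation step of TMNRE amounts to restricting the range from which $\theta$ is drawn in successive simulation rounds.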
Related papers
- Mean-Field Simulation-Based Inference for Cosmological Initial Conditions [4.520518890664213]
We present a simple method for Bayesian field reconstruction that models the posterior distribution of the initial matter density field as a diagonal Gaussian in Fourier space.
Training and sampling are extremely fast (training: $\sim 1\,\mathrm{h}$ on a GPU; sampling: $\lesssim 3\,\mathrm{s}$ for 1000 samples at resolution $128^3$), and our method supports industry-standard (non-differentiable) $N$-body simulators.
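The diagonal-Gaussian structure is what makes sampling fast: every Fourier mode is drawn independently, so one posterior sample costs a single inverse FFT. A minimal 2-D sketch follows; in the method the per-mode mean and standard deviation would come from the trained network, whereas the power-law stand-in below is purely an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32  # grid resolution per dimension (the paper uses 128^3; 2-D here for brevity)

# Hypothetical network outputs: a per-mode posterior mean and standard
# deviation in Fourier space. A toy power-law spectrum stands in here.
k = np.sqrt(np.add.outer(np.fft.fftfreq(N)**2, np.fft.fftfreq(N)**2))
std_k = 1.0 / np.maximum(k, 1.0 / N)      # toy stand-in for the predicted std
mean_k = np.zeros((N, N), dtype=complex)  # toy stand-in for the predicted mean

def sample_field():
    """Draw one posterior sample: white noise scaled mode-by-mode, then iFFT."""
    white = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    delta_k = mean_k + std_k * white / np.sqrt(2.0)
    # Taking the real part is a shortcut; a careful implementation would
    # enforce the Hermitian symmetry delta_k(-k) = conj(delta_k(k)) instead.
    return np.real(np.fft.ifft2(delta_k))

samples = np.stack([sample_field() for _ in range(100)])
print(samples.shape)  # (100, 32, 32)
```

Since no Markov chain or iterative refinement is involved, sample cost is dominated by the FFT, consistent with the quoted sub-second-per-sample timings.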
arXiv Detail & Related papers (2024-10-21T09:23:50Z)
- Robust Barycenter Estimation using Semi-Unbalanced Neural Optimal Transport [84.51977664336056]
We propose a novel, scalable approach for estimating the \textit{robust} continuous barycenter.
Our method is framed as a $\min$-$\max$ optimization problem and is adaptable to \textit{general} cost functions.
arXiv Detail & Related papers (2024-10-04T23:27:33Z)
- A sparse PAC-Bayesian approach for high-dimensional quantile prediction [0.0]
This paper presents a novel probabilistic machine learning approach for high-dimensional quantile prediction.
It uses a pseudo-Bayesian framework with a scaled Student-t prior and Langevin Monte Carlo for efficient computation.
Its effectiveness is validated through simulations and real-world data, where it performs competitively against established frequentist and Bayesian techniques.
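The quantile ("pinball") loss that underlies quantile prediction can be written down directly; the snippet below is a generic illustration of that loss, not the paper's pseudo-Bayesian estimator.

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Quantile ('pinball') loss: an asymmetric absolute error whose
    expectation is minimized by the tau-th conditional quantile."""
    diff = y - pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# The empirical tau-quantile (nearly) minimizes the empirical pinball loss,
# so it beats any other constant prediction such as 0.
y = np.random.default_rng(2).normal(size=10000)
q90 = np.quantile(y, 0.9)
print(pinball_loss(y, q90, 0.9) <= pinball_loss(y, 0.0, 0.9))  # True
```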
arXiv Detail & Related papers (2024-09-03T08:01:01Z)
- A Specialized Semismooth Newton Method for Kernel-Based Optimal Transport [92.96250725599958]
Kernel-based optimal transport (OT) estimators offer an alternative, functional estimation procedure to address OT problems from samples.
We show that our SSN method achieves a global convergence rate of $O(1/\sqrt{k})$, and a local quadratic convergence rate under standard regularity conditions.
arXiv Detail & Related papers (2023-10-21T18:48:45Z)
- Simulation-based inference using surjective sequential neural likelihood estimation [50.24983453990065]
Surjective Sequential Neural Likelihood (SSNL) estimation is a novel method for simulation-based inference.
By embedding the data in a low-dimensional space, SSNL solves several issues previous likelihood-based methods had when applied to high-dimensional data sets.
arXiv Detail & Related papers (2023-08-02T10:02:38Z)
- Aspects of scaling and scalability for flow-based sampling of lattice QCD [137.23107300589385]
Recent applications of machine-learned normalizing flows to sampling in lattice field theory suggest that such methods may be able to mitigate critical slowing down and topological freezing.
It remains to be determined whether they can be applied to state-of-the-art lattice quantum chromodynamics calculations.
arXiv Detail & Related papers (2022-11-14T17:07:37Z)
- Probabilistic Mass Mapping with Neural Score Estimation [4.079848600120986]
We introduce a novel methodology for efficient sampling of the high-dimensional Bayesian posterior of the weak lensing mass-mapping problem.
We aim to demonstrate the accuracy of the method on simulations, and then proceed to applying it to the mass reconstruction of the HST/ACS COSMOS field.
arXiv Detail & Related papers (2022-01-14T17:07:48Z)
- Gaining Outlier Resistance with Progressive Quantiles: Fast Algorithms and Theoretical Studies [1.6457778420360534]
A framework of outlier-resistant estimation is introduced to robustify arbitrary loss functions.
A new technique is proposed to relax the requirement on the starting point, such that on regular datasets the number of data reestimations can be substantially reduced.
The obtained estimators, though not necessarily globally or even locally optimal, enjoy minimax optimality in both low and high dimensions.
arXiv Detail & Related papers (2021-12-15T20:35:21Z)
- Sinkhorn Natural Gradient for Generative Models [125.89871274202439]
We propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.
We show that the Sinkhorn information matrix (SIM), a key component of SiNG, has an explicit expression and can be evaluated accurately in complexity that scales logarithmically.
In our experiments, we quantitatively compare SiNG with state-of-the-art SGD-type solvers on generative tasks to demonstrate the efficiency and efficacy of our method.
arXiv Detail & Related papers (2020-11-09T02:51:17Z)
- Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator).
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
arXiv Detail & Related papers (2020-05-26T17:53:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.