Neural Importance Sampling for Rapid and Reliable Gravitational-Wave
Inference
- URL: http://arxiv.org/abs/2210.05686v2
- Date: Tue, 30 May 2023 13:21:07 GMT
- Title: Neural Importance Sampling for Rapid and Reliable Gravitational-Wave
Inference
- Authors: Maximilian Dax, Stephen R. Green, Jonathan Gair, Michael Pürrer,
Jonas Wildberger, Jakob H. Macke, Alessandra Buonanno, Bernhard Schölkopf
- Abstract summary: We first generate a rapid proposal for the Bayesian posterior using neural networks, and then attach importance weights based on the underlying likelihood and prior.
This provides (1) a corrected posterior free from network inaccuracies, (2) a performance diagnostic (the sample efficiency) for assessing the proposal and identifying failure cases, and (3) an unbiased estimate of the Bayesian evidence.
We carry out a large study analyzing 42 binary black hole mergers observed by LIGO and Virgo with the SEOBNRv4PHM and IMRPhenomXPHM waveform models.
- Score: 59.040209568168436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We combine amortized neural posterior estimation with importance sampling for
fast and accurate gravitational-wave inference. We first generate a rapid
proposal for the Bayesian posterior using neural networks, and then attach
importance weights based on the underlying likelihood and prior. This provides
(1) a corrected posterior free from network inaccuracies, (2) a performance
diagnostic (the sample efficiency) for assessing the proposal and identifying
failure cases, and (3) an unbiased estimate of the Bayesian evidence. By
establishing this independent verification and correction mechanism we address
some of the most frequent criticisms against deep learning for scientific
inference. We carry out a large study analyzing 42 binary black hole mergers
observed by LIGO and Virgo with the SEOBNRv4PHM and IMRPhenomXPHM waveform
models. This shows a median sample efficiency of $\approx 10\%$ (two
orders-of-magnitude better than standard samplers) as well as a ten-fold
reduction in the statistical uncertainty in the log evidence. Given these
advantages, we expect a significant impact on gravitational-wave inference, and
for this approach to serve as a paradigm for harnessing deep learning methods
in scientific applications.
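The reweighting mechanism described in the abstract can be sketched in a few lines. The one-dimensional Gaussians below are toy stand-ins for the real likelihood-times-prior and the trained neural proposal; the weight, sample-efficiency, and evidence formulas are the standard importance-sampling estimators the method relies on:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target "posterior": standard normal, known only up to
# likelihood * prior. The proposal q stands in for the trained
# neural network; it is deliberately mismatched (shifted, wider).
def log_target(theta):           # log [likelihood(theta) * prior(theta)]
    return -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)

def sample_proposal(n):          # neural-proposal stand-in: N(0.3, 1.2^2)
    return 0.3 + 1.2 * rng.standard_normal(n)

def log_proposal(theta):
    return -0.5 * ((theta - 0.3) / 1.2) ** 2 - np.log(1.2 * np.sqrt(2 * np.pi))

n = 100_000
theta = sample_proposal(n)
log_w = log_target(theta) - log_proposal(theta)   # log importance weights
w = np.exp(log_w - log_w.max())                   # stabilized (scale-free)

# (2) sample efficiency: n_eff / n with n_eff = (sum w)^2 / sum w^2
efficiency = w.sum() ** 2 / (n * (w**2).sum())

# (3) evidence estimate: mean of the unnormalized weights
log_Z = np.log(np.mean(np.exp(log_w)))            # ~0 here: toy target is normalized

# (1) corrected posterior mean via self-normalized weights
post_mean = np.sum(w * theta) / w.sum()
print(efficiency, log_Z, post_mean)
```

The self-normalized weights correct the posterior estimates for the proposal's inaccuracy, and the sample efficiency directly diagnoses how well the proposal matches the target: a badly mismatched network would drive it toward zero.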
Related papers
- Adversarial robustness of amortized Bayesian inference [3.308743964406687]
The idea of amortized Bayesian inference is to invest computational cost up front in training an inference network on simulated data.
We show that almost unrecognizable, targeted perturbations of the observations can lead to drastic changes in the predicted posterior and highly unrealistic posterior predictive samples.
We propose a computationally efficient regularization scheme based on penalizing the Fisher information of the conditional density estimator.
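The proposed regularizer can be illustrated on a toy amortized posterior. Everything below (the linear "network" mu(x) = a*x, the Gaussian family, the finite-difference gradient) is an illustrative assumption, not code from the paper; it only shows what penalizing the Fisher information of the conditional density estimator with respect to the observation means:

```python
import numpy as np

# Hypothetical amortized posterior: q(theta | x) = N(a*x, s^2),
# i.e. the "inference network" is just the linear map mu(x) = a*x.
a, s = 2.0, 0.5

def log_q(theta, x):
    return -0.5 * ((theta - a * x) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))

def fisher_penalty(x, n=50_000, eps=1e-4, seed=0):
    """Monte-Carlo estimate of E_{theta~q(.|x)}[(d/dx log q(theta|x))^2],
    with the x-gradient taken by central finite differences."""
    rng = np.random.default_rng(seed)
    theta = a * x + s * rng.standard_normal(n)          # theta ~ q(.|x)
    grad = (log_q(theta, x + eps) - log_q(theta, x - eps)) / (2 * eps)
    return np.mean(grad**2)

# For this Gaussian family the closed form is a^2 / s^2 = 16:
# a sharper dependence of the posterior on x means a larger penalty,
# i.e. more sensitivity to (adversarial) perturbations of x.
print(fisher_penalty(x=1.0))
```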
arXiv Detail & Related papers (2023-05-24T10:18:45Z) - On the Theories Behind Hard Negative Sampling for Recommendation [51.64626293229085]
We offer two insightful guidelines for the effective usage of Hard Negative Sampling (HNS).
We prove that employing HNS on the Bayesian Personalized Ranking (BPR) learner is equivalent to optimizing One-way Partial AUC (OPAUC).
These analyses establish the theoretical foundation of HNS in optimizing Top-K recommendation performance for the first time.
arXiv Detail & Related papers (2023-02-07T13:57:03Z) - Do Bayesian Variational Autoencoders Know What They Don't Know? [0.6091702876917279]
The problem of detecting Out-of-Distribution (OoD) inputs is of paramount importance for Deep Neural Networks.
It has been previously shown that even Deep Generative Models that allow estimating the density of the inputs may not be reliable.
This paper investigates three approaches to inference: Markov chain Monte Carlo, Bayes by Backpropagation, and Stochastic Weight Averaging-Gaussian (SWAG).
arXiv Detail & Related papers (2022-12-29T11:48:01Z) - How Tempering Fixes Data Augmentation in Bayesian Neural Networks [22.188535244056016]
We show that tempering implicitly reduces the misspecification arising from modeling augmentations as i.i.d. data.
The temperature mimics the role of the effective sample size, reflecting the gain in information provided by the augmentations.
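The claim that temperature mimics effective sample size can be checked exactly in a toy conjugate-Gaussian model (an illustrative assumption, not the paper's setup): raising the likelihood to the power 1/T yields the same posterior variance as using n/T observations at temperature 1:

```python
# Conjugate Gaussian model: theta ~ N(0, tau^2), x_i ~ N(theta, sigma^2).
# Tempering the likelihood with exponent 1/T gives posterior precision
# 1/tau^2 + n/(T*sigma^2): identical to using n/T "effective" observations.
tau, sigma = 1.0, 0.5
n, T = 100, 4.0

def tempered_post_var(n_obs, temp):
    return 1.0 / (1.0 / tau**2 + n_obs / (temp * sigma**2))

var_tempered = tempered_post_var(n, T)        # n observations, temperature T
var_reduced = tempered_post_var(n / T, 1.0)   # n/T observations, temperature 1
print(var_tempered, var_reduced)              # identical
```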
arXiv Detail & Related papers (2022-05-27T11:06:56Z) - Evaluating the Adversarial Robustness for Fourier Neural Operators [78.36413169647408]
The Fourier Neural Operator (FNO) was the first method to simulate turbulent flow with zero-shot super-resolution.
We generate adversarial examples for FNO based on norm-bounded data input perturbations.
Our results show that the model's robustness degrades rapidly with increasing perturbation levels.
arXiv Detail & Related papers (2022-04-08T19:19:42Z) - alpha-Deep Probabilistic Inference (alpha-DPI): efficient uncertainty
quantification from exoplanet astrometry to black hole feature extraction [7.5042943749402555]
Inference is crucial in modern astronomical research, where hidden astrophysical features are estimated from indirect and noisy measurements.
Traditional approaches for posterior estimation include sampling-based methods and variational inference.
We propose alpha-DPI, a deep learning framework that learns an approximate posterior using alpha-divergence variational inference paired with a generative neural network.
arXiv Detail & Related papers (2022-01-21T00:58:10Z) - Real-time gravitational-wave science with neural posterior estimation [64.67121167063696]
We demonstrate unprecedented accuracy for rapid gravitational-wave parameter estimation with deep learning.
We analyze eight gravitational-wave events from the first LIGO-Virgo Gravitational-Wave Transient Catalog.
We find very close quantitative agreement with standard inference codes, but with inference times reduced from O(day) to a minute per event.
arXiv Detail & Related papers (2021-06-23T18:00:05Z) - Sampling-free Variational Inference for Neural Networks with
Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z) - Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.