Group equivariant neural posterior estimation
- URL: http://arxiv.org/abs/2111.13139v2
- Date: Tue, 30 May 2023 13:52:04 GMT
- Title: Group equivariant neural posterior estimation
- Authors: Maximilian Dax, Stephen R. Green, Jonathan Gair, Michael Deistler,
Bernhard Schölkopf, Jakob H. Macke
- Abstract summary: Group equivariant neural posterior estimation (GNPE) is based on self-consistently standardizing the "pose" of the data.
We show GNPE achieves state-of-the-art accuracy while reducing inference times by three orders of magnitude.
- Score: 9.80649677905172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulation-based inference with conditional neural density estimators is a
powerful approach to solving inverse problems in science. However, these
methods typically treat the underlying forward model as a black box, with no
way to exploit geometric properties such as equivariances. Equivariances are
common in scientific models; however, integrating them directly into
expressive inference networks (such as normalizing flows) is not
straightforward. Here we describe an alternative method for incorporating
equivariances under joint
transformations of parameters and data. Our method -- called group equivariant
neural posterior estimation (GNPE) -- is based on self-consistently
standardizing the "pose" of the data while estimating the posterior over
parameters. It is architecture-independent and applies to both exact and
approximate equivariances. As a real-world application, we use GNPE for
amortized inference of astrophysical binary black hole systems from
gravitational-wave observations. We show that GNPE achieves state-of-the-art
accuracy while reducing inference times by three orders of magnitude.
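The pose-standardization loop lends itself to a compact illustration. Below is a minimal, self-contained sketch of a GNPE-style Gibbs iteration on a toy 1-D time-shift problem; `toy_npe_sampler` is a hypothetical stand-in for a trained neural posterior estimator, and the whole example is an assumption about the general scheme rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def standardize(x, pose):
    """Apply the inverse group action (here: undo an integer time shift)."""
    return np.roll(x, -int(round(pose)))

def toy_npe_sampler(x_std):
    """Hypothetical stand-in for a trained NPE network: returns one draw of
    the physical parameter and a residual pose estimate from the
    pose-standardized data."""
    residual = float(np.argmax(x_std))         # leftover shift after standardizing
    theta = x_std.max() + 0.01 * rng.normal()  # toy amplitude "posterior draw"
    return theta, residual

def gnpe_sample(x_obs, n_iterations=10):
    """Gibbs-like GNPE iteration: alternate between standardizing the data's
    pose and re-estimating parameters (including the pose) with the network."""
    pose = 0.0
    for _ in range(n_iterations):
        x_std = standardize(x_obs, pose)           # 1. standardize the "pose"
        theta, residual = toy_npe_sampler(x_std)   # 2. infer on standardized data
        pose += residual                           # 3. self-consistent pose update
    return theta, pose

# Toy observation: a Gaussian bump whose peak location plays the role of the pose.
x_obs = np.roll(np.exp(-0.5 * (np.arange(100) - 50.0) ** 2 / 4.0), 17)
theta, pose = gnpe_sample(x_obs)
print(theta, pose)  # pose converges to the bump location (67 here)
```

Because the network only ever sees pose-standardized data, it can stay architecture-agnostic; the equivariance is enforced by the transformation loop rather than by network design.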
Related papers
- Approximately Equivariant Neural Processes [47.14384085714576]
We consider the use of approximately equivariant architectures in neural processes.
We demonstrate the effectiveness of our approach on a number of synthetic and real-world regression experiments.
arXiv Detail & Related papers (2024-06-19T12:17:14Z)
- Learning to solve Bayesian inverse problems: An amortized variational inference approach using Gaussian and Flow guides [0.0]
We develop a methodology that enables real-time inference by learning the Bayesian inverse map, i.e., the map from data to posteriors.
Our approach provides the posterior distribution for a given observation at the cost of a single forward pass of the neural network; see the sketch below.
arXiv Detail & Related papers (2023-05-31T16:25:07Z)
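As a rough illustration of learning a data-to-posterior map, here is a minimal sketch with a conditional Gaussian guide; the architecture, the toy simulator, and the training objective (fitting q(theta | x) on simulated joint samples) are all assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn as nn

class GaussianGuide(nn.Module):
    """Maps an observation x to the mean and log-std of q(theta | x)."""
    def __init__(self, x_dim=1, theta_dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * theta_dim))
    def forward(self, x):
        mu, log_std = self.net(x).chunk(2, dim=-1)
        return mu, log_std

def simulate(theta):
    return theta ** 2 + 0.1 * torch.randn_like(theta)  # toy forward model

guide = GaussianGuide()
opt = torch.optim.Adam(guide.parameters(), lr=1e-3)
for _ in range(500):
    theta = torch.randn(128, 1)   # prior draws
    x = simulate(theta)           # simulated observations
    mu, log_std = guide(x)
    # Fit the guide by maximizing log q(theta | x) over joint samples --
    # one simple way to learn an amortized inverse map.
    loss = (0.5 * ((theta - mu) / log_std.exp()) ** 2 + log_std).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Posterior for a new observation costs a single forward pass:
mu, log_std = guide(torch.tensor([[0.25]]))
```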
- Equivariance Discovery by Learned Parameter-Sharing [153.41877129746223]
We study how to discover interpretable equivariances from data.
Specifically, we formulate this discovery process as an optimization problem over a model's parameter-sharing schemes.
We also theoretically analyze the method for Gaussian data and provide a bound on the mean squared gap between the studied discovery scheme and the oracle scheme. A toy sharing-scheme sketch follows this entry.
arXiv Detail & Related papers (2022-04-07T17:59:19Z)
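To make the notion of optimizing over parameter-sharing schemes concrete, here is a toy layer whose weights are drawn from a small bank of shared values through a learnable, softmax-relaxed assignment; the relaxation and the layer itself are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn

class SharedLinear(nn.Module):
    """Linear layer whose weight entries are selected from a small bank of
    shared parameters via a learnable (softmax-relaxed) assignment, so the
    sharing scheme itself can be optimized by gradient descent."""
    def __init__(self, d_in, d_out, n_shared=4):
        super().__init__()
        self.bank = nn.Parameter(torch.randn(n_shared))                 # shared values
        self.logits = nn.Parameter(torch.zeros(d_out, d_in, n_shared))  # scheme
    def forward(self, x):
        assign = self.logits.softmax(dim=-1)  # soft assignment of each entry
        weight = assign @ self.bank           # (d_out, d_in) weight matrix
        return x @ weight.T

layer = SharedLinear(d_in=3, d_out=2)
y = layer(torch.randn(5, 3))  # (5, 2); ties between entries can emerge in training
```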
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis, known as likelihood-free inference (LFI), to invert our proposed model; a generic LFI sketch is given below.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
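The generic LFI workflow can be sketched with the open-source `sbi` package; the abstract does not specify the paper's tooling, and the uniform prior and toy simulator below are placeholders for the dMRI system of equations.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Placeholder prior and forward model standing in for the dMRI equations.
prior = BoxUniform(low=torch.zeros(2), high=torch.ones(2))

def simulator(theta):
    return theta + 0.05 * torch.randn_like(theta)

theta = prior.sample((2000,))
x = simulator(theta)

# Train a neural posterior estimator on simulations, then invert an observation.
inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)
samples = posterior.sample((500,), x=torch.tensor([0.4, 0.6]))
```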
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters; a pooling sketch follows this entry.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
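One plausible way to use an auxiliary set of observations sharing global parameters is to embed each observation and pool the embeddings into a permutation-invariant summary that then conditions the posterior network; this sketch is an assumption about the general idea, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SetConditioner(nn.Module):
    """Embeds each observation in an auxiliary set and mean-pools the
    embeddings into a permutation-invariant summary; conditioning a flow on
    this summary lets shared global parameters constrain the posterior."""
    def __init__(self, x_dim, embed_dim=32):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(x_dim, embed_dim), nn.ReLU(),
                                   nn.Linear(embed_dim, embed_dim))
    def forward(self, x_set):                 # x_set: (n_obs, x_dim)
        return self.embed(x_set).mean(dim=0)  # (embed_dim,) summary

summary = SetConditioner(x_dim=4)(torch.randn(8, 4))  # 8 auxiliary observations
```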
- Learning Invariances in Neural Networks [51.20867785006147]
We show how to parameterize a distribution over augmentations and optimize the training loss simultaneously with respect to the network parameters and augmentation parameters.
We can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations; see the toy sketch below.
arXiv Detail & Related papers (2020-10-22T17:18:48Z)
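A toy rendering of this idea: draw augmentations from a distribution with a learnable extent and average the network's predictions, so the loss is differentiable in both the network weights and the augmentation parameters. The translation-style augmentation and its parameterization here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AugAveragedModel(nn.Module):
    """Averages predictions over sampled augmentations whose extent ('width')
    is itself a trainable parameter, so the right amount of invariance can be
    learned jointly with the network weights."""
    def __init__(self, net, n_samples=4):
        super().__init__()
        self.net = net
        self.width = nn.Parameter(torch.tensor(0.1))  # learnable augmentation extent
        self.n_samples = n_samples
    def forward(self, x):
        outs = []
        for _ in range(self.n_samples):
            shift = self.width * (2 * torch.rand(1) - 1)  # reparameterized sample
            outs.append(self.net(x + shift))              # toy input 'translation'
        return torch.stack(outs).mean(dim=0)              # differentiable in width

model = AugAveragedModel(nn.Linear(3, 2))
y = model(torch.randn(5, 3))  # gradients flow to both the net and width
```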
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean-field assumption for both the fused model and the individual dataset posteriors; a simplified Gaussian-fusion sketch follows this entry.
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
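Under a fully factorized Gaussian assumption, fusing posteriors reduces in the simplest reading to a precision-weighted product of Gaussians; the snippet below shows that identity as a simplified stand-in for the paper's KL-based fusion algorithm, which is more general.

```python
import numpy as np

def fuse_gaussians(means, variances):
    """Precision-weighted product of independent Gaussian posteriors -- a
    simplified stand-in for KL-based fusion under a mean-field assumption."""
    means = np.asarray(means, dtype=float)          # (n_models, dim)
    variances = np.asarray(variances, dtype=float)  # (n_models, dim)
    precision = (1.0 / variances).sum(axis=0)
    mean = (means / variances).sum(axis=0) / precision
    return mean, 1.0 / precision

# Two per-dataset posteriors over one parameter; the tighter one dominates.
mean, var = fuse_gaussians(means=[[0.9], [1.1]], variances=[[0.04], [0.09]])
print(mean, var)  # ~[0.962], ~[0.0277]
```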
- Sparse Gaussian Processes with Spherical Harmonic Features [14.72311048788194]
We introduce a new class of inter-domain variational Gaussian processes (GPs).
Our inference scheme is comparable to variational Fourier features, but it does not suffer from the curse of dimensionality.
Our experiments show that our model fits a regression dataset with 6 million entries two orders of magnitude faster.
arXiv Detail & Related papers (2020-06-30T10:19:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.