Preconditioned training of normalizing flows for variational inference in inverse problems
- URL: http://arxiv.org/abs/2101.03709v1
- Date: Mon, 11 Jan 2021 05:35:36 GMT
- Title: Preconditioned training of normalizing flows for variational inference in inverse problems
- Authors: Ali Siahkoohi and Gabrio Rizzuti and Mathias Louboutin and Philipp A. Witte and Felix J. Herrmann
- Abstract summary: We propose a conditional normalizing flow (NF) capable of sampling from a low-fidelity posterior distribution directly.
This conditional NF is used to speed up the training of the high-fidelity objective involving minimization of the Kullback-Leibler divergence.
Our numerical experiments, including a 2D toy and a seismic compressed sensing example, demonstrate that, thanks to the preconditioning, considerable speed-ups are achievable.
- Score: 1.5749416770494706
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Obtaining samples from the posterior distribution of inverse problems with expensive forward operators is challenging, especially when the unknowns involve the strongly heterogeneous Earth. To meet these challenges, we propose a preconditioning scheme involving a conditional normalizing flow (NF) capable of sampling from a low-fidelity posterior distribution directly. This conditional NF is used to speed up training of the high-fidelity objective, which involves minimizing the Kullback-Leibler divergence between the predicted and the desired high-fidelity posterior density for the indirect measurements at hand. To minimize costs associated with the forward operator, we initialize the high-fidelity NF with the weights of the pretrained low-fidelity NF, which is trained beforehand on available model and data pairs. Our numerical experiments, including a 2D toy and a seismic compressed sensing example, demonstrate that, thanks to the preconditioning, considerable speed-ups are achievable compared to training NFs from scratch.
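As a concrete illustration of the two-stage scheme above, here is a minimal sketch, not the authors' implementation: it pretrains a small RealNVP-style flow against a cheap low-fidelity forward operator by minimizing a Monte-Carlo estimate of the reverse KL divergence, then warm-starts training against the expensive high-fidelity operator from the pretrained weights. The paper's conditional NF is amortized over model/data pairs; this sketch simplifies to a single fixed observation, and the linear operators A_lo/A_hi, noise level, and prior weight are illustrative assumptions.

```python
# Hedged sketch of the preconditioning idea: pretrain cheaply, then warm-start
# the expensive high-fidelity training. All names/hyperparameters are ours.
import torch
import torch.nn as nn

DIM = 2  # toy 2D unknown, in the spirit of the paper's 2D toy example

class AffineCoupling(nn.Module):
    """One RealNVP-style affine coupling layer acting on half the coordinates."""
    def __init__(self, dim, flip):
        super().__init__()
        self.flip = flip
        half = dim // 2
        self.net = nn.Sequential(nn.Linear(half, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * half))
    def forward(self, z):
        z1, z2 = z.chunk(2, dim=1)
        if self.flip:
            z1, z2 = z2, z1
        s, t = self.net(z1).chunk(2, dim=1)
        s = torch.tanh(s)                       # keep scales bounded for stability
        x2 = z2 * torch.exp(s) + t
        x = torch.cat([x2, z1] if self.flip else [z1, x2], dim=1)
        return x, s.sum(dim=1)                  # log|det J| of this layer

class Flow(nn.Module):
    def __init__(self, dim, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            AffineCoupling(dim, flip=bool(i % 2)) for i in range(n_layers))
    def forward(self, z):
        logdet = torch.zeros(z.shape[0])
        for layer in self.layers:
            z, ld = layer(z)
            logdet = logdet + ld
        return z, logdet

# Illustrative linear operators: A_lo is a cheap surrogate of the expensive A_hi.
A_hi = torch.randn(DIM, DIM)
A_lo = A_hi + 0.1 * torch.randn(DIM, DIM)
x_true = torch.tensor([[1.0, -0.5]])
y_obs = x_true @ A_hi.T                         # toy observation
sigma, lam = 0.1, 1.0                           # noise level, Gaussian prior weight

def reverse_kl(flow, A, n=256):
    """Monte-Carlo estimate of KL(q_theta || posterior), up to a constant."""
    z = torch.randn(n, DIM)
    x, logdet = flow(z)
    misfit = ((x @ A.T - y_obs) ** 2).sum(dim=1) / (2 * sigma ** 2)
    prior = lam * (x ** 2).sum(dim=1) / 2
    return (misfit + prior - logdet).mean()

def train(flow, A, steps):
    opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = reverse_kl(flow, A)
        loss.backward()
        opt.step()
    return loss.item()

flow = Flow(DIM)
train(flow, A_lo, steps=2000)                   # phase 1: cheap pretraining
print("high-fidelity loss after warm start:", train(flow, A_hi, steps=200))
```

The warm start matters because every objective evaluation in the second phase calls the expensive operator; starting near a low-fidelity optimum reduces the number of such evaluations, which is the source of the speed-up the abstract reports.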
Related papers
- Reflected Flow Matching [36.38883647601013]
Continuous normalizing flows (CNFs) learn an ordinary differential equation to transform prior samples into data.
Flow matching (FM) has emerged as a simulation-free approach for training CNFs by regressing a velocity model towards the conditional velocity field.
We propose reflected flow matching (RFM) to train the velocity model in reflected CNFs by matching the conditional velocity fields in a simulation-free manner.
arXiv Detail & Related papers (2024-05-26T14:09:43Z)
- PiRD: Physics-informed Residual Diffusion for Flow Field Reconstruction [5.06136344261226]
CNN-based methods for data fidelity enhancement rely on low-fidelity data patterns and distributions during the training phase.
Our proposed model, Physics-informed Residual Diffusion, demonstrates the capability to elevate the quality of data from standard low-fidelity inputs.
Experimental results have shown that our approach can effectively reconstruct high-quality outcomes for two-dimensional turbulent flows without requiring retraining.
arXiv Detail & Related papers (2024-04-12T11:45:51Z)
- Stable Training of Normalizing Flows for High-dimensional Variational Inference [2.139348034155473]
Variational inference with normalizing flows (NFs) is an increasingly popular alternative to MCMC methods.
In practice, training deep normalizing flows for approximating high-dimensional distributions is often infeasible due to the high variance of the gradients.
We show that previous methods for stabilizing the variance of gradient descent can be insufficient to achieve stable training of Real NVPs.
arXiv Detail & Related papers (2024-02-26T09:04:07Z)
- Domain Generalization Guided by Gradient Signal to Noise Ratio of Parameters [69.24377241408851]
Overfitting to the source domain is a common issue in gradient-based training of deep neural networks.
We propose to base the selection on the gradient signal-to-noise ratio (GSNR) of the network's parameters.
arXiv Detail & Related papers (2023-10-11T10:21:34Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- Improving and generalizing flow-based generative models with minibatch optimal transport [90.01613198337833]
We introduce the generalized conditional flow matching (CFM) technique for continuous normalizing flows (CNFs); a minimal sketch of the CFM training objective appears after this list.
CFM features a stable regression objective like that used to train the flow in diffusion models but enjoys the efficient inference of deterministic flow models.
A variant of our objective is optimal transport CFM (OT-CFM), which creates simpler flows that are more stable to train and lead to faster inference.
arXiv Detail & Related papers (2023-02-01T14:47:17Z)
- Debiased Fine-Tuning for Vision-language Models by Prompt Regularization [50.41984119504716]
We present a new paradigm for fine-tuning large-scale vision pre-trained models on downstream tasks, dubbed Prompt Regularization (ProReg).
ProReg uses the prediction by prompting the pretrained model to regularize the fine-tuning.
We show the consistently strong performance of ProReg compared with conventional fine-tuning, zero-shot prompt, prompt tuning, and other state-of-the-art methods.
arXiv Detail & Related papers (2023-01-29T11:53:55Z)
- Sample-Efficient Optimisation with Probabilistic Transformer Surrogates [66.98962321504085]
This paper investigates the feasibility of employing state-of-the-art probabilistic transformers in Bayesian optimisation.
We observe two drawbacks stemming from their training procedure and loss definition, hindering their direct deployment as proxies in black-box optimisation.
We introduce two components: 1) a BO-tailored training prior supporting non-uniformly distributed points, and 2) a novel approximate posterior regulariser trading-off accuracy and input sensitivity to filter favourable stationary points for improved predictive performance.
arXiv Detail & Related papers (2022-05-27T11:13:17Z)
- Learning by example: fast reliability-aware seismic imaging with normalizing flows [0.76146285961466]
We train a normalizing flow (NF) capable of cheaply sampling the posterior distribution given previously unseen seismic data from neighboring surveys.
We use these samples to compute a high-fidelity image including a first assessment of the image's reliability.
arXiv Detail & Related papers (2021-04-13T15:13:45Z)
- Learning Likelihoods with Conditional Normalizing Flows [54.60456010771409]
Conditional normalizing flows (CNFs) are efficient in sampling and inference.
We present a study of CNFs where the base density to output space mapping is conditioned on an input x, to model conditional densities p(y|x); a minimal conditional-coupling sketch appears after this list.
arXiv Detail & Related papers (2019-11-29T19:17:58Z)
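Since the CFM entry above is the most algorithmic item in this list, here is a minimal sketch of the simulation-free CFM regression objective, with the minibatch optimal-transport re-pairing of OT-CFM as an option. The network, toy data, and hyperparameters are our illustrative choices, not the cited paper's code.

```python
# Hedged sketch of (OT-)CFM: regress a velocity model towards the conditional
# velocity field of straight-line paths between prior and data samples.
import torch
import torch.nn as nn
from scipy.optimize import linear_sum_assignment

class Velocity(nn.Module):
    """Small MLP v_theta(x, t) regressed towards the conditional velocity."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(),
                                 nn.Linear(128, 128), nn.SiLU(),
                                 nn.Linear(128, dim))
    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=1))

def sample_data(n):
    """Toy two-mode target distribution standing in for the data."""
    centers = torch.tensor([[2.0, 2.0], [-2.0, -2.0]])
    return centers[torch.randint(0, 2, (n,))] + 0.3 * torch.randn(n, 2)

def cfm_loss(v, n=256, ot_pairing=True):
    x0, x1 = torch.randn(n, 2), sample_data(n)
    if ot_pairing:
        # OT-CFM variant: re-pair the minibatch by optimal transport so the
        # straight-line conditional paths cross less, giving simpler flows.
        rows, cols = linear_sum_assignment(torch.cdist(x0, x1).numpy())
        x0, x1 = x0[rows], x1[cols]
    t = torch.rand(n, 1)
    xt = (1 - t) * x0 + t * x1           # linear interpolant between the pair
    u = x1 - x0                          # conditional velocity along that path
    return ((v(xt, t) - u) ** 2).mean()  # simulation-free regression objective

v = Velocity(2)
opt = torch.optim.Adam(v.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = cfm_loss(v)
    loss.backward()
    opt.step()

# Sampling: integrate the learned ODE dx/dt = v(x, t) from t=0 to t=1 (Euler).
x = torch.randn(512, 2)
with torch.no_grad():
    for k in range(100):
        t = torch.full((512, 1), k / 100)
        x = x + v(x, t) / 100
```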
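Finally, for the conditional-density entry, a minimal sketch of one standard construction: affine-coupling layers whose scale and shift networks are conditioned on the input x, trained by maximum likelihood to model p(y|x). The architecture and toy data are again illustrative assumptions, not the cited paper's code.

```python
# Hedged sketch of a conditional normalizing flow for p(y|x): couplings on y
# whose conditioner networks also see x; trained by maximum likelihood.
import math
import torch
import torch.nn as nn

class CondCoupling(nn.Module):
    """Affine coupling on y whose scale/shift networks also see the condition x."""
    def __init__(self, dim_y, dim_x, flip):
        super().__init__()
        self.flip = flip
        half = dim_y // 2
        self.net = nn.Sequential(nn.Linear(half + dim_x, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * half))
    def forward(self, y, x):              # normalizing direction: y -> z
        y1, y2 = y.chunk(2, dim=1)
        if self.flip:
            y1, y2 = y2, y1
        s, t = self.net(torch.cat([y1, x], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)
        z2 = (y2 - t) * torch.exp(-s)
        z = torch.cat([z2, y1] if self.flip else [y1, z2], dim=1)
        return z, -s.sum(dim=1)           # log|det dz/dy| of this layer

class CondFlow(nn.Module):
    def __init__(self, dim_y, dim_x, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            CondCoupling(dim_y, dim_x, flip=bool(i % 2)) for i in range(n_layers))
    def log_prob(self, y, x):
        logdet = torch.zeros(y.shape[0])
        for layer in self.layers:
            y, ld = layer(y, x)
            logdet = logdet + ld
        # log p(y|x) = log N(z; 0, I) + sum of per-layer log-determinants
        base = -0.5 * (y ** 2).sum(dim=1) - 0.5 * y.shape[1] * math.log(2 * math.pi)
        return base + logdet

# Toy conditional data: y = Ax + noise, so p(y|x) should approach N(Ax, 0.1^2 I).
A = torch.randn(2, 2)
flow = CondFlow(dim_y=2, dim_x=2)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for _ in range(2000):
    x = torch.randn(256, 2)
    y = x @ A.T + 0.1 * torch.randn(256, 2)
    opt.zero_grad()
    loss = -flow.log_prob(y, x).mean()    # maximum likelihood on (x, y) pairs
    loss.backward()
    opt.step()
```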