Conditional neural control variates for variance reduction in Bayesian inverse problems
- URL: http://arxiv.org/abs/2602.21357v1
- Date: Tue, 24 Feb 2026 20:40:20 GMT
- Title: Conditional neural control variates for variance reduction in Bayesian inverse problems
- Authors: Ali Siahkoohi, Hyunwoo Oh
- Abstract summary: We introduce conditional neural control variates to reduce the variance of Monte Carlo estimators. Training requires samples from the joint distribution of unknown parameters and observed data. We validate our approach on stylized and partial differential equation-constrained Darcy flow inverse problems.
- Score: 3.3999373406262663
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian inference for inverse problems involves computing expectations under posterior distributions -- e.g., posterior means, variances, or predictive quantities -- typically via Monte Carlo (MC) estimation. When the quantity of interest varies significantly under the posterior, accurate estimates demand many samples -- a cost often prohibitive for partial differential equation-constrained problems. To address this challenge, we introduce conditional neural control variates, a modular method that learns amortized control variates from joint model-data samples to reduce the variance of MC estimators. To scale to high-dimensional problems, we leverage Stein's identity to design an architecture based on an ensemble of hierarchical coupling layers with tractable Jacobian trace computation. Training requires: (i) samples from the joint distribution of unknown parameters and observed data; and (ii) the posterior score function, which can be computed from physics-based likelihood evaluations, neural operator surrogates, or learned generative models such as conditional normalizing flows. Once trained, the control variates generalize across observations without retraining. We validate our approach on stylized and partial differential equation-constrained Darcy flow inverse problems, demonstrating substantial variance reduction, even when the analytical score is replaced by a learned surrogate.
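The Stein's-identity construction is the technical core of the abstract: any test function combined with the posterior score yields a zero-mean quantity that can be subtracted from the estimator. Below is a minimal numpy sketch of that underlying idea, using a stylized Gaussian posterior with a known score, a simple constant test function, and a least-squares fit for the coefficients. These choices are illustrative assumptions; the paper's actual method uses an amortized ensemble of hierarchical coupling layers, not this parametric form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy posterior: x | y ~ N(mu, I) in d dimensions, so the score is
# grad log p(x) = mu - x (known analytically in this stylized setting).
d, n = 5, 2000
mu = rng.normal(size=d)
x = mu + rng.normal(size=(n, d))

f = np.sin(x).sum(axis=1)  # quantity of interest f(x)
score = mu - x             # posterior score at each sample

# Stein control variate with a constant test function phi(x) = a:
# g(x) = a^T grad log p(x) has zero mean under the posterior, since
# E[grad log p(X)] = 0. Fit 'a' by least squares so a^T score tracks f.
# (In practice the fit would use held-out samples to preserve unbiasedness.)
a, *_ = np.linalg.lstsq(score, f - f.mean(), rcond=None)
cv = score @ a

plain = f.mean()           # vanilla MC estimate
reduced = (f - cv).mean()  # control-variate estimate, same expectation

print(f"plain MC      : {plain:.4f}  (var {f.var():.4f})")
print(f"with Stein CV : {reduced:.4f}  (var {(f - cv).var():.4f})")
```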
Related papers
- An Elementary Approach to Scheduling in Generative Diffusion Models [55.171367482496755]
An elementary approach to characterizing the impact of noise scheduling and time discretization in generative diffusion models is developed. Experiments across different datasets and pretrained models demonstrate that the time discretization strategy selected by our approach consistently outperforms baseline and search-based strategies.
arXiv Detail & Related papers (2026-01-20T05:06:26Z) - Efficient Covariance Estimation for Sparsified Functional Data [51.69796254617083]
The proposed Random-knots (Random-knots-Spatial) and B-spline (Bspline-Spatial) estimators of the covariance function are computationally efficient. Asymptotic pointwise results for the covariance estimators are obtained for sparsified individual trajectories under some regularity conditions.
arXiv Detail & Related papers (2025-11-23T00:50:33Z) - EquiReg: Equivariance Regularized Diffusion for Inverse Problems [67.01847869495558]
We propose EquiReg diffusion, a framework for regularizing posterior sampling in diffusion-based inverse problem solvers. When applied to a variety of solvers, EquiReg outperforms state-of-the-art diffusion models in both linear and nonlinear image restoration tasks.
arXiv Detail & Related papers (2025-05-29T01:25:43Z) - Amortized In-Context Bayesian Posterior Estimation [15.714462115687096]
Amortization, through conditional estimation, is a viable strategy to alleviate such difficulties. We conduct a thorough comparative analysis of amortized in-context Bayesian posterior estimation methods. We highlight the superiority of the reverse KL estimator for predictive problems, especially when combined with the transformer architecture and normalizing flows.
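As a rough sketch of what an amortized reverse-KL objective looks like (an illustrative toy, not this paper's transformer/flow models): a conditioner network maps each observation to the parameters of a variational family, and the reverse KL to the posterior is minimized in expectation over observations. The toy Gaussian joint and the diagonal-Gaussian family below are assumptions for the example.

```python
import torch

# Conditioner net: observation y (dim 2) -> mean and log-std of q(theta | y).
cond = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 4)
)
opt = torch.optim.Adam(cond.parameters(), lr=1e-3)

def log_joint(theta, y):
    """Toy joint: theta ~ N(0, I), y | theta ~ N(theta, 0.5^2 I)."""
    prior = -0.5 * (theta ** 2).sum(-1)
    lik = -0.5 * (((y - theta) / 0.5) ** 2).sum(-1)
    return prior + lik

for step in range(500):
    theta_true = torch.randn(64, 2)
    y = theta_true + 0.5 * torch.randn(64, 2)    # observations from the joint
    mu, log_std = cond(y).chunk(2, dim=-1)
    eps = torch.randn_like(mu)
    theta = mu + eps * log_std.exp()             # reparameterized sample from q
    log_q = (-0.5 * eps ** 2 - log_std).sum(-1)  # log q(theta | y), up to a constant
    loss = (log_q - log_joint(theta, y)).mean()  # reverse KL(q || posterior) + const
    opt.zero_grad()
    loss.backward()
    opt.step()
```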
arXiv Detail & Related papers (2025-02-10T16:00:48Z) - Amortized Posterior Sampling with Diffusion Prior Distillation [55.03585818289934]
Amortized Posterior Sampling is a novel variational inference approach for efficient posterior sampling in inverse problems. Our method trains a conditional flow model to minimize the divergence between the variational distribution and the posterior distribution implicitly defined by the diffusion model. Unlike existing methods, our approach is unsupervised, requires no paired training data, and is applicable to both Euclidean and non-Euclidean domains.
arXiv Detail & Related papers (2024-07-25T09:53:12Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
We present a unifying perspective on recent results on ridge regression. We use the basic tools of random matrix theory and free probability, aimed at readers with backgrounds in physics and deep learning. Our results extend and provide a unifying perspective on earlier models of scaling laws.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Leveraging Nested MLMC for Sequential Neural Posterior Estimation with Intractable Likelihoods [0.38233569758620045]
Methods aim to learn the posterior from adaptively proposed simulations using neural network-based conditional density estimators. The automatic posterior transformation (APT) method proposed by Greenberg et al. performs well and scales to high-dimensional data. In this paper, we reformulate APT as a nested estimation problem. We construct several multilevel Monte Carlo (MLMC) estimators for the loss function and its gradients to accommodate different scenarios.
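To make the "nested MLMC" idea concrete, here is a hypothetical toy in the same spirit, estimating a nested expectation E_x[g(E_y[h(x, y)])] with a telescoping sum over levels whose inner-sample budget doubles. The functions h and g and the per-level sample counts are made up for the sketch; the paper applies the construction to the APT loss and its gradients instead.

```python
import numpy as np

rng = np.random.default_rng(1)
g = lambda z: z ** 2  # outer nonlinearity; illustrative choice

def inner(x, m):
    """Level approximation: inner expectation E_y[tanh(x + y)] via m samples."""
    y = rng.normal(size=(x.size, m))
    return np.tanh(x[:, None] + y).mean(axis=1)

def level_diff(l, n):
    """MC estimate of E[P_l - P_{l-1}], coupled through shared outer samples x."""
    x = rng.normal(size=n)
    fine = g(inner(x, 2 ** l))           # 2^l inner samples at level l
    if l == 0:
        return fine.mean()
    coarse = g(inner(x, 2 ** (l - 1)))
    return (fine - coarse).mean()

# Telescoping sum E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}],
# spending more samples at the cheap coarse levels.
L = 5
estimate = sum(level_diff(l, n=100 + 4000 // 2 ** l) for l in range(L + 1))
print(f"nested MLMC estimate of E[g(E[h])]: {estimate:.4f}")
```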
arXiv Detail & Related papers (2024-01-30T06:29:41Z) - Learning to solve Bayesian inverse problems: An amortized variational inference approach using Gaussian and Flow guides [0.0]
We develop a methodology that enables real-time inference by learning the Bayesian inverse map, i.e., the map from data to posteriors.
Our approach provides the posterior distribution for a given observation just at the cost of a forward pass of the neural network.
arXiv Detail & Related papers (2023-05-31T16:25:07Z) - Introduction To Gaussian Process Regression In Bayesian Inverse Problems, With New Results On Experimental Design For Weighted Error Measures [0.0]
This work serves as an introduction to Gaussian process regression, in particular in the context of building surrogate models for inverse problems.
We show that the error between the true and approximate posterior distribution can be bounded by the error between the true and approximate likelihood, measured in the $L^2$-norm weighted by the true posterior.
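Schematically, and with notation assumed here rather than taken from the paper (the precise statement, distance, and constants may differ), the bound has the form:

```latex
% \pi, \hat{\pi}: true and approximate posteriors; L, \hat{L}: the
% corresponding likelihoods; the norm is weighted by the true posterior.
\[
  d\bigl(\pi, \hat{\pi}\bigr) \;\lesssim\; \bigl\| L - \hat{L} \bigr\|_{L^2(\pi)}
\]
```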
arXiv Detail & Related papers (2023-02-09T09:25:39Z) - Reliable amortized variational inference with physics-based latent distribution correction [0.4588028371034407]
A neural network is trained to approximate the posterior distribution over existing pairs of model and data.
The accuracy of this approach relies on the availability of high-fidelity training data.
We show that our correction step improves the robustness of amortized variational inference with respect to changes in number of source experiments, noise variance, and shifts in the prior distribution.
arXiv Detail & Related papers (2022-07-24T02:38:54Z) - Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
In its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.