The Ensemble Inverse Problem: Applications and Methods
- URL: http://arxiv.org/abs/2601.22029v1
- Date: Thu, 29 Jan 2026 17:34:41 GMT
- Title: The Ensemble Inverse Problem: Applications and Methods
- Authors: Zhengyan Huan, Camila Pazos, Martin Klassen, Vincent Croft, Pierre-Hugues Beauchemin, Shuchin Aeron
- Abstract summary: The aim of EIP is to invert for an ensemble that is distributed according to the pushforward of a prior under a forward process. The EIP also arises in full waveform inversion (FWI) and inverse imaging with unknown priors. We propose non-iterative inference-time methods that construct posterior samplers based on a new class of conditional generative models.
- Score: 5.033660455789586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a new multivariate statistical problem that we refer to as the Ensemble Inverse Problem (EIP). The aim of EIP is to invert for an ensemble that is distributed according to the pushforward of a prior under a forward process. In high energy physics (HEP), this is related to a widely known problem called unfolding, which aims to reconstruct the true physics distribution of quantities, such as momentum and angle, from measurements that are distorted by detector effects. In recent applications, the EIP also arises in full waveform inversion (FWI) and inverse imaging with unknown priors. We propose non-iterative inference-time methods that construct posterior samplers based on a new class of conditional generative models, which we call ensemble inverse generative models. For the posterior modeling, these models additionally use the ensemble information contained in the observation set on top of single measurements. Unlike existing methods, our proposed methods avoid explicit and iterative use of the forward model at inference time via training across several sets of truth-observation pairs that are consistent with the same forward model, but originate from a wide range of priors. We demonstrate that this training procedure implicitly encodes the likelihood model. The use of ensemble information helps posterior inference and enables generalization to unseen priors. We benchmark the proposed methods on several synthetic and real datasets in inverse imaging, HEP, and FWI. The code is available at https://github.com/ZhengyanHuan/The-Ensemble-Inverse-Problem--Applications-and-Methods.
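To make the training recipe in the abstract concrete, below is a minimal, hypothetical sketch: truth-observation pairs are simulated from many randomly drawn priors under one fixed forward operator, and a network conditions on a single measurement plus a permutation-invariant summary of the whole observation set. The architecture, the Gaussian priors, and the regression loss (a stand-in for the paper's ensemble inverse generative models) are illustrative assumptions, not the authors' implementation; see their repository for that.

```python
# Hypothetical sketch of amortized, ensemble-conditional training for an EIP.
# The forward operator A is used only to simulate training data; at inference
# time the trained network alone is evaluated, never the forward model.
import torch
import torch.nn as nn

d = 4                                   # dimension of truth and observation
A = torch.randn(d, d)                   # fixed forward operator (never seen by the net)
noise_std = 0.1

class EnsembleConditionalNet(nn.Module):
    def __init__(self, d, hidden=128):
        super().__init__()
        # input: one observation, plus ensemble mean and std as context
        self.net = nn.Sequential(
            nn.Linear(3 * d, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, d),
        )

    def forward(self, y, y_set):
        # permutation-invariant summary of the observation ensemble
        summary = torch.cat([y_set.mean(0), y_set.std(0)])
        ctx = summary.expand(y.shape[0], -1)
        return self.net(torch.cat([y, ctx], dim=1))

model = EnsembleConditionalNet(d)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    # a fresh random prior each step: Gaussian with random mean and scale
    mu, scale = 3 * torch.randn(d), torch.rand(d) + 0.5
    x = mu + scale * torch.randn(256, d)            # truth ensemble from this prior
    y = x @ A.T + noise_std * torch.randn(256, d)   # pushforward observations
    x_hat = model(y, y)                             # condition on y and its ensemble
    loss = ((x_hat - x) ** 2).mean()                # regression stand-in for a generative loss
    opt.zero_grad(); loss.backward(); opt.step()
```

Training across many priors is what lets the network absorb the likelihood implicitly, and the ensemble summary is what lets it adapt to an unseen prior at test time.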
Related papers
- Unifying and extending Diffusion Models through PDEs for solving Inverse Problems [3.1225172236361165]
Diffusion models have emerged as powerful generative tools with applications in computer vision and scientific machine learning (SciML). Traditionally, these models have been derived using principles of variational inference, denoising, statistical signal processing, and differential equations. In this study we derive diffusion models using ideas from linear partial differential equations and demonstrate that this approach has several benefits.
arXiv Detail & Related papers (2025-04-10T04:07:36Z)
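As background for the PDE viewpoint (a standard fact about diffusion models, not a claim specific to this paper): for a forward SDE $\mathrm{d}x_t = f(x_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w_t$, the marginal density $p_t$ satisfies the linear Fokker–Planck equation

$$\partial_t p_t(x) = -\nabla \cdot \big(f(x,t)\, p_t(x)\big) + \tfrac{1}{2}\, g(t)^2\, \Delta p_t(x),$$

which reduces to a drift–diffusion (heat-like) equation for the common variance-preserving choice $f(x,t) = -\tfrac{1}{2}\beta(t)\, x$, $g(t) = \sqrt{\beta(t)}$.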
- Amortized In-Context Bayesian Posterior Estimation [15.714462115687096]
Amortization, through conditional estimation, is a viable strategy to alleviate the computational cost of repeated Bayesian posterior inference. We conduct a thorough comparative analysis of amortized in-context Bayesian posterior estimation methods. We highlight the superiority of the reverse KL estimator for predictive problems, especially when combined with the transformer architecture and normalizing flows.
arXiv Detail & Related papers (2025-02-10T16:00:48Z)
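For context on the reverse KL estimator highlighted above, here is a toy, hypothetical sketch: an amortized Gaussian variational posterior on a 1-D conjugate model, minimizing E_q[log q(x|y) - log p(x, y)] with the reparameterization trick. The paper's transformer and normalizing-flow machinery is replaced by a two-layer MLP purely for illustration.

```python
# Hedged sketch (not the paper's code): amortized reverse-KL posterior fitting.
# Model: x ~ N(0, 1), y ~ N(x, obs_std^2), a 1-D conjugate toy problem.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 2))  # y -> (mu, log_sigma)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
obs_std = 0.5

def log_joint(x, y):
    # log p(x) + log p(y|x), up to additive constants
    return -0.5 * x**2 - 0.5 * ((y - x) / obs_std) ** 2

for step in range(3000):
    x_true = torch.randn(256, 1)
    y = x_true + obs_std * torch.randn(256, 1)
    mu, log_sigma = net(y).chunk(2, dim=1)
    x = mu + log_sigma.exp() * torch.randn_like(mu)   # reparameterized sample
    log_q = -0.5 * ((x - mu) / log_sigma.exp()) ** 2 - log_sigma
    loss = (log_q - log_joint(x, y)).mean()           # reverse KL up to a constant
    opt.zero_grad(); loss.backward(); opt.step()

# Sanity check: the exact posterior here is
# N(y / (1 + obs_std**2), obs_std**2 / (1 + obs_std**2)).
```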
- Sparse Bayesian Generative Modeling for Compressive Sensing [8.666730973498625]
This work addresses the fundamental linear inverse problem in compressive sensing (CS) by introducing a new type of regularizing generative prior.
We support our approach theoretically through the concept of variational inference and validate it empirically using different types of compressible signals.
arXiv Detail & Related papers (2024-11-14T14:37:47Z)
- Amortized Posterior Sampling with Diffusion Prior Distillation [55.03585818289934]
Amortized Posterior Sampling is a novel variational inference approach for efficient posterior sampling in inverse problems. Our method trains a conditional flow model to minimize the divergence between the variational distribution and the posterior distribution implicitly defined by the diffusion model. Unlike existing methods, our approach is unsupervised, requires no paired training data, and is applicable to both Euclidean and non-Euclidean domains.
arXiv Detail & Related papers (2024-07-25T09:53:12Z)
- Bayesian Circular Regression with von Mises Quasi-Processes [57.88921637944379]
In this work we explore a family of expressive and interpretable distributions over circle-valued random functions. For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Gibbs sampling. We present experiments applying this model to the prediction of wind directions and the percentage of the running gait cycle as a function of joint angles.
arXiv Detail & Related papers (2024-06-19T01:57:21Z)
- Solving Inverse Problems with Model Mismatch using Untrained Neural Networks within Model-based Architectures [14.551812310439004]
We introduce an untrained forward model residual block within the model-based architecture to enforce data consistency in the measurement domain for each instance.
Our approach offers a unified solution that is less parameter-sensitive, requires no additional data, and enables simultaneous fitting of the forward model and reconstruction in a single pass.
arXiv Detail & Related papers (2024-03-07T19:02:13Z)
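A hypothetical sketch of the core idea (not the paper's architecture): jointly optimize a reconstruction and a small untrained residual network so that the corrected forward model reproduces the measurement, with a penalty keeping the correction small.

```python
# Hedged sketch: per-instance fitting of an untrained residual correction to a
# mismatched forward model. A0 is our (wrong) assumed operator; the data were
# generated by a perturbed operator A_true that we never see directly.
import torch
import torch.nn as nn

d = 16
A0 = torch.randn(d, d) / d**0.5            # assumed forward model
A_true = A0 + 0.05 * torch.randn(d, d)     # unknown mismatch
x_true = torch.randn(d)
y = A_true @ x_true                        # observed measurement

x = torch.zeros(d, requires_grad=True)     # reconstruction variable
residual = nn.Sequential(nn.Linear(d, 32), nn.Tanh(), nn.Linear(32, d))  # untrained, fit per instance
opt = torch.optim.Adam([{"params": [x]},
                        {"params": residual.parameters(), "lr": 1e-3}], lr=1e-2)

for step in range(2000):
    y_pred = A0 @ x + residual(x)          # forward model plus learned correction
    # measurement-domain data consistency, with a penalty keeping the
    # correction small so it cannot absorb the whole reconstruction
    loss = ((y_pred - y) ** 2).mean() + 1e-3 * residual(x).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```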
- Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Building on this, we propose to improve recent methods by using a more principled covariance determined by maximum likelihood estimation.
arXiv Detail & Related papers (2024-02-03T13:35:39Z)
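For intuition about what a principled posterior covariance looks like, here is a background sketch of the closed-form linear-Gaussian case; these are standard textbook formulas, not the paper's diffusion-specific estimator.

```python
# Background sketch: for y = A x + n with n ~ N(0, s^2 I) and x ~ N(mu0, Sigma0),
# the posterior is Gaussian with a closed-form covariance, the kind of
# "principled covariance" that posterior-sampling heuristics try to approximate.
import numpy as np

rng = np.random.default_rng(0)
d, m, s = 8, 12, 0.1
A = rng.standard_normal((m, d))
mu0, Sigma0 = np.zeros(d), np.eye(d)

x = rng.multivariate_normal(mu0, Sigma0)
y = A @ x + s * rng.standard_normal(m)

prec = np.linalg.inv(Sigma0) + (A.T @ A) / s**2   # posterior precision
Sigma_post = np.linalg.inv(prec)                  # posterior covariance
mu_post = Sigma_post @ (np.linalg.inv(Sigma0) @ mu0 + A.T @ y / s**2)
print(mu_post, np.sqrt(np.diag(Sigma_post)))      # mean and per-coordinate std
```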
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is, however, challenging in diffusion models, since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected stochastic differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
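A toy illustration of the reflection constraint (not the paper's code): a forward noising walk on [0, 1] whose Euler steps are reflected at the boundary, so samples never leave the support of the data.

```python
# Toy sketch: Euler-Maruyama steps with boundary reflection on [0, 1].
# Reflecting (rather than clipping) keeps the noise well-defined on the support,
# the constraint Reflected Diffusion Models build their forward process around.
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 10_000, 1_000, 1e-3
x = rng.uniform(0.2, 0.8, size=n)      # data supported inside [0, 1]

for _ in range(steps):
    x = x + np.sqrt(dt) * rng.standard_normal(n)   # pure-noise forward step
    # reflect across whichever boundary was crossed (valid for small steps)
    x = np.abs(x)                  # reflect at 0
    x = 1.0 - np.abs(1.0 - x)      # reflect at 1
print(x.min(), x.max())            # stays within [0, 1]
```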
- Mixture Manifold Networks: A Computationally Efficient Baseline for Inverse Modeling [7.891408798179181]
We propose a new method for generic inverse problems and demonstrate its efficacy.
Recent work has shown impressive results using deep learning, but we note a trade-off between model performance and computational time.
arXiv Detail & Related papers (2022-11-25T20:18:07Z)
- JPEG Artifact Correction using Denoising Diffusion Restoration Models [110.1244240726802]
We build upon Denoising Diffusion Restoration Models (DDRM) and propose a method for solving some non-linear inverse problems.
We leverage the pseudo-inverse operator used in DDRM and generalize this concept to other measurement operators.
arXiv Detail & Related papers (2022-09-23T23:47:00Z)
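A minimal sketch of the pseudo-inverse building block (a hypothetical dense linear operator rather than DDRM's structured JPEG operator): the SVD yields the Moore-Penrose pseudo-inverse that maps measurements back toward signal space.

```python
# Minimal sketch: pseudo-inverse of a linear measurement operator via SVD,
# the primitive that DDRM-style methods generalize to other operators.
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 10))              # wide (undersampling) operator
U, s, Vt = np.linalg.svd(H, full_matrices=False)
H_pinv = Vt.T @ np.diag(1.0 / s) @ U.T        # Moore-Penrose pseudo-inverse

x = rng.standard_normal(10)
y = H @ x
x_back = H_pinv @ y                           # measurement-consistent back-projection
print(np.allclose(H @ x_back, y))             # True up to float error
```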
- Model Fusion with Kullback-Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
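The simplest instance of the fusion idea, as a hedged 1-D illustration: multiplying Gaussian approximate posteriors and renormalizing gives precision-weighted averaging. A full treatment would also divide out the shared prior, which is omitted here for brevity.

```python
# Hedged illustration (not the paper's algorithm): fusing 1-D Gaussian
# approximate posteriors from separate datasets by precision-weighted averaging.
import numpy as np

# per-dataset posterior means and variances for a shared scalar parameter
mus = np.array([0.8, 1.1, 0.95])
vars_ = np.array([0.10, 0.30, 0.20])

prec = 1.0 / vars_
fused_var = 1.0 / prec.sum()                # fused precision is the sum
fused_mu = fused_var * (prec * mus).sum()   # precision-weighted mean
print(fused_mu, fused_var)
```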