Provable Mixed-Noise Learning with Flow-Matching
- URL: http://arxiv.org/abs/2508.18122v2
- Date: Thu, 16 Oct 2025 13:17:05 GMT
- Title: Provable Mixed-Noise Learning with Flow-Matching
- Authors: Paul Hagemann, Robert Gruhlke, Bernhard Stankewitz, Claudia Schillings, Gabriele Steidl
- Abstract summary: We study inverse problems with mixed noise, modeled as a combination of additive and multiplicative Gaussian components. Motivated by recent advances in flow-based generative modeling, we propose a novel inference framework based on conditional flow matching embedded within an Expectation-Maximization (EM) algorithm.
- Score: 6.687571760931077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study Bayesian inverse problems with mixed noise, modeled as a combination of additive and multiplicative Gaussian components. While traditional inference methods often assume fixed or known noise characteristics, real-world applications, particularly in physics and chemistry, frequently involve noise with unknown and heterogeneous structure. Motivated by recent advances in flow-based generative modeling, we propose a novel inference framework based on conditional flow matching embedded within an Expectation-Maximization (EM) algorithm to jointly estimate posterior samplers and noise parameters. To enable high-dimensional inference and improve scalability, we use simulation-free ODE-based flow matching as the generative model in the E-step of the EM algorithm. We prove that, under suitable assumptions, the EM updates converge to the true noise parameters in the population limit of infinite observations. Our numerical results illustrate the effectiveness of combining EM inference with flow matching for mixed-noise Bayesian inverse problems.
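The abstract does not spell out the exact noise parameterization, but one natural form of such a mixed model is y = f(x)·(1 + a·ε₁) + b·ε₂ with ε₁, ε₂ ~ N(0, 1), so that y | x ~ N(f(x), a²f(x)² + b²). The sketch below illustrates the EM scheme described in the abstract under that assumption: the E-step draws posterior samples of x given y and the current noise parameters, and the M-step re-estimates (a, b) from those samples. A plain importance sampler from the prior stands in for the paper's conditional flow-matching posterior sampler, and all names (f, posterior_samples, m_step, the toy forward operator) are illustrative rather than taken from the paper.

```python
# Hedged sketch of the EM scheme from the abstract (not the authors' code).
# Assumed mixed-noise model:  y = f(x) * (1 + a*eps1) + b*eps2,
# with eps1, eps2 ~ N(0, 1), hence  y | x ~ N(f(x), a^2 f(x)^2 + b^2).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def f(x):
    """Toy forward operator (illustrative stand-in for the physical model)."""
    return x**2 + 1.0

# Synthetic observations from ground-truth noise parameters.
a_true, b_true, n_obs = 0.2, 0.3, 500
x_true = rng.normal(size=n_obs)                       # standard normal prior
y = f(x_true) * (1 + a_true * rng.normal(size=n_obs)) + b_true * rng.normal(size=n_obs)

def posterior_samples(y_i, a, b, n=256):
    """E-step stand-in: self-normalized importance sampling from the prior.
    The paper instead trains a conditional flow-matching sampler here."""
    x = rng.normal(size=n)
    var = a**2 * f(x)**2 + b**2 + 1e-12
    logw = -0.5 * (y_i - f(x))**2 / var - 0.5 * np.log(var)
    w = np.exp(logw - logw.max())
    return x, w / w.sum()

def m_step(y, X, W, theta0):
    """M-step: maximize the expected complete-data log-likelihood over (a, b)."""
    def neg_q(theta):
        a, b = theta
        var = a**2 * f(X)**2 + b**2 + 1e-12
        ll = -0.5 * (y[:, None] - f(X))**2 / var - 0.5 * np.log(var)
        return -(W * ll).sum()
    return minimize(neg_q, theta0, method="Nelder-Mead").x

# EM loop: alternate posterior sampling (E) and noise-parameter updates (M).
a_hat, b_hat = 1.0, 1.0
for it in range(10):
    pairs = [posterior_samples(y_i, a_hat, b_hat) for y_i in y]
    X = np.stack([x for x, _ in pairs])               # (n_obs, n) samples
    W = np.stack([w for _, w in pairs])               # matching weights
    a_hat, b_hat = np.abs(m_step(y, X, W, (a_hat, b_hat)))
    print(f"iter {it}: a_hat={a_hat:.3f}, b_hat={b_hat:.3f} (true: {a_true}, {b_true})")
```

In the paper's actual method, the E-step stand-in is replaced by a simulation-free, ODE-based conditional flow-matching model trained to sample x given y; the abstract's guarantee states that the resulting EM updates converge to the true noise parameters in the population limit of infinitely many observations.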
Related papers
- Mitigating the Noise Shift for Denoising Generative Models via Noise Awareness Guidance [54.88271057438763]
Noise Awareness Guidance (NAG) is a correction method that explicitly steers sampling trajectories to remain consistent with the pre-defined noise schedule. NAG consistently mitigates noise shift and substantially improves the generation quality of mainstream diffusion models.
arXiv Detail & Related papers (2025-10-14T13:31:34Z) - ConDiSim: Conditional Diffusion Models for Simulation Based Inference [2.1493648495606354]
ConDiSim is a conditional diffusion model for simulation-based inference of complex systems with intractable likelihoods. It is evaluated across ten benchmark problems and two real-world test problems, where it demonstrates effective posterior approximation accuracy.
arXiv Detail & Related papers (2025-05-13T09:58:23Z) - Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control [26.195547996552406]
We cast reward fine-tuning as stochastic optimal control (SOC) for dynamical generative models that produce samples through an iterative process. We find that our approach significantly improves over existing methods for reward fine-tuning, achieving better consistency, realism, and generalization to unseen human preference reward models.
arXiv Detail & Related papers (2024-09-13T14:22:14Z) - Bayesian Inference of General Noise Model Parameters from Surface Code's Syndrome Statistics [0.0]
We propose Bayesian inference methods for general noise model parameters that integrate a tensor-network simulator of the surface code.
For stationary noise, where the noise parameters are constant in time, we propose a method based on Markov chain Monte Carlo.
For time-varying noise, which is a more realistic situation, we introduce another method based on sequential Monte Carlo.
arXiv Detail & Related papers (2024-06-13T10:26:04Z) - Blue noise for diffusion models [50.99852321110366]
We introduce a novel and general class of diffusion models taking correlated noise within and across images into account.
Our framework allows introducing correlation across images within a single mini-batch to improve gradient flow.
We perform both qualitative and quantitative evaluations on a variety of datasets using our method.
arXiv Detail & Related papers (2024-02-07T14:59:25Z) - Mixed Noise and Posterior Estimation with Conditional DeepGEM [1.1650821883155187]
We develop a novel algorithm for jointly estimating the posterior and the noise parameters in inverse problems.
We show that our model is able to incorporate information from many measurements, unlike previous approaches.
arXiv Detail & Related papers (2024-02-05T12:42:21Z) - DiffusionAD: Norm-guided One-step Denoising Diffusion for Anomaly Detection [80.20339155618612]
DiffusionAD is a novel anomaly detection pipeline comprising a reconstruction sub-network and a segmentation sub-network. A rapid one-step denoising paradigm achieves hundreds of times acceleration while preserving comparable reconstruction quality. Considering the diversity in the manifestation of anomalies, we propose a norm-guided paradigm to integrate the benefits of multiple noise scales.
arXiv Detail & Related papers (2023-03-15T16:14:06Z) - Diffusion Posterior Sampling for General Noisy Inverse Problems [50.873313752797124]
We extend diffusion solvers to handle noisy (non)linear inverse problems via approximation of the posterior sampling.
Our method demonstrates that diffusion models can incorporate various measurement noise statistics.
arXiv Detail & Related papers (2022-09-29T11:12:27Z) - Multiview point cloud registration with anisotropic and space-varying localization noise [1.5499426028105903]
We address the problem of registering multiple point clouds corrupted with highly anisotropic localization noise.
Existing methods are based on an implicit assumption of space-invariant isotropic noise.
We show that our noise handling strategy significantly improves robustness to high levels of anisotropic noise.
arXiv Detail & Related papers (2022-01-03T15:21:24Z) - Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z) - Shape Matters: Understanding the Implicit Bias of the Noise Covariance [76.54300276636982]
Noise in gradient descent provides a crucial implicit regularization effect for training overparameterized models.
We show that parameter-dependent noise -- induced by mini-batches or label perturbation -- is far more effective than Gaussian noise.
Our analysis reveals that parameter-dependent noise introduces a bias towards local minima with smaller noise variance, whereas spherical Gaussian noise does not.
arXiv Detail & Related papers (2020-06-15T18:31:02Z) - Solving Inverse Problems with a Flow-based Noise Model [100.18560761392692]
We study image inverse problems with a normalizing flow prior.
Our formulation views the solution as the maximum a posteriori estimate of the image conditioned on the measurements.
We empirically validate the efficacy of our method on various inverse problems, including compressed sensing with quantized measurements and denoising with highly structured noise patterns.
arXiv Detail & Related papers (2020-03-18T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.