Decoupled Diffusion Sampling for Inverse Problems on Function Spaces
- URL: http://arxiv.org/abs/2601.23280v1
- Date: Fri, 30 Jan 2026 18:54:49 GMT
- Title: Decoupled Diffusion Sampling for Inverse Problems on Function Spaces
- Authors: Thomas Y. L. Lin, Jiachen Yao, Lufang Chiang, Julius Berner, Anima Anandkumar
- Abstract summary: Existing plug-and-play diffusion posterior samplers represent physics implicitly through joint coefficient-solution modeling. We propose a physics-aware generative framework in function space for inverse PDE problems. Our Decoupled Diffusion Inverse Solver (DDIS) employs a decoupled design: an unconditional diffusion learns the coefficient prior, while a neural operator explicitly models the forward PDE for guidance.
- Score: 73.52103661482242
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a data-efficient, physics-aware generative framework in function space for inverse PDE problems. Existing plug-and-play diffusion posterior samplers represent physics implicitly through joint coefficient-solution modeling, requiring substantial paired supervision. In contrast, our Decoupled Diffusion Inverse Solver (DDIS) employs a decoupled design: an unconditional diffusion learns the coefficient prior, while a neural operator explicitly models the forward PDE for guidance. This decoupling enables superior data efficiency and effective physics-informed learning, while naturally supporting Decoupled Annealing Posterior Sampling (DAPS) to avoid over-smoothing in Diffusion Posterior Sampling (DPS). Theoretically, we prove that DDIS avoids the guidance attenuation failure of joint models when training data is scarce. Empirically, DDIS achieves state-of-the-art performance under sparse observation, improving $l_2$ error by 11% and spectral error by 54% on average; when data is limited to 1%, DDIS maintains accuracy with 40% advantage in $l_2$ error compared to joint models.
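The decoupled guidance idea lends itself to a compact illustration. Below is a minimal, hedged sketch (not the authors' code): `ScoreNet` stands in for the unconditional coefficient-prior diffusion, `ForwardOperator` for the neural-operator PDE surrogate, and the sampler applies a DPS-style step in which the likelihood gradient flows through the surrogate at a Tweedie estimate of the clean coefficient. All names, schedules, and scales are illustrative assumptions.

```python
# Hedged sketch of DDIS-style decoupled guidance (illustrative, not the paper's code).
# An unconditional diffusion model supplies the coefficient prior; a neural
# operator surrogate G supplies the physics via a data-misfit gradient.
import torch
import torch.nn as nn

class ScoreNet(nn.Module):            # stand-in for the learned prior score
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(),
                                 nn.Linear(128, dim))
    def forward(self, x, t):
        return self.net(torch.cat([x, t.expand(x.shape[0], 1)], dim=-1))

class ForwardOperator(nn.Module):     # stand-in for the PDE surrogate G
    def __init__(self, dim, obs_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.SiLU(),
                                 nn.Linear(128, obs_dim))
    def forward(self, a):
        return self.net(a)

def ddis_sample(score, G, y_obs, dim, steps=100, guidance_scale=1.0):
    """DPS-style posterior sampling with a decoupled prior and physics model."""
    x = torch.randn(1, dim)
    ts = torch.linspace(1.0, 1e-3, steps)
    for i in range(steps - 1):
        t, dt = ts[i], ts[i] - ts[i + 1]
        x = x.detach().requires_grad_(True)
        s = score(x, t.view(1, 1))                 # prior score estimate
        sigma2 = t                                  # toy variance schedule
        x0_hat = x + sigma2 * s                     # Tweedie estimate of x_0
        misfit = ((G(x0_hat) - y_obs) ** 2).sum()   # physics-based likelihood
        grad = torch.autograd.grad(misfit, x)[0]
        # reverse step: prior drift plus guidance toward the observations
        x = (x + dt * s - guidance_scale * grad
             + dt.sqrt() * torch.randn_like(x)).detach()
    return x

dim, obs_dim = 64, 8
x_post = ddis_sample(ScoreNet(dim), ForwardOperator(dim, obs_dim),
                     torch.randn(1, obs_dim), dim)
print(x_post.shape)
```

A DAPS-style variant would replace the single gradient step with an annealed resampling around the Tweedie estimate, but the decoupling of prior and physics model stays the same.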
Related papers
- Function-Space Decoupled Diffusion for Forward and Inverse Modeling in Carbon Capture and Storage [65.51149575007149]
We present Fun-DDPS, a generative framework that combines function-space diffusion models with differentiable neural operator surrogates for both forward and inverse modeling. Fun-DDPS produces physically consistent realizations free from the high-frequency artifacts observed in joint-state baselines.
arXiv Detail & Related papers (2026-02-12T18:58:12Z)
- Compositional Generation for Long-Horizon Coupled PDEs [0.764671395172401]
We study compositional diffusion approaches in which diffusion models are trained only on the decoupled PDE data. We investigate whether the compositional strategy remains feasible over long time horizons involving a large number of time steps, and show that compositional diffusion is a viable strategy for efficient, long-horizon modeling of coupled PDEs.
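To make the compositional strategy concrete, here is a hedged toy sketch (our illustration, not the paper's code): scores of models trained on decoupled subsystems are summed at sampling time, which targets the product of the learned densities. Analytic Gaussian scores stand in for trained models.

```python
# Toy compositional sampling: summing subsystem scores samples from a
# product of the learned densities (illustrative assumption of the setup).
import torch

def compositional_step(x, t, scores, step_size=1e-2):
    """One Langevin update using the sum of subsystem scores
    (t is kept for API parity with time-dependent models)."""
    s = sum(model(x, t) for model in scores)     # product-of-experts score
    return x + step_size * s + (2 * step_size) ** 0.5 * torch.randn_like(x)

# Two analytic Gaussian "subsystem" scores stand in for trained models.
score_a = lambda x, t: -(x - 1.0)                # score of N(1, 1)
score_b = lambda x, t: -(x + 1.0)                # score of N(-1, 1)

x = torch.randn(1000, 1)
for t in torch.linspace(1.0, 0.0, 200):
    x = compositional_step(x, t, [score_a, score_b])
print(x.mean().item())   # product density N(0, 1/2): mean near 0
```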
arXiv Detail & Related papers (2025-10-23T02:35:25Z)
- DS-Diffusion: Data Style-Guided Diffusion Model for Time-Series Generation [3.7098771725459336]
We propose a data style-guided diffusion model (DS-Diffusion) for time-series generation tasks. DS-Diffusion introduces conditional guidance without retraining the entire framework, and the generated samples clearly indicate the data style from which they originate.
arXiv Detail & Related papers (2025-09-23T03:06:39Z)
- Efficient Federated Learning with Heterogeneous Data and Adaptive Dropout [62.73150122809138]
Federated Learning (FL) is a promising distributed machine learning approach that enables collaborative training of a global model across multiple edge devices. We propose the FedDHAD FL framework, which comes with two novel methods: Dynamic Heterogeneous model aggregation (FedDH) and Adaptive Dropout (FedAD). The combination of these two methods makes FedDHAD significantly outperform state-of-the-art solutions in terms of accuracy (up to 6.7% higher), efficiency (up to 2.02 times faster), and cost (up to 15.0% lower).
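The abstract does not spell out the FedDH weighting rule, so the sketch below is only a generic stand-in: a weighted aggregation of client models in which the weights react to data size and a per-client heterogeneity score. The weighting formula and helper names are our assumptions, not the paper's.

```python
# Generic dynamically weighted federated aggregation (illustrative stand-in
# for FedDH; the actual weighting rule is defined in the paper).
from typing import Dict, List
import torch

def aggregate(client_models: List[Dict[str, torch.Tensor]],
              n_samples: List[int],
              het_scores: List[float]) -> Dict[str, torch.Tensor]:
    """Weighted average of client state_dicts; more data and lower
    heterogeneity yield a larger weight (assumed rule)."""
    raw = [n / (1.0 + h) for n, h in zip(n_samples, het_scores)]
    weights = [w / sum(raw) for w in raw]
    agg = {k: torch.zeros_like(v) for k, v in client_models[0].items()}
    for w, sd in zip(weights, client_models):
        for k in agg:
            agg[k] += w * sd[k]
    return agg

m1, m2 = {"w": torch.ones(2)}, {"w": torch.zeros(2)}
print(aggregate([m1, m2], n_samples=[100, 300], het_scores=[0.0, 1.0]))
```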
arXiv Detail & Related papers (2025-07-14T16:19:00Z)
- Guided Diffusion Sampling on Function Spaces with Applications to PDEs [112.09025802445329]
We propose a general framework for conditional sampling in PDE-based inverse problems. This is accomplished by a function-space diffusion model and plug-and-play guidance for conditioning. Our method achieves an average 32% accuracy improvement over state-of-the-art fixed-resolution diffusion baselines.
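The plug-and-play conditioning mechanism is commonly written as follows (standard DPS-style form for a variance-exploding model; textbook notation, not a formula quoted from the paper): the conditional score is the learned prior score plus a likelihood term evaluated at the Tweedie estimate of the clean sample.

```latex
\nabla_{x_t}\log p_t(x_t \mid y)\;\approx\; s_\theta(x_t,t)
  \;+\; \nabla_{x_t}\log p\bigl(y \mid \hat{x}_0(x_t)\bigr),
\qquad
\hat{x}_0(x_t) \;=\; x_t + \sigma_t^{2}\, s_\theta(x_t,t).
```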
arXiv Detail & Related papers (2025-05-22T17:58:12Z)
- ADT: Tuning Diffusion Models with Adversarial Supervision [16.974169058917443]
Diffusion models have achieved outstanding image generation by reversing a forward noising process to approximate true data distributions. We propose Adversarial Diffusion Tuning (ADT) to simulate the inference process during optimization and align the final outputs with the training data. ADT features a siamese-network discriminator with a fixed pre-trained backbone and lightweight trainable parameters.
arXiv Detail & Related papers (2025-04-15T17:37:50Z)
- Identifying Drift, Diffusion, and Causal Structure from Temporal Snapshots [12.57987355677862]
APPEX is an iterative algorithm designed to estimate the drift, diffusion, and causal graph of an additive noise SDE, solely from temporal marginals. We show that APPEX iteratively decreases Kullback-Leibler divergence to the true solution, and demonstrate its effectiveness on simulated data from linear additive noise SDEs.
arXiv Detail & Related papers (2024-10-30T06:28:21Z)
- Denoising diffusion probabilistic models are optimally adaptive to unknown low dimensionality [21.10158431913811]
We investigate how the DDPM can achieve sampling speed-ups through automatic exploitation of intrinsic low dimensionality of data.
We prove that the iteration complexity of the DDPM scales nearly linearly with the intrinsic dimension $k$, which is optimal when using KL divergence to measure distributional discrepancy.
arXiv Detail & Related papers (2024-10-24T14:36:12Z)
- DiffPuter: Empowering Diffusion Models for Missing Data Imputation [56.48119008663155]
This paper introduces DiffPuter, a tailored diffusion model combined with the Expectation-Maximization (EM) algorithm for missing data imputation. Our theoretical analysis shows that DiffPuter's training step corresponds to the maximum likelihood estimation of data density. Our experiments show that DiffPuter achieves an average improvement of 6.94% in MAE and 4.78% in RMSE compared to the most competitive existing method.
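The EM loop described here can be sketched in a few lines. In this hedged toy (our illustration, not DiffPuter's code), a Gaussian density estimate replaces the diffusion model so the example stays self-contained: the M-step fits the density to the current completed data (maximum likelihood), and the E-step redraws the missing entries from the fitted model.

```python
# Toy EM-style imputation loop; the Gaussian fit stands in for diffusion
# training and the marginal resampling for conditional diffusion sampling.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal([0.0, 2.0], 1.0, size=(500, 2))   # ground-truth data
mask = rng.random(X.shape) < 0.3                  # True where value is missing
X_imp = np.where(mask, 0.0, X)                    # crude initial fill

for _ in range(20):
    # M-step: maximum-likelihood density fit on completed data.
    mu, sigma = X_imp.mean(axis=0), X_imp.std(axis=0) + 1e-6
    # E-step: resample missing entries from the fitted model.
    X_imp = np.where(mask, rng.normal(mu, sigma, size=X.shape), X)

print(np.abs(X_imp[mask] - X[mask]).mean())       # MAE on imputed entries
```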
arXiv Detail & Related papers (2024-05-31T08:35:56Z)
- Semi-Implicit Denoising Diffusion Models (SIDDMs) [50.30163684539586]
Existing models such as Denoising Diffusion Probabilistic Models (DDPM) deliver high-quality, diverse samples but are slowed by an inherently high number of iterative steps.
We introduce a novel approach that tackles the problem by matching implicit and explicit factors.
We demonstrate that our proposed method obtains comparable generative performance to diffusion-based models and vastly superior results to models with a small number of sampling steps.
arXiv Detail & Related papers (2023-06-21T18:49:22Z)
- Exploring the Optimal Choice for Generative Processes in Diffusion Models: Ordinary vs Stochastic Differential Equations [6.2284442126065525]
We study the problem mathematically for two limiting scenarios: the zero diffusion (ODE) case and the large diffusion case.
Our findings indicate that when the perturbation occurs at the end of the generative process, the ODE model outperforms the SDE model with a large diffusion coefficient.
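For reference, the two limiting generative processes can be written in standard score-based notation (the usual Song et al. formulation, not notation taken from this paper); the reverse-time SDE and its deterministic probability-flow ODE counterpart share the same marginals $p_t$:

```latex
\mathrm{d}x = \left[f(x,t) - g(t)^2\,\nabla_x \log p_t(x)\right]\mathrm{d}t
            + g(t)\,\mathrm{d}\bar{W}_t
\quad\text{(reverse SDE)},
\qquad
\mathrm{d}x = \left[f(x,t) - \tfrac{1}{2}\,g(t)^2\,\nabla_x \log p_t(x)\right]\mathrm{d}t
\quad\text{(probability-flow ODE)}.
```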
arXiv Detail & Related papers (2023-06-03T09:27:15Z)