Prior-Guided Residual Diffusion: Calibrated and Efficient Medical Image Segmentation
- URL: http://arxiv.org/abs/2509.01330v1
- Date: Mon, 01 Sep 2025 10:13:15 GMT
- Title: Prior-Guided Residual Diffusion: Calibrated and Efficient Medical Image Segmentation
- Authors: Fuyou Mao, Beining Wu, Yanfeng Jiang, Han Xue, Yan Tang, Hao Zhang
- Abstract summary: Prior-Guided Residual Diffusion (PGRD) is a diffusion-based framework that learns voxel-wise distributions. It is evaluated on representative MRI and CT datasets.
- Score: 11.375625987308927
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ambiguity in medical image segmentation calls for models that capture full conditional distributions rather than a single point estimate. We present Prior-Guided Residual Diffusion (PGRD), a diffusion-based framework that learns voxel-wise distributions while maintaining strong calibration and practical sampling efficiency. PGRD embeds discrete labels as one-hot targets in a continuous space to align segmentation with diffusion modeling. A coarse prior predictor provides step-wise guidance; the diffusion network then learns the residual to the prior, accelerating convergence and improving calibration. A deep diffusion supervision scheme further stabilizes training by supervising intermediate time steps. Evaluated on representative MRI and CT datasets, PGRD achieves higher Dice scores and lower NLL/ECE values than Bayesian, ensemble, Probabilistic U-Net, and vanilla diffusion baselines, while requiring fewer sampling steps to reach strong performance.
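The abstract's core training idea (embed discrete labels as one-hot targets, run the forward diffusion on them, and have the network regress the residual between the clean target and a coarse prior) can be sketched roughly as below. All names, shapes, and the MSE objective are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def one_hot_embed(labels, num_classes):
    """Embed discrete segmentation labels as one-hot targets in a
    continuous space, shaped (B, C, H, W)."""
    eye = np.eye(num_classes, dtype=np.float32)
    return np.moveaxis(eye[labels], -1, 1)

def pgrd_loss(x0, prior, t, alphas_bar, denoiser, rng):
    """One hypothetical training step: diffuse the one-hot target x0 to
    x_t, then regress the residual between x0 and the coarse prior."""
    a = alphas_bar[t].reshape(-1, 1, 1, 1)
    noise = rng.standard_normal(x0.shape).astype(np.float32)
    # Forward diffusion q(x_t | x_0) with cumulative noise schedule a.
    xt = np.sqrt(a) * x0 + np.sqrt(1.0 - a) * noise
    # The denoiser sees the noisy target and the prior, and predicts
    # the residual x0 - prior rather than x0 itself.
    residual_pred = denoiser(xt, prior, t)
    target = x0 - prior
    return float(np.mean((residual_pred - target) ** 2))
```

At sampling time the predicted residual would be added back onto the prior at each reverse step, which is consistent with the paper's claim that a good coarse prior lets the sampler reach strong performance in fewer steps.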
Related papers
- Distributional Reinforcement Learning with Diffusion Bridge Critics [57.70134665595571]
We propose a novel distributional reinforcement learning method with Diffusion Bridge Critics (DBC). DBC directly models the inverse cumulative distribution function (CDF) of the Q value. We derive an analytic integral formula to address discretization errors in DBC.
arXiv Detail & Related papers (2026-02-05T15:40:14Z) - Training-Free Distribution Adaptation for Diffusion Models via Maximum Mean Discrepancy Guidance [17.353524034156205]
MMD Guidance augments the reverse diffusion process with gradients of the Maximum Mean Discrepancy (MMD) between generated samples and a reference dataset. Our framework naturally extends to prompt-aware adaptation in conditional generation models via product kernels. Experiments on synthetic and real-world benchmarks demonstrate that MMD Guidance can achieve distributional alignment while preserving sample fidelity.
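The MMD statistic that this kind of guidance differentiates through can be sketched as below; the RBF kernel and bandwidth are assumptions for illustration, not details from the paper.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    """Gaussian RBF kernel matrix between sample sets a (n, d) and b (m, d)."""
    d2 = (np.sum(a**2, axis=1)[:, None]
          + np.sum(b**2, axis=1)[None, :]
          - 2.0 * a @ b.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Biased MMD^2 estimate between samples x and y (rows are samples).
    Guidance methods backpropagate through a statistic of this form to
    nudge generated samples toward the reference distribution."""
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())
```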
arXiv Detail & Related papers (2026-01-13T09:42:57Z) - Inference-Time Alignment for Diffusion Models via Doob's Matching [16.416975860645724]
Inference-time alignment for diffusion models aims to adapt a pre-trained diffusion model toward a target distribution without retraining the base score network. We introduce Doob's matching, a novel framework for guidance estimation grounded in Doob's $h$-transform. We prove non-asymptotic convergence guarantees for the generated distributions in the 2-Wasserstein distance.
arXiv Detail & Related papers (2026-01-10T10:28:06Z) - RDIT: Residual-based Diffusion Implicit Models for Probabilistic Time Series Forecasting [4.140149411004857]
RDIT is a plug-and-play framework that combines point estimation and residual-based conditional diffusion with a bidirectional Mamba network. We show that RDIT achieves lower CRPS, rapid inference, and improved coverage compared to strong baselines.
arXiv Detail & Related papers (2025-09-02T14:06:29Z) - UniSegDiff: Boosting Unified Lesion Segmentation via a Staged Diffusion Model [53.34835793648352]
We propose UniSegDiff, a novel diffusion model framework for lesion segmentation. UniSegDiff addresses lesion segmentation in a unified manner across multiple modalities and organs. Comprehensive experimental results demonstrate that UniSegDiff significantly outperforms previous state-of-the-art (SOTA) approaches.
arXiv Detail & Related papers (2025-07-24T12:33:10Z) - Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling [70.8832906871441]
We study how to steer generation toward desired rewards without retraining the models. Prior methods typically resample or filter within a single denoising trajectory, optimizing rewards step-by-step without trajectory-level refinement. We introduce particle Gibbs sampling for diffusion language models (PG-DLM), a novel inference-time algorithm enabling trajectory-level refinement while preserving generation perplexity.
arXiv Detail & Related papers (2025-07-11T08:00:47Z) - A Generative Framework for Causal Estimation via Importance-Weighted Diffusion Distillation [55.53426007439564]
Estimating individualized treatment effects from observational data is a central challenge in causal inference. Inverse probability weighting (IPW) is a well-established solution to this problem, but its integration into modern deep learning frameworks remains limited. We propose Importance-Weighted Diffusion Distillation (IWDD), a novel generative framework that combines the pretraining of diffusion models with importance-weighted score distillation.
arXiv Detail & Related papers (2025-05-16T17:00:52Z) - Improving Vector-Quantized Image Modeling with Latent Consistency-Matching Diffusion [55.185588994883226]
We introduce VQ-LCMD, a continuous-space latent diffusion framework within the embedding space that stabilizes training. VQ-LCMD uses a novel training objective combining the joint embedding-diffusion variational lower bound with a consistency-matching (CM) loss. Experiments show that the proposed VQ-LCMD yields superior results on FFHQ, LSUN Churches, and LSUN Bedrooms compared to discrete-state latent diffusion models.
arXiv Detail & Related papers (2024-10-18T09:12:33Z) - Channel-aware Contrastive Conditional Diffusion for Multivariate Probabilistic Time Series Forecasting [19.383395337330082]
We propose a generic channel-aware Contrastive Conditional Diffusion model entitled CCDM.
The proposed CCDM can exhibit superior forecasting capability compared to current state-of-the-art diffusion forecasters.
arXiv Detail & Related papers (2024-10-03T03:13:15Z) - Manifold Preserving Guided Diffusion [121.97907811212123]
Conditional image generation still faces challenges of cost, generalizability, and the need for task-specific training.
We propose Manifold Preserving Guided Diffusion (MPGD), a training-free conditional generation framework.
arXiv Detail & Related papers (2023-11-28T02:08:06Z) - DoseDiff: Distance-aware Diffusion Model for Dose Prediction in Radiotherapy [7.934475806787889]
We propose a distance-aware diffusion model (DoseDiff) for precise prediction of dose distribution.
The results demonstrate that our DoseDiff method outperforms state-of-the-art dose prediction methods in terms of both quantitative performance and visual quality.
arXiv Detail & Related papers (2023-06-28T15:58:53Z) - How Much is Enough? A Study on Diffusion Times in Score-based Generative Models [76.76860707897413]
Current best practice advocates for a large T to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution.
We show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process.
arXiv Detail & Related papers (2022-06-10T15:09:46Z) - Diffusion-GAN: Training GANs with Diffusion [135.24433011977874]
Generative adversarial networks (GANs) are challenging to train stably.
We propose Diffusion-GAN, a novel GAN framework that leverages a forward diffusion chain to generate instance noise.
We show that Diffusion-GAN can produce more realistic images with higher stability and data efficiency than state-of-the-art GANs.
arXiv Detail & Related papers (2022-06-05T20:45:01Z)