AST-n: A Fast Sampling Approach for Low-Dose CT Reconstruction using Diffusion Models
- URL: http://arxiv.org/abs/2508.09943v1
- Date: Wed, 13 Aug 2025 16:57:49 GMT
- Title: AST-n: A Fast Sampling Approach for Low-Dose CT Reconstruction using Diffusion Models
- Authors: Tomás de la Sotta, José M. Saavedra, Héctor Henríquez, Violeta Chang, Aline Xavier
- Abstract summary: AST-n is an accelerated inference framework that initiates reverse diffusion from intermediate noise levels. Conditioned models using only 25 steps (AST-25) achieve a peak signal-to-noise ratio (PSNR) above 38 dB. AST-n with high-order samplers enables rapid LDCT reconstruction without significant loss of image fidelity.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low-dose CT (LDCT) protocols reduce radiation exposure but increase image noise, compromising diagnostic confidence. Diffusion-based generative models have shown promise for LDCT denoising by learning image priors and performing iterative refinement. In this work, we introduce AST-n, an accelerated inference framework that initiates reverse diffusion from intermediate noise levels, and integrate high-order ODE solvers within conditioned models to further reduce sampling steps. We evaluate two acceleration paradigms, AST-n sampling and standard scheduling with high-order solvers, on the Low Dose CT Grand Challenge dataset, covering head, abdominal, and chest scans at 10-25% of standard dose. Conditioned models using only 25 steps (AST-25) achieve a peak signal-to-noise ratio (PSNR) above 38 dB and a structural similarity index (SSIM) above 0.95, closely matching standard baselines while cutting inference time from roughly 16 s to under 1 s per slice. Unconditional sampling suffers substantial quality loss, underscoring the necessity of conditioning. We also assess DDIM inversion, which yields marginal PSNR gains at the cost of doubling inference time, limiting its clinical practicality. Our results demonstrate that AST-n with high-order samplers enables rapid LDCT reconstruction without significant loss of image fidelity, advancing the feasibility of diffusion-based methods in clinical workflows.
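The central mechanism, starting the reverse trajectory from an intermediate noise level reached by forward-diffusing the LDCT input rather than from pure Gaussian noise, can be sketched with a toy DDIM-style loop. The linear schedule, `t_start`, and the placeholder `eps_model` below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def make_alphas_bar(T=1000, beta_min=1e-4, beta_max=0.02):
    """Cumulative-product noise schedule of a standard DDPM (linear betas)."""
    betas = np.linspace(beta_min, beta_max, T)
    return np.cumprod(1.0 - betas)

def ast_n_sample(x_ldct, eps_model, alphas_bar, t_start=300, n_steps=25, seed=0):
    """AST-n-style sketch: jump into the chain at intermediate level t_start by
    forward-diffusing the noisy LDCT slice, then run only n_steps of a
    deterministic DDIM-like reverse trajectory instead of the full T steps."""
    rng = np.random.default_rng(seed)
    ab = alphas_bar[t_start]
    # Closed-form forward diffusion of the LDCT slice to level t_start.
    x = np.sqrt(ab) * x_ldct + np.sqrt(1.0 - ab) * rng.standard_normal(x_ldct.shape)
    ts = np.linspace(t_start, 0, n_steps + 1).astype(int)
    for t, t_prev in zip(ts[:-1], ts[1:]):
        ab_t, ab_prev = alphas_bar[t], alphas_bar[t_prev]
        eps = eps_model(x, t)                                     # predicted noise
        x0 = (x - np.sqrt(1.0 - ab_t) * eps) / np.sqrt(ab_t)      # predicted clean slice
        x = np.sqrt(ab_prev) * x0 + np.sqrt(1.0 - ab_prev) * eps  # deterministic DDIM step
    return x
```

With a trained conditional noise predictor in place of `eps_model`, shrinking `t_start` and `n_steps` is the lever that trades fidelity against the reported per-slice speedup from roughly 16 s to under 1 s.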
Related papers
- MAN: Latent Diffusion Enhanced Multistage Anti-Noise Network for Efficient and High-Quality Low-Dose CT Image Denoising [8.912550844312177]
We introduce MAN, a Latent Diffusion Enhanced Multistage Anti-Noise Network for efficient, high-quality low-dose CT image denoising. Our method operates in a compressed latent space via a perceptually optimized autoencoder. Our work demonstrates a practical path forward for advanced generative models in medical imaging.
arXiv Detail & Related papers (2025-09-28T03:13:39Z) - PWD: Prior-Guided and Wavelet-Enhanced Diffusion Model for Limited-Angle CT [6.532073662427578]
We propose PWD, a prior-guided and wavelet-enhanced fast-sampling diffusion model for limited-angle CT (LACT) reconstruction. PWD enables efficient sampling while preserving reconstruction fidelity in LACT. Using only 50 sampling steps, PWD achieves at least a 1.7 dB improvement in PSNR and a 10% gain in SSIM.
arXiv Detail & Related papers (2025-06-30T08:28:32Z) - Super-Resolution Optical Coherence Tomography Using Diffusion Model-Based Plug-and-Play Priors [6.457037057474951]
We propose an OCT super-resolution framework based on a plug-and-play diffusion model (PnP-DM) to reconstruct high-quality images from corneal measurements. Our method formulates reconstruction as an inverse problem, combining a diffusion prior with Markov chain Monte Carlo sampling for efficient reconstruction.
arXiv Detail & Related papers (2025-05-20T21:09:26Z) - Efficient Diffusion Model for Image Restoration by Residual Shifting [63.02725947015132]
This study proposes a novel and efficient diffusion model for image restoration.
Our method avoids the need for post-acceleration during inference, thereby avoiding the associated performance deterioration.
Our method achieves superior or comparable performance to current state-of-the-art methods on three classical IR tasks.
arXiv Detail & Related papers (2024-03-12T05:06:07Z) - SCott: Accelerating Diffusion Models with Stochastic Consistency Distillation [74.32186107058382]
We propose Stochastic Consistency Distillation (SCott) to enable accelerated text-to-image generation. SCott distills the ODE-solver-based sampling process of a pre-trained teacher model into a student. On the MSCOCO-2017 5K dataset with a Stable Diffusion-V1.5 teacher, SCott achieves an FID of 21.9 with 2 sampling steps, surpassing the 1-step InstaFlow (23.4) and the 4-step UFOGen (22.1).
arXiv Detail & Related papers (2024-03-03T13:08:32Z) - Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT, in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I with sparse views degrades, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z) - ResShift: Efficient Diffusion Model for Image Super-resolution by Residual Shifting [70.83632337581034]
Diffusion-based image super-resolution (SR) methods are mainly limited by the low inference speed.
We propose a novel and efficient diffusion model for SR that significantly reduces the number of diffusion steps.
Our method constructs a Markov chain that transfers between the high-resolution image and the low-resolution image by shifting the residual.
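The residual-shifting chain described above can be sketched as a forward process whose marginal moves from the high-resolution image toward the low-resolution one. The schedule `etas` and noise scale `kappa` below are illustrative assumptions, not ResShift's exact parameterization:

```python
import numpy as np

def residual_shift_state(x_hr, y_lr, etas, t, kappa=1.0, seed=0):
    """One marginal state of a residual-shifting forward chain (ResShift-style
    sketch): instead of diffusing toward pure Gaussian noise, state t shifts a
    growing fraction eta_t of the residual e = y_lr - x_hr onto the image."""
    rng = np.random.default_rng(seed)
    e = y_lr - x_hr                       # residual between LR and HR
    mean = x_hr + etas[t] * e             # eta_0 ~ 0 (near HR), eta_T ~ 1 (near LR)
    std = kappa * np.sqrt(etas[t])        # noise magnitude grows with the shift
    return mean + std * rng.standard_normal(x_hr.shape)

# A short monotone schedule from ~0 to 1 (illustrative choice).
etas = np.linspace(0.0, 1.0, 16) ** 2
```

Because the terminal state is the LR image plus mild noise rather than pure Gaussian noise, the reverse model only has to undo the residual shift, which is why far fewer sampling steps suffice.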
arXiv Detail & Related papers (2023-07-23T15:10:02Z) - Simultaneous Image-to-Zero and Zero-to-Noise: Diffusion Models with Analytical Image Attenuation [53.04220377034574]
We propose incorporating an analytical image attenuation process into the forward diffusion process for high-quality (un)conditioned image generation. Our method represents the forward image-to-noise mapping as simultaneous image-to-zero and zero-to-noise mappings. We have conducted experiments on unconditioned image generation, e.g., CIFAR-10 and CelebA-HQ-256, and on image-conditioned downstream tasks such as super-resolution, saliency detection, edge detection, and image inpainting.
arXiv Detail & Related papers (2023-06-23T18:08:00Z) - Parallel Sampling of Diffusion Models [76.3124029406809]
Diffusion models are powerful generative models but suffer from slow sampling.
We present ParaDiGMS, a novel method to accelerate the sampling of pretrained diffusion models by denoising multiple steps in parallel.
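The parallel-denoising idea can be illustrated with a Jacobi/Picard fixed-point sweep over the whole trajectory; `step_fn` and the fixed sweep count are placeholders, not ParaDiGMS's exact algorithm (which additionally uses sliding windows and convergence tolerances):

```python
import numpy as np

def parallel_sample(x_T, step_fn, T, sweeps):
    """Fixed-point sketch of parallel sampling: keep a guess for every state on
    the reverse trajectory and refresh all of them at once. Each sweep makes at
    least one more state exact, so <= T sweeps recover the sequential result,
    and in practice far fewer sweeps get close. The T step_fn calls inside a
    sweep are mutually independent, so they can run batched on one GPU."""
    xs = [np.array(x_T, dtype=float) for _ in range(T + 1)]
    for _ in range(sweeps):
        # new state i+1 is one denoising step from the current guess of state i
        xs = [xs[0]] + [step_fn(xs[i], T - i) for i in range(T)]
    return xs[-1]
```

With a contractive toy step such as `lambda x, t: 0.5 * x`, `sweeps=T` reproduces the sequential trajectory exactly, while fewer sweeps give a progressively coarser approximation.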
arXiv Detail & Related papers (2023-05-25T17:59:42Z) - CoreDiff: Contextual Error-Modulated Generalized Diffusion Model for Low-Dose CT Denoising and Generalization [41.64072751889151]
Low-dose computed tomography (LDCT) images suffer from noise and artifacts due to photon starvation and electronic noise.
This paper presents a novel COntextual eRror-modulated gEneralized Diffusion model for low-dose CT (LDCT) denoising, termed CoreDiff.
arXiv Detail & Related papers (2023-04-04T14:13:13Z) - Low-Dose CT Using Denoising Diffusion Probabilistic Model for 20$\times$ Speedup [8.768546646716771]
We introduce the conditional denoising diffusion probabilistic model (DDPM) and show encouraging results with a high computational efficiency.
Experiments show that the accelerated DDPM can achieve 20x speedup without compromising image quality.
arXiv Detail & Related papers (2022-09-29T23:35:41Z)