Tunable-Generalization Diffusion Powered by Self-Supervised Contextual Sub-Data for Low-Dose CT Reconstruction
- URL: http://arxiv.org/abs/2509.23885v2
- Date: Thu, 30 Oct 2025 12:02:27 GMT
- Title: Tunable-Generalization Diffusion Powered by Self-Supervised Contextual Sub-Data for Low-Dose CT Reconstruction
- Authors: Guoquan Wei, Liu Shi, Zekun Zhou, Wenzhe Shan, Qiegen Liu
- Abstract summary: TUnable-geneRalizatioN Diffusion (TurnDiff) is powered by self-supervised contextual sub-data for low-dose CT reconstruction. TurnDiff consistently outperforms state-of-the-art methods in both reconstruction and generalization.
- Score: 5.107409624991683
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current deep learning models for low-dose CT denoising rely heavily on paired data and generalize poorly. Even the widely studied diffusion models need to learn the distribution of clean data for reconstruction, a requirement that is difficult to satisfy in clinical practice. At the same time, self-supervised methods face significantly degraded generalization when a model pre-trained at one dose is extended to other doses. To address these issues, this work proposes TUnable-geneRalizatioN Diffusion (TurnDiff), a novel method powered by self-supervised contextual sub-data for low-dose CT reconstruction. First, a contextual sub-data self-enhancing similarity strategy is designed for denoising centered on the LDCT projection domain, providing an initial prior for the subsequent stages. This initial prior is then combined, via knowledge distillation, with a deep combination of latent diffusion models to optimize image details. The pre-trained model is used for inference reconstruction, and a pixel-level self-correcting fusion technique is proposed for fine-grained reconstruction in the image domain, using the initial prior and the LDCT image as guides to enhance image fidelity. In addition, the technique flexibly generalizes to higher and lower doses, and even to unseen doses. By cascading dual-domain strategies for self-supervised LDCT denoising, TurnDiff requires only LDCT projection-domain data for training and testing. Comprehensive evaluation on both benchmark datasets and real-world data demonstrates that TurnDiff consistently outperforms state-of-the-art methods in both reconstruction and generalization.
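The abstract does not specify how the pixel-level self-correcting fusion is computed. A minimal sketch of one plausible reading — a per-pixel convex blend of the diffusion output and the initial prior, weighted by each candidate's agreement with the LDCT guide image — might look as follows (all function and variable names are hypothetical, not from the paper):

```python
import numpy as np

def self_correcting_fusion(diff_out, prior, ldct, eps=1e-6):
    """Hypothetical per-pixel fusion: blend the diffusion output with
    the initial prior, weighting each pixel by its agreement with the
    LDCT guide image (illustrative, not the paper's exact formulation)."""
    # Per-pixel disagreement of each candidate with the LDCT guide.
    d_diff = np.abs(diff_out - ldct)
    d_prior = np.abs(prior - ldct)
    # Inverse-distance weights: the candidate closer to the guide at a
    # given pixel receives the larger weight; eps avoids division by zero.
    w_diff = 1.0 / (d_diff + eps)
    w_prior = 1.0 / (d_prior + eps)
    w = w_diff / (w_diff + w_prior)
    # Convex per-pixel combination of the two candidates.
    return w * diff_out + (1.0 - w) * prior
```

Because the weights form a convex combination at every pixel, the fused value always lies between the two candidate values, so such a fusion cannot introduce intensities outside the range spanned by the diffusion output and the prior.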
Related papers
- PLOT-CT: Pre-log Voronoi Decomposition Assisted Generation for Low-dose CT Reconstruction [16.194061272932903]
Low-dose computed tomography (LDCT) reconstruction is fundamentally challenged by severe noise and compromised data fidelity under reduced radiation exposure. We propose PLOT-CT, a novel framework for Pre-Log vOronoi decomposiTion-assisted CT generation. Our method begins by applying Voronoi decomposition to pre-log sinograms, disentangling the data into distinct underlying components, which are embedded in separate latent spaces.
arXiv Detail & Related papers (2026-02-12T06:20:23Z) - FoundDiff: Foundational Diffusion Model for Generalizable Low-Dose CT Denoising [55.04342933312839]
We propose FoundDiff, a foundational diffusion model for unified and generalizable low-dose computed tomography (CT) denoising. FoundDiff employs a two-stage strategy: (i) dose-anatomy perception and (ii) adaptive denoising. First, we develop a dose- and anatomy-aware contrastive language-image pre-training model (DA-CLIP) to achieve robust dose and anatomy perception. Second, we design a dose- and anatomy-aware diffusion model (DA-Diff) to perform adaptive and generalizable denoising.
arXiv Detail & Related papers (2025-08-24T11:03:56Z) - Direct Dual-Energy CT Material Decomposition using Model-based Denoising Diffusion Model [105.95160543743984]
We propose a deep learning procedure called Dual-Energy Decomposition Model-based Diffusion (DEcomp-MoD) for quantitative material decomposition. We show that DEcomp-MoD outperforms state-of-the-art unsupervised score-based models and supervised deep learning networks.
arXiv Detail & Related papers (2025-07-24T01:00:06Z) - Noise-Inspired Diffusion Model for Generalizable Low-Dose CT Reconstruction [37.71732274622662]
We propose a noise-inspired diffusion model for generalizable low-dose CT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. By cascading these two diffusion models for dual-domain reconstruction, our NEED requires only normal-dose data for training and can be effectively extended to various unseen dose levels during testing.
arXiv Detail & Related papers (2025-06-27T08:24:55Z) - Re-Visible Dual-Domain Self-Supervised Deep Unfolding Network for MRI Reconstruction [48.30341580103962]
We propose a novel re-visible dual-domain self-supervised deep unfolding network to address these issues. We design a deep unfolding network based on the Chambolle-Pock Proximal Point Algorithm (DUN-CP-PPA) to achieve end-to-end reconstruction. Experiments conducted on the fastMRI and IXI datasets demonstrate that our method significantly outperforms state-of-the-art approaches in terms of reconstruction performance.
arXiv Detail & Related papers (2025-01-07T12:29:32Z) - Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Latent Drifting enables diffusion models to be conditioned on medical images, suiting them to the complex task of counterfactual image generation. We evaluate our method on three public longitudinal benchmark datasets of brain MRI and chest X-rays for counterfactual image generation.
arXiv Detail & Related papers (2024-12-30T01:59:34Z) - Partitioned Hankel-based Diffusion Models for Few-shot Low-dose CT Reconstruction [10.158713017984345]
We propose a few-shot low-dose CT reconstruction method using Partitioned Hankel-based Diffusion (PHD) models.
In the iterative reconstruction stage, an iterative differential equation solver is employed along with data consistency constraints to update the acquired projection data.
The results approximate those of normal-dose counterparts, validating the PHD model as an effective and practical approach for reducing artifacts and noise while preserving image quality.
arXiv Detail & Related papers (2024-05-27T13:44:53Z) - DPER: Diffusion Prior Driven Neural Representation for Limited Angle and Sparse View CT Reconstruction [45.00528216648563]
Diffusion Prior Driven Neural Representation (DPER) is an unsupervised framework designed to address the exceptionally ill-posed CT reconstruction inverse problems.
DPER adopts the Half Quadratic Splitting (HQS) algorithm to decompose the inverse problem into data fidelity and distribution prior sub-problems.
We conduct comprehensive experiments to evaluate the performance of DPER on LACT and ultra-SVCT reconstruction with two public datasets.
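The Half Quadratic Splitting (HQS) scheme that DPER adopts is a standard decomposition into a data-fidelity subproblem and a prior subproblem. A minimal sketch under simplifying assumptions — a small linear forward operator, and soft-thresholding standing in for the learned diffusion prior's proximal step (names are illustrative, not from the paper):

```python
import numpy as np

def hqs_reconstruct(A, y, x0, mu=1.0, lam=0.05, iters=50, inner=10):
    """HQS sketch for min_x ||Ax - y||^2 + lam * R(x), split as
    min_{x,z} ||Ax - y||^2 + lam * R(z) + mu * ||x - z||^2.
    Soft-thresholding stands in for the prox of the learned prior."""
    x = x0.copy()
    z = x0.copy()
    # Step size from a Lipschitz bound of the x-subproblem gradient.
    L = 2.0 * (np.linalg.norm(A, 2) ** 2 + mu)
    for _ in range(iters):
        # x-step: gradient descent on the data-fidelity subproblem.
        for _ in range(inner):
            grad = 2.0 * A.T @ (A @ x - y) + 2.0 * mu * (x - z)
            x = x - grad / L
        # z-step: prox of lam * ||z||_1, i.e. soft-threshold at lam/(2*mu).
        z = np.sign(x) * np.maximum(np.abs(x) - lam / (2.0 * mu), 0.0)
    return x
```

In DPER the z-step would invoke the diffusion prior rather than an L1 prox; the alternation structure, however, is the same.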
arXiv Detail & Related papers (2024-04-27T12:55:13Z) - JoReS-Diff: Joint Retinex and Semantic Priors in Diffusion Model for Low-light Image Enhancement [69.6035373784027]
Low-light image enhancement (LLIE) has achieved promising performance by employing conditional diffusion models.
Previous methods may neglect the importance of a sufficiently formulated task-specific conditioning strategy.
We propose JoReS-Diff, a novel approach that incorporates Retinex- and semantic-based priors as the additional pre-processing condition.
arXiv Detail & Related papers (2023-12-20T08:05:57Z) - ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models [69.9178140563928]
Colonoscopy analysis is essential for assisting clinical diagnosis and treatment.
The scarcity of annotated data limits the effectiveness and generalization of existing methods.
We propose an Adaptive Refinement Semantic Diffusion Model (ArSDM) to generate colonoscopy images that benefit the downstream tasks.
arXiv Detail & Related papers (2023-09-03T07:55:46Z) - Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion. This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement. We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z) - One Sample Diffusion Model in Projection Domain for Low-Dose CT Imaging [10.797632196651731]
Low-dose computed tomography (CT) plays a significant role in reducing the radiation risk in clinical applications.
The rapid development and wide application of deep learning has brought new directions for low-dose CT imaging algorithms.
We propose a fully unsupervised one-sample diffusion model (OSDM) in the projection domain for low-dose CT reconstruction.
The results prove that OSDM is a practical and effective model for reducing artifacts and preserving image quality.
arXiv Detail & Related papers (2022-12-07T13:39:23Z) - Iterative Reconstruction for Low-Dose CT using Deep Gradient Priors of Generative Model [24.024765099719886]
Iterative reconstruction is one of the most promising ways to compensate for the increased noise caused by the reduction of photon flux.
In this work we integrate the data-consistency as a conditional term into the iterative generative model for low-dose CT.
The distance between the reconstructed image and the manifold is minimized along with data fidelity during reconstruction.
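The alternation described above — a data-fidelity step interleaved with a step toward the prior's manifold — can be sketched minimally. Here a simple smoothing blend stands in for the learned gradient prior of the generative model, and a small linear operator stands in for the CT forward projection (both are illustrative assumptions, not the paper's components):

```python
import numpy as np

def iterative_recon(A, y, x0, gamma=0.1, iters=200):
    """Sketch: alternate a data-consistency gradient step with a generic
    prior step; a local smoothing blend stands in for the learned
    gradient prior of the generative model."""
    x = x0.copy()
    # Stable step size for the data-fidelity gradient step.
    eta = 1.0 / (np.linalg.norm(A, 2) ** 2)
    for _ in range(iters):
        # Data-consistency step: pull the estimate toward measurements y.
        x = x - eta * A.T @ (A @ x - y)
        # Prior step: nudge toward a locally smoothed copy (placeholder
        # for minimising the distance to the learned image manifold).
        smooth = np.convolve(x, np.ones(3) / 3.0, mode="same")
        x = (1.0 - gamma) * x + gamma * smooth
    return x
```

The design point is that data fidelity and the prior act in every iteration, so the reconstruction stays consistent with the measurements while being drawn toward the prior's manifold.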
arXiv Detail & Related papers (2020-09-27T06:36:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.