Physics-Guided Null-Space Diffusion with Sparse Masking for Corrective Sparse-View CT Reconstruction
- URL: http://arxiv.org/abs/2509.05992v2
- Date: Sun, 28 Sep 2025 10:12:00 GMT
- Title: Physics-Guided Null-Space Diffusion with Sparse Masking for Corrective Sparse-View CT Reconstruction
- Authors: Zekun Zhou, Yanru Gong, Liu Shi, Qiegen Liu
- Abstract summary: Diffusion models have demonstrated remarkable generative capabilities in image processing tasks. We propose a Sparse condition Temporal Reweighted Integrated Distribution Estimation guided diffusion model (STRIDE) for sparse-view CT reconstruction. Experimental results on both public and real datasets demonstrate that the proposed method achieves improvements of up to 2.58 dB in PSNR and 2.37% in SSIM, and a reduction of 0.236 in MSE.
- Score: 5.479463752172751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models have demonstrated remarkable generative capabilities in image processing tasks. We propose a Sparse condition Temporal Reweighted Integrated Distribution Estimation guided diffusion model (STRIDE) for sparse-view CT reconstruction. Specifically, we design a joint training mechanism guided by sparse conditional probabilities to facilitate the model's effective learning of missing-projection-view completion and global information modeling. Based on systematic theoretical analysis, we propose a temporally varying sparse-condition reweighting guidance strategy that dynamically adjusts weights during the progressive denoising process from pure noise to the real image, enabling the model to progressively perceive sparse-view information. Linear regression is employed to correct distributional shifts between known and generated data, mitigating inconsistencies arising during the guidance process. Furthermore, we construct a dual-network parallel architecture to perform global correction and optimization across multiple sub-frequency components, thereby effectively improving the model's capability in both detail restoration and structural preservation, and ultimately achieving high-quality image reconstruction. Experimental results on both public and real datasets demonstrate that the proposed method achieves improvements of up to 2.58 dB in PSNR and 2.37\% in SSIM, and a reduction of 0.236 in MSE, compared to the best-performing baseline methods. The reconstructed images exhibit excellent generalization and robustness in terms of structural consistency, detail restoration, and artifact suppression.
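The abstract names two mechanisms without giving their form: a time-varying weight that blends the sparse-view condition into each denoising step, and a linear-regression correction of the distribution shift between known and generated data. The toy sketch below illustrates both ideas on a 1-D signal; it is not the authors' implementation. The linear schedule `sparse_weight`, the stand-in "denoiser", and all variable names are assumptions for illustration only.

```python
import numpy as np

def sparse_weight(t, T):
    """Hypothetical linear schedule: the sparse-view condition is weighted
    lightly at the start of sampling (t = T, pure noise) and heavily
    toward the end (t -> 0, near the image)."""
    return 1.0 - t / T

def correct_shift(x, known_vals, mask):
    """Linear-regression shift correction: fit known ~ a * generated + b
    on the measured entries, then apply the affine map everywhere."""
    a, b = np.polyfit(x[mask], known_vals, 1)
    return a * x + b

def guided_step(x_uncond, x_cond, t, T):
    """Blend the unconditional estimate with the sparse-condition
    estimate using the time-varying weight."""
    w = sparse_weight(t, T)
    return (1.0 - w) * x_uncond + w * x_cond

# Toy demonstration on a 1-D "image"; a real system would use a trained
# score network and projection-domain measurements instead.
rng = np.random.default_rng(0)
T, n = 50, 200
truth = rng.normal(size=n)             # stands in for the unknown image
mask = np.zeros(n, dtype=bool)
mask[::4] = True                       # sparse "measured" entries

x = rng.normal(size=n)                 # start from pure noise
x0 = x.copy()
for t in range(T, 0, -1):
    x_uncond = 0.9 * x + 0.1 * truth   # stand-in for a learned denoiser
    x_cond = np.where(mask, truth, x)  # enforce the known entries
    x = guided_step(x_uncond, x_cond, t, T)
    x = correct_shift(x, truth[mask], mask)
```

Even with this crude denoiser, the regression step pulls the generated values back onto the scale of the measured ones, so the reconstruction error falls well below that of the initial noise.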
Related papers
- Implicit Neural Representation-Based Continuous Single Image Super Resolution: An Empirical Study [50.15623093332659]
Implicit neural representation (INR) has become the standard approach for arbitrary-scale image super-resolution (ASSR). We compare existing techniques across diverse settings and present aggregated performance results on multiple image quality metrics. We examine a new loss function that penalizes intensity variations while preserving edges, textures, and finer details during training.
arXiv Detail & Related papers (2026-01-25T07:09:20Z) - GloTok: Global Perspective Tokenizer for Image Reconstruction and Generation [51.95701097588426]
We introduce a Global Perspective Tokenizer (GloTok) to model a more uniform semantic distribution of tokenized features. A residual learning module is proposed to recover fine-grained details and minimize the reconstruction error caused by quantization. Experiments on the standard ImageNet-1k benchmark clearly show that our proposed method achieves state-of-the-art reconstruction performance and generation quality.
arXiv Detail & Related papers (2025-11-18T06:40:26Z) - Interlaced dynamic XCT reconstruction with spatio-temporal implicit neural representations [0.0]
We investigate the use of Implicit Neural Representations (INRs) for dynamic X-ray computed tomography (XCT) reconstruction under interlaced acquisition schemes. The proposed approach combines ADMM-based optimization with INCODE, a conditioning framework incorporating prior knowledge, to enable efficient convergence. Across all settings, our model achieves strong, robust performance and outperforms Time-Interlaced Model-Based Iterative Reconstruction (TIMBIR), a state-of-the-art model-based iterative method.
arXiv Detail & Related papers (2025-10-09T01:33:58Z) - Noise-Inspired Diffusion Model for Generalizable Low-Dose CT Reconstruction [37.71732274622662]
We propose a noise-inspired diffusion model for generalizable low-dose CT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. By cascading two diffusion models for dual-domain reconstruction, NEED requires only normal-dose data for training and can be effectively extended to various unseen dose levels during testing.
arXiv Detail & Related papers (2025-06-27T08:24:55Z) - FoundIR: Unleashing Million-scale Training Data to Advance Foundation Models for Image Restoration [66.61201445650323]
Existing methods suffer from a generalization bottleneck in real-world scenarios. We contribute a million-scale dataset with two notable advantages over existing training data. We propose a robust model, FoundIR, to better address a broader range of restoration tasks in real-world scenarios.
arXiv Detail & Related papers (2024-12-02T12:08:40Z) - Latent Diffusion Prior Enhanced Deep Unfolding for Snapshot Spectral Compressive Imaging [17.511583657111792]
Snapshot spectral imaging reconstruction aims to reconstruct three-dimensional spatial-spectral images from a single-shot two-dimensional compressed measurement.
We introduce a generative model, namely the latent diffusion model (LDM), to generate degradation-free prior to deep unfolding method.
arXiv Detail & Related papers (2023-11-24T04:55:20Z) - DiffSCI: Zero-Shot Snapshot Compressive Imaging via Iterative Spectral
Diffusion Model [18.25548360119976]
This paper endeavors to advance the precision of snapshot compressive imaging (SCI) reconstruction for multispectral image (MSI)
We propose a novel structured zero-shot diffusion model, dubbed DiffSCI.
We present extensive testing to show that DiffSCI exhibits discernible performance enhancements over prevailing self-supervised and zero-shot approaches.
arXiv Detail & Related papers (2023-11-19T20:27:14Z) - Data-iterative Optimization Score Model for Stable Ultra-Sparse-View CT
Reconstruction [2.2336243882030025]
We propose a Data-iterative Optimization Score Model (DOSM) for ultra-sparse-view CT reconstruction. DOSM integrates data consistency directly into its update step, effectively balancing measurement data and generative model constraints.
We leverage conventional techniques to optimize DOSM updates.
arXiv Detail & Related papers (2023-08-28T09:23:18Z) - Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion. This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement. We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z) - Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z) - DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation to cover real-world cases in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z) - ADIR: Adaptive Diffusion for Image Reconstruction [42.90778718695398]
Denoising diffusion models have recently achieved remarkable success in image generation, capturing rich information about natural image statistics. We introduce a conditional sampling framework that leverages the powerful priors learned by diffusion models while enforcing consistency with the available measurements. We employ LoRA-based adaptation using images that are semantically and visually similar to the degraded input, efficiently retrieved from a large and diverse dataset.
arXiv Detail & Related papers (2022-12-06T18:39:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.