DM4CT: Benchmarking Diffusion Models for Computed Tomography Reconstruction
- URL: http://arxiv.org/abs/2602.18589v1
- Date: Fri, 20 Feb 2026 19:54:47 GMT
- Title: DM4CT: Benchmarking Diffusion Models for Computed Tomography Reconstruction
- Authors: Jiayang Shi, Daniel M. Pelt, K. Joost Batenburg
- Abstract summary: DM4CT is a comprehensive benchmark for computed tomography reconstruction. We benchmark ten recent diffusion-based methods alongside seven strong baselines, including model-based, unsupervised, and supervised approaches. Our analysis provides detailed insights into the behavior, strengths, and limitations of diffusion models for CT reconstruction.
- Score: 0.8921166277011348
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models have recently emerged as powerful priors for solving inverse problems. While computed tomography (CT) is theoretically a linear inverse problem, it poses many practical challenges. These include correlated noise, artifact structures, reliance on system geometry, and misaligned value ranges, which make the direct application of diffusion models more difficult than in domains like natural image generation. To systematically evaluate how diffusion models perform in this context and compare them with established reconstruction methods, we introduce DM4CT, a comprehensive benchmark for CT reconstruction. DM4CT includes datasets from both medical and industrial domains with sparse-view and noisy configurations. To explore the challenges of deploying diffusion models in practice, we additionally acquire a high-resolution CT dataset at a high-energy synchrotron facility and evaluate all methods under real experimental conditions. We benchmark ten recent diffusion-based methods alongside seven strong baselines, including model-based, unsupervised, and supervised approaches. Our analysis provides detailed insights into the behavior, strengths, and limitations of diffusion models for CT reconstruction. The real-world dataset is publicly available at zenodo.org/records/15420527, and the codebase is open-sourced at github.com/DM4CT/DM4CT.
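The abstract frames CT as a linear inverse problem with sparse-view and noisy configurations. As a toy illustration of that setup (all variable names here are illustrative and not taken from the DM4CT codebase; a dense random matrix stands in for a real system-geometry-dependent forward operator), one can simulate sparse-view noisy measurements y = Ax + noise and reconstruct with regularized least squares, which is the simple quadratic prior that diffusion models aim to improve upon:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64                       # number of image pixels (flattened)
m_full = 128                 # measurements at full view sampling
A_full = rng.normal(size=(m_full, n)) / np.sqrt(n)  # stand-in forward operator

x_true = rng.normal(size=n)              # ground-truth "image"
keep = np.arange(0, m_full, 4)           # sparse-view: keep every 4th measurement
A = A_full[keep]                         # underdetermined system: 32 x 64
y = A @ x_true + 0.01 * rng.normal(size=A.shape[0])  # noisy "sinogram"

# Regularized (ridge) least-squares reconstruction; a learned diffusion
# prior would replace this hand-crafted quadratic regularizer.
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Because the sparse-view operator is underdetermined (32 rows for 64 unknowns), the simple quadratic prior cannot recover the component of the image outside the operator's row space; this null-space ambiguity is exactly where stronger generative priors matter.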
Related papers
- Tunable-Generalization Diffusion Powered by Self-Supervised Contextual Sub-Data for Low-Dose CT Reconstruction [5.107409624991683]
TUnable-geneRalizatioN Diffusion (TurnDiff) is powered by self-supervised contextual sub-data for low-dose CT reconstruction. TurnDiff consistently outperforms state-of-the-art methods in both reconstruction and generalization.
arXiv Detail & Related papers (2025-09-28T13:50:29Z) - Direct Dual-Energy CT Material Decomposition using Model-based Denoising Diffusion Model [105.95160543743984]
We propose a deep learning procedure called Dual-Energy Decomposition Model-based Diffusion (DEcomp-MoD) for quantitative material decomposition. We show that DEcomp-MoD outperforms state-of-the-art unsupervised score-based models and supervised deep learning networks.
arXiv Detail & Related papers (2025-07-24T01:00:06Z) - Noise-Inspired Diffusion Model for Generalizable Low-Dose CT Reconstruction [37.71732274622662]
We propose a noise-inspired diffusion model for generalizable low-dose CT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. By cascading these two diffusion models for dual-domain reconstruction, NEED requires only normal-dose data for training and can be effectively extended to various unseen dose levels during testing.
arXiv Detail & Related papers (2025-06-27T08:24:55Z) - Tomographic Foundation Model -- FORCE: Flow-Oriented Reconstruction Conditioning Engine [9.228750443979733]
Deep learning has significantly advanced CT image reconstruction. Deep learning methods can perform well with approximately paired data, but they inherently carry the risk of hallucination. We propose a novel CT reconstruction framework: the Flow-Oriented Reconstruction Conditioning Engine (FORCE).
arXiv Detail & Related papers (2025-06-02T18:25:12Z) - Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Latent Drifting enables diffusion models to be conditioned on medical images for the complex task of counterfactual image generation. We evaluate our method on three public longitudinal benchmark datasets of brain MRI and chest X-rays for counterfactual image generation.
arXiv Detail & Related papers (2024-12-30T01:59:34Z) - DPER: Diffusion Prior Driven Neural Representation for Limited Angle and Sparse View CT Reconstruction [45.00528216648563]
Diffusion Prior Driven Neural Representation (DPER) is an unsupervised framework designed to address the exceptionally ill-posed CT reconstruction inverse problems.
DPER adopts the Half Quadratic Splitting (HQS) algorithm to decompose the inverse problem into data fidelity and distribution prior sub-problems.
We conduct comprehensive experiments to evaluate the performance of DPER on LACT and ultra-SVCT reconstruction with two public datasets.
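The Half Quadratic Splitting used by DPER alternates between a data-fidelity sub-problem and a prior sub-problem. A minimal generic sketch of that alternation (illustrative only: a simple quadratic prior stands in here for DPER's diffusion prior, and the variable names are not from the DPER code) looks like this for a linear system:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 48, 32
A = rng.normal(size=(m, n)) / np.sqrt(n)   # stand-in forward operator
x_true = rng.normal(size=n)
y = A @ x_true + 0.01 * rng.normal(size=m)

lam, mu = 1e-3, 1.0        # prior weight and HQS splitting penalty
x = np.zeros(n)
z = np.zeros(n)
AtA, Aty = A.T @ A, A.T @ y

for _ in range(50):
    # Data-fidelity sub-problem: argmin_x ||Ax - y||^2 + mu ||x - z||^2
    x = np.linalg.solve(AtA + mu * np.eye(n), Aty + mu * z)
    # Prior sub-problem: proximal step of the (stand-in) quadratic prior;
    # a diffusion-prior method applies a learned denoising step here instead.
    z = mu * x / (lam + mu)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The key design point HQS exposes is that the prior only ever enters through the z-update, so any denoiser (including a diffusion model's score-based denoising step) can be plugged in without changing the data-fidelity solve.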
arXiv Detail & Related papers (2024-04-27T12:55:13Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - Unsupervised CT Metal Artifact Reduction by Plugging Diffusion Priors in Dual Domains [8.40564813751161]
Metallic implants often cause disruptive artifacts in computed tomography (CT) images, impeding accurate diagnosis.
Several supervised deep learning-based approaches have been proposed for metal artifact reduction (MAR).
We propose an unsupervised MAR method based on the diffusion model, a generative model with a high capacity to represent data distributions.
arXiv Detail & Related papers (2023-08-31T14:00:47Z) - Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion. This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement. We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z) - Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z) - Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.