D-PerceptCT: Deep Perceptual Enhancement for Low-Dose CT Images
- URL: http://arxiv.org/abs/2511.14518v1
- Date: Tue, 18 Nov 2025 14:14:59 GMT
- Title: D-PerceptCT: Deep Perceptual Enhancement for Low-Dose CT Images
- Authors: Taifour Yousra Nabila, Azeddine Beghdadi, Marie Luong, Zuheng Ming, Habib Zaidi, Faouzi Alaya Cheikh,
- Abstract summary: Low Dose Computed Tomography (LDCT) is widely used as an imaging solution to aid diagnosis and other clinical tasks. This comes at the price of a deterioration in image quality due to the low dose of radiation used to reduce the risk of secondary cancer development. We introduce D-PerceptCT, a novel architecture inspired by key principles of the Human Visual System (HVS) to enhance LDCT images.
- Score: 3.982360641359205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low Dose Computed Tomography (LDCT) is widely used as an imaging solution to aid diagnosis and other clinical tasks. However, this comes at the price of a deterioration in image quality due to the low dose of radiation used to reduce the risk of secondary cancer development. While some efficient methods have been proposed to enhance LDCT quality, many overestimate noise and perform excessive smoothing, leading to a loss of critical details. In this paper, we introduce D-PerceptCT, a novel architecture inspired by key principles of the Human Visual System (HVS) to enhance LDCT images. The objective is to guide the model to enhance or preserve perceptually relevant features, thereby providing radiologists with CT images where critical anatomical structures and fine pathological details are perceptually visible. D-PerceptCT consists of two main blocks: (1) a Visual Dual-path Extractor (ViDex), which integrates semantic priors from a pretrained DINOv2 model with local spatial features, allowing the network to incorporate semantic awareness during enhancement; and (2) a Global-Local State-Space block that captures long-range information and multiscale features to preserve the important structures and fine details for diagnosis. In addition, we propose a novel deep perceptual loss, designated the Deep Perceptual Relevancy Loss Function (DPRLF), which is inspired by human contrast sensitivity, to further emphasize perceptually important features. Extensive experiments on the Mayo2016 dataset demonstrate the effectiveness of the D-PerceptCT method for LDCT enhancement, showing better preservation of structural and textural information within LDCT images compared to SOTA methods.
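The DPRLF is described above only at a high level. As a rough, hypothetical illustration of a contrast-sensitivity-weighted loss (the frequency-domain formulation and band-pass weighting below are assumptions for the sketch, not the paper's actual DPRLF, which operates on deep features), one can weight reconstruction error by a CSF-like curve that emphasizes the mid spatial frequencies the HVS is most sensitive to:

```python
import numpy as np

def csf_weight(shape, peak=4.0):
    """Hypothetical contrast-sensitivity weighting: a band-pass curve over
    radial spatial frequency, peaking at `peak` cycles per image.
    Zero at DC, so uniform offsets are not penalized."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2) * max(shape)   # cycles per image
    return (f / peak) * np.exp(1.0 - f / peak)    # bump, maximum 1 at f=peak

def perceptual_freq_loss(pred, target):
    """Mean squared error in the frequency domain, re-weighted so that
    perceptually salient mid frequencies dominate the penalty."""
    diff = np.fft.fft2(pred - target)
    w = csf_weight(pred.shape)
    return float(np.mean(w * np.abs(diff) ** 2))
```

Because the weight vanishes at DC and decays at very high frequencies, such a loss penalizes errors in perceptually visible structure more than global intensity shifts or fine-grained noise, which is the general intuition the abstract attributes to contrast-sensitivity-inspired losses.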
Related papers
- Structure-constrained Language-informed Diffusion Model for Unpaired Low-dose Computed Tomography Angiography Reconstruction [72.80209358480424]
Overdose of iodinated contrast media (ICM) can cause kidney damage and life-threatening allergic reactions. Deep learning methods can generate CT images of normal-dose ICM from low-dose ICM, reducing the required dose. We propose a Structure-constrained Language-informed Diffusion Model (SLDM) that integrates structural synergy and spatial intelligence.
arXiv Detail & Related papers (2026-01-28T06:54:06Z) - FoundDiff: Foundational Diffusion Model for Generalizable Low-Dose CT Denoising [55.04342933312839]
We propose FoundDiff, a foundational diffusion model for unified and generalizable low-dose computed tomography (CT) denoising. FoundDiff employs a two-stage strategy: (i) dose-anatomy perception and (ii) adaptive denoising. First, we develop a dose- and anatomy-aware contrastive language image pre-training model (DA-CLIP) to achieve robust dose and anatomy perception. Second, we design a dose- and anatomy-aware diffusion model (DA-Diff) to perform adaptive and generalizable denoising.
arXiv Detail & Related papers (2025-08-24T11:03:56Z) - Anatomy-Aware Low-Dose CT Denoising via Pretrained Vision Models and Semantic-Guided Contrastive Learning [12.975922919920393]
We propose ALDEN, an anatomy-aware LDCT denoising method that integrates semantic features of pretrained vision models with adversarial and contrastive learning. Specifically, we introduce an anatomy-aware discriminator that dynamically fuses hierarchical semantic features from reference normal-dose CT (NDCT) via cross-attention mechanisms. In addition, we propose a semantic-guided contrastive learning module that enforces anatomical consistency by contrasting PVM-derived features from LDCT, denoised CT, and NDCT, preserving tissue-specific patterns through positive pairs and suppressing artifacts via dual negative pairs.
arXiv Detail & Related papers (2025-08-11T09:17:12Z) - Deep Few-view High-resolution Photon-counting CT at Halved Dose for Extremity Imaging [9.900942764883789]
We propose a deep learning-based approach for PCCT image reconstruction at halved dose and doubled speed, validated in a New Zealand clinical trial. Specifically, we design a patch-based volumetric refinement network to alleviate GPU memory constraints, train the network with synthetic data, and use model-based iterative refinement to bridge the gap between synthetic and clinical data.
arXiv Detail & Related papers (2024-03-19T00:07:48Z) - WIA-LD2ND: Wavelet-based Image Alignment for Self-supervised Low-Dose CT Denoising [74.14134385961775]
We introduce a novel self-supervised CT image denoising method called WIA-LD2ND, only using NDCT data.
WIA-LD2ND comprises two modules: Wavelet-based Image Alignment (WIA) and Frequency-Aware Multi-scale Loss (FAM).
arXiv Detail & Related papers (2024-03-18T11:20:11Z) - Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT, in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z) - Feature-oriented Deep Learning Framework for Pulmonary Cone-beam CT (CBCT) Enhancement with Multi-task Customized Perceptual Loss [9.59233136691378]
Cone-beam computed tomography (CBCT) is routinely collected during image-guided radiation therapy.
Recent deep learning-based CBCT enhancement methods have shown promising results in suppressing artifacts.
We propose a novel feature-oriented deep learning framework that translates low-quality CBCT images into high-quality CT-like imaging.
arXiv Detail & Related papers (2023-11-01T10:09:01Z) - SdCT-GAN: Reconstructing CT from Biplanar X-Rays with Self-driven Generative Adversarial Networks [6.624839896733912]
This paper presents a new self-driven generative adversarial network model (SdCT-GAN) for reconstruction of 3D CT images.
The model is encouraged to pay more attention to image details through a novel auto-encoder structure introduced in the discriminator.
The LPIPS evaluation metric is adopted, which quantitatively evaluates the fine contours and textures of reconstructed images better than existing metrics.
arXiv Detail & Related papers (2023-09-10T08:16:02Z) - Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental result shows that the proposed method can effectively correct motion artifacts and achieve smaller error than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z) - InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z) - Total-Body Low-Dose CT Image Denoising using Prior Knowledge Transfer Technique with Contrastive Regularization Mechanism [4.998352078907441]
Low radiation dose may result in increased noise and artifacts, which greatly affect clinical diagnosis.
To obtain high-quality Total-body Low-dose CT (LDCT) images, previous deep-learning-based research work has introduced various network architectures.
In this paper, we propose a novel intra-task knowledge transfer method that leverages the distilled knowledge from NDCT images.
arXiv Detail & Related papers (2021-12-01T06:46:38Z) - Cascaded Convolutional Neural Networks with Perceptual Loss for Low Dose
CT Denoising [0.0]
Low Dose CT Denoising research aims to reduce the risks of radiation exposure to patients.
Recent approaches that use mean-squared-error (MSE) tend to over-smooth the image, resulting in loss of fine structural details in low-contrast regions of the image.
We show that our method outperforms related works and more effectively reconstructs fine structural details in low-contrast regions of the image.
arXiv Detail & Related papers (2020-06-26T00:35:26Z)
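Several entries above (WIA-LD2ND in particular) rest on the standard premise that LDCT noise concentrates in high-frequency wavelet sub-bands while anatomy dominates the low-frequency approximation. A minimal one-level Haar split makes that decomposition concrete; this is an illustrative averaging variant, not any paper's actual wavelet module:

```python
import numpy as np

def haar_split(img):
    """One-level 2D Haar decomposition (averaging variant).
    Assumes both image dimensions are even. Returns the low-frequency
    approximation and the three high-frequency detail sub-bands."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4   # approximation: mostly anatomy
    lh = (a + b - c - d) / 4   # horizontal details
    hl = (a - b + c - d) / 4   # vertical details
    hh = (a - b - c + d) / 4   # diagonal details: mostly noise
    return ll, lh, hl, hh
```

Aligning an LDCT image to its NDCT reference only in the `ll` band, as WIA-LD2ND's premise suggests, avoids forcing a denoiser to reproduce the noise that dominates the detail bands.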
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.