ASCON: Anatomy-aware Supervised Contrastive Learning Framework for
Low-dose CT Denoising
- URL: http://arxiv.org/abs/2307.12225v1
- Date: Sun, 23 Jul 2023 04:36:05 GMT
- Title: ASCON: Anatomy-aware Supervised Contrastive Learning Framework for
Low-dose CT Denoising
- Authors: Zhihao Chen, Qi Gao, Yi Zhang, Hongming Shan
- Abstract summary: We propose a novel Anatomy-aware Supervised CONtrastive learning framework, termed ASCON, to explore the anatomical semantics for low-dose CT denoising.
Our ASCON provides anatomical interpretability for low-dose CT denoising for the first time.
- Score: 23.274928463320986
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While various deep learning methods have been proposed for low-dose computed
tomography (CT) denoising, most of them leverage the normal-dose CT images as
the ground-truth to supervise the denoising process. These methods typically
ignore the inherent correlation within a single CT image, especially the
anatomical semantics of human tissues, and lack interpretability in the
denoising process. In this paper, we propose a novel Anatomy-aware Supervised
CONtrastive learning framework, termed ASCON, which can explore the anatomical
semantics for low-dose CT denoising while providing anatomical
interpretability. The proposed ASCON consists of two novel designs: an
efficient self-attention-based U-Net (ESAU-Net) and a multi-scale anatomical
contrastive network (MAC-Net). First, to better capture global-local
interactions and adapt to the high-resolution input, an efficient ESAU-Net is
introduced by using a channel-wise self-attention mechanism. Second, MAC-Net
incorporates a patch-wise non-contrastive module to capture inherent anatomical
information and a pixel-wise contrastive module to maintain intrinsic
anatomical consistency. Extensive experimental results on two public low-dose
CT denoising datasets demonstrate superior performance of ASCON over
state-of-the-art models. Remarkably, our ASCON provides anatomical
interpretability for low-dose CT denoising for the first time. Source code is
available at https://github.com/hao1635/ASCON.
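The channel-wise self-attention mentioned above attends over channels rather than spatial positions, so the attention map is C x C and the cost stays linear in the number of pixels. A minimal numpy sketch of that idea, with illustrative shapes and random projections rather than ESAU-Net's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_self_attention(x, w_q, w_k, w_v):
    """Attend over channels instead of spatial positions.

    x:            (C, N) feature map, N = H * W spatial locations.
    w_q, w_k, w_v: (C, C) projection matrices (random here).
    The attention matrix is (C, C), so the cost is linear in N,
    which is what makes this practical for high-resolution CT.
    """
    q, k, v = w_q @ x, w_k @ x, w_v @ x               # each (C, N)
    attn = softmax((q @ k.T) / np.sqrt(x.shape[1]))   # (C, C)
    return attn @ v                                   # (C, N)

rng = np.random.default_rng(0)
c, n = 8, 64 * 64
x = rng.standard_normal((c, n))
w_q, w_k, w_v = (rng.standard_normal((c, c)) * 0.1 for _ in range(3))
out = channel_self_attention(x, w_q, w_k, w_v)
```

Contrast this with vanilla spatial self-attention, whose N x N attention matrix would be prohibitive at CT resolutions.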
Related papers
- WIA-LD2ND: Wavelet-based Image Alignment for Self-supervised Low-Dose CT Denoising [74.14134385961775]
We introduce WIA-LD2ND, a novel self-supervised CT image denoising method that uses only NDCT data.
WIA-LD2ND comprises two modules: Wavelet-based Image Alignment (WIA) and Frequency-Aware Multi-scale Loss (FAM).
arXiv Detail & Related papers (2024-03-18T11:20:11Z)
- Low-dose CT Denoising with Language-engaged Dual-space Alignment [21.172319554618497]
We propose a plug-and-play Language-Engaged Dual-space Alignment loss (LEDA) to optimize low-dose CT denoising models.
Our idea is to leverage large language models (LLMs) to align denoised CT and normal-dose CT images in both the continuous perceptual space and the discrete semantic space.
LEDA involves two steps: the first is to pretrain an LLM-guided CT autoencoder, which can encode a CT image into continuous high-level features and quantize them into a token space to produce semantic tokens.
arXiv Detail & Related papers (2024-03-10T08:21:50Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and use the enhanced results to address the low-contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- CTformer: Convolution-free Token2Token Dilated Vision Transformer for Low-dose CT Denoising [11.67382017798666]
Low-dose computed tomography (LDCT) denoising is an important problem in CT research.
Vision transformers have shown superior feature representation ability over convolutional neural networks (CNNs).
We propose a Convolution-free Token2Token Dilated Vision Transformer for low-dose CT denoising.
arXiv Detail & Related papers (2022-02-28T02:58:16Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual-domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper synthesizes intermediate medical slices to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- Total-Body Low-Dose CT Image Denoising using Prior Knowledge Transfer Technique with Contrastive Regularization Mechanism [4.998352078907441]
A low radiation dose may result in increased noise and artifacts, which greatly affect clinical diagnosis.
To obtain high-quality Total-body Low-dose CT (LDCT) images, previous deep-learning-based research work has introduced various network architectures.
In this paper, we propose a novel intra-task knowledge transfer method that leverages the distilled knowledge from NDCT images.
arXiv Detail & Related papers (2021-12-01T06:46:38Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- Symmetry-Enhanced Attention Network for Acute Ischemic Infarct Segmentation with Non-Contrast CT Images [50.55978219682419]
We propose a symmetry enhanced attention network (SEAN) for acute ischemic infarct segmentation.
Our proposed network automatically transforms an input CT image into the standard space where the brain tissue is bilaterally symmetric.
The proposed SEAN outperforms some symmetry-based state-of-the-art methods in terms of both dice coefficient and infarct localization.
arXiv Detail & Related papers (2021-10-11T07:13:26Z)
- DU-GAN: Generative Adversarial Networks with Dual-Domain U-Net Based Discriminators for Low-Dose CT Denoising [22.351540738281265]
Deep learning techniques have been introduced to improve the image quality of LDCT images through denoising.
This paper proposes a novel method, termed DU-GAN, which leverages U-Net based discriminators in the GANs framework to learn both global and local differences between the denoised and normal-dose images.
arXiv Detail & Related papers (2021-08-24T14:37:46Z)
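Several entries above, ASCON's pixel-wise contrastive module and the contrastive regularization mechanism in the total-body LDCT paper among them, rest on an InfoNCE-style contrastive loss: pull an anchor embedding toward a positive (e.g. the same pixel in the normal-dose image) and push it away from negatives. A minimal numpy sketch with toy embeddings, not any of these papers' actual implementations:

```python
import numpy as np

def l2norm(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss for one anchor embedding.

    anchor, positive: (d,) L2-normalized embeddings that should agree.
    negatives:        (k, d) embeddings that should disagree.
    Returns -log softmax probability assigned to the positive pair.
    """
    pos = anchor @ positive / tau          # scalar similarity
    neg = negatives @ anchor / tau         # (k,) similarities
    logits = np.concatenate(([pos], neg))
    m = logits.max()                       # stabilized log-sum-exp
    return -(pos - (np.log(np.exp(logits - m).sum()) + m))

rng = np.random.default_rng(1)
a = l2norm(rng.standard_normal(16))
p = l2norm(a + 0.05 * rng.standard_normal(16))   # near-duplicate: positive
negs = l2norm(rng.standard_normal((8, 16)))      # random vectors: negatives
loss_good = info_nce(a, p, negs)                 # small: anchor matches positive
loss_bad = info_nce(a, negs[0], np.vstack([p, negs[1:]]))  # large: mismatched
```

The temperature `tau` sharpens the softmax; smaller values penalize hard negatives more aggressively.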
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.