TED-net: Convolution-free T2T Vision Transformer-based Encoder-decoder
Dilation network for Low-dose CT Denoising
- URL: http://arxiv.org/abs/2106.04650v1
- Date: Tue, 8 Jun 2021 19:26:55 GMT
- Title: TED-net: Convolution-free T2T Vision Transformer-based Encoder-decoder
Dilation network for Low-dose CT Denoising
- Authors: Dayang Wang, Zhan Wu, Hengyong Yu
- Abstract summary: We propose a convolution-free T2T vision transformer-based Encoder-decoder Dilation network (TED-net) to enrich the family of LDCT denoising algorithms.
Our model is evaluated on the AAPM-Mayo clinic LDCT Grand Challenge dataset, and results show outperformance over the state-of-the-art denoising methods.
- Score: 5.2227817530931535
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low-dose computed tomography (LDCT) is a mainstream modality in
clinical applications. However, compared to normal-dose CT, LDCT images
contain stronger noise and more artifacts, which are obstacles to practical
use. In the last few years, convolution-based end-to-end deep learning
methods have been widely used for LDCT image denoising. Recently, transformers
have shown superior performance over convolutions, with richer feature interactions.
Yet their applications in LDCT denoising have not been fully explored. Here,
we propose a convolution-free T2T vision transformer-based Encoder-decoder
Dilation network (TED-net) to enrich the family of LDCT denoising algorithms.
The model is free of convolution blocks and consists of a symmetric
encoder-decoder built solely from transformer blocks. Our model is evaluated on the
AAPM-Mayo Clinic LDCT Grand Challenge dataset, and the results show that it
outperforms state-of-the-art denoising methods.
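The T2T ("Token-to-Token") idea the abstract relies on replaces convolutional feature extraction with a "soft split" that unfolds overlapping image patches into tokens, which a transformer then mixes via self-attention. The following is a minimal NumPy sketch of those two steps only; the kernel size, stride, dilation settings, and learned projection matrices of the actual TED-net are not reproduced here (this sketch uses identity Q/K/V projections purely for illustration).

```python
import numpy as np

def soft_split(img, k=3, stride=2):
    """T2T-style 'soft split': unfold overlapping k x k patches into tokens.

    img: (H, W) array. Returns a (num_tokens, k*k) token matrix.
    Illustrative only; TED-net's exact kernel/dilation choices may differ.
    """
    H, W = img.shape
    tokens = []
    for i in range(0, H - k + 1, stride):
        for j in range(0, W - k + 1, stride):
            tokens.append(img[i:i + k, j:j + k].ravel())
    return np.stack(tokens)

def self_attention(tokens):
    """Single-head scaled dot-product self-attention over the tokens."""
    d = tokens.shape[1]
    q = k_ = v = tokens                               # identity projections (sketch)
    scores = q @ k_.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
noisy = rng.normal(size=(8, 8))                       # stand-in noisy LDCT patch
tok = soft_split(noisy)                               # 9 overlapping 3x3 tokens
out = self_attention(tok)
print(tok.shape, out.shape)                           # (9, 9) (9, 9)
```

In the full network, such attention blocks are stacked symmetrically in an encoder-decoder, and an inverse "token-to-image" fold maps tokens back to the denoised image.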
Related papers
- WIA-LD2ND: Wavelet-based Image Alignment for Self-supervised Low-Dose CT Denoising [74.14134385961775]
We introduce WIA-LD2ND, a novel self-supervised CT image denoising method that uses only NDCT data.
WIA-LD2ND comprises two modules: Wavelet-based Image Alignment (WIA) and a Frequency-Aware Multi-scale Loss (FAM).
arXiv Detail & Related papers (2024-03-18T11:20:11Z) - Low-dose CT Denoising with Language-engaged Dual-space Alignment [21.172319554618497]
We propose a plug-and-play Language-Engaged Dual-space Alignment loss (LEDA) to optimize low-dose CT denoising models.
Our idea is to leverage large language models (LLMs) to align denoised CT and normal dose CT images in both the continuous perceptual space and discrete semantic space.
LEDA involves two steps: the first is to pretrain an LLM-guided CT autoencoder, which can encode a CT image into continuous high-level features and quantize them into a token space to produce semantic tokens.
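The quantization step described above, mapping continuous features to discrete semantic tokens, is typically implemented as a nearest-neighbor lookup in a learned codebook. Below is a hedged NumPy sketch of that lookup alone; the function and variable names are illustrative, and the LEDA paper's actual codebook, dimensionality, and distance metric may differ.

```python
import numpy as np

def quantize(features, codebook):
    """Map each continuous feature vector to the index of its nearest
    codebook entry (squared Euclidean distance), producing discrete
    'semantic tokens'. Illustrative sketch, not the paper's implementation."""
    # (N, 1, D) - (1, K, D) -> (N, K) squared distances via broadcasting
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])   # K=2 toy codebook
feats = np.array([[0.1, -0.1], [0.9, 1.2]])     # N=2 continuous features
print(quantize(feats, codebook))                # [0 1]
```

Each index can then be treated as a token in the discrete semantic space that the LLM-guided alignment operates on.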
arXiv Detail & Related papers (2024-03-10T08:21:50Z) - Gadolinium dose reduction for brain MRI using conditional deep learning [66.99830668082234]
Two main challenges for these approaches are the accurate prediction of contrast enhancement and the synthesis of realistic images.
We address both challenges by utilizing the contrast signal encoded in the subtraction images of pre-contrast and post-contrast image pairs.
We demonstrate the effectiveness of our approach on synthetic and real datasets using various scanners, field strengths, and contrast agents.
arXiv Detail & Related papers (2024-03-06T08:35:29Z) - Rotational Augmented Noise2Inverse for Low-dose Computed Tomography
Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z) - Deep learning network to correct axial and coronal eye motion in 3D OCT
retinal imaging [65.47834983591957]
We propose deep learning based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
The experimental result shows that the proposed method can effectively correct motion artifacts and achieve smaller error than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z) - Self Supervised Low Dose Computed Tomography Image Denoising Using
Invertible Network Exploiting Inter Slice Congruence [20.965610734723636]
This study proposes a novel method for self-supervised low-dose CT denoising to alleviate the requirement of paired LDCT and NDCT images.
We have trained an invertible neural network to minimize the pixel-based mean square distance between a noisy slice and the average of its two immediate adjacent noisy slices.
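The self-supervised objective stated above, a pixel-wise mean-square distance between the network output for a noisy slice and the average of its two adjacent noisy slices, can be sketched directly. In this sketch, `denoise` is a placeholder for the invertible network's forward pass (an assumption; the actual architecture and training details are in the paper).

```python
import numpy as np

def interslice_loss(prev_slice, cur_slice, next_slice, denoise):
    """Pixel-wise MSE between the (denoised) current slice and the average
    of its two immediate adjacent noisy slices, per the abstract.
    `denoise` is a stand-in for the invertible network."""
    target = 0.5 * (prev_slice + next_slice)
    return float(np.mean((denoise(cur_slice) - target) ** 2))

identity = lambda x: x   # trivial stand-in denoiser for demonstration
a = np.zeros((4, 4))     # slice i-1
b = np.ones((4, 4))      # slice i
c = np.full((4, 4), 2.0) # slice i+1
print(interslice_loss(a, b, c, identity))  # 0.0, since (a + c)/2 == b
```

The intuition is that anatomy changes slowly across adjacent slices while noise is independent, so the adjacent-slice average is a usable noisy target without any NDCT supervision.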
arXiv Detail & Related papers (2022-11-03T07:16:18Z) - Masked Autoencoders for Low dose CT denoising [9.575051352192697]
Masked autoencoders (MAE) have been proposed as an effective label-free self-pretraining method for transformers.
We redesign the classical encoder-decoder learning model to match the denoising task and apply it to LDCT denoising problem.
arXiv Detail & Related papers (2022-10-10T18:27:58Z) - Multi-Channel Convolutional Analysis Operator Learning for Dual-Energy
CT Reconstruction [108.06731611196291]
We develop a multi-channel convolutional analysis operator learning (MCAOL) method to exploit common spatial features within attenuation images at different energies.
We propose an optimization method which jointly reconstructs the attenuation images at low and high energies with a mixed norm regularization on the sparse features.
arXiv Detail & Related papers (2022-03-10T14:22:54Z) - CTformer: Convolution-free Token2Token Dilated Vision Transformer for
Low-dose CT Denoising [11.67382017798666]
Low-dose computed tomography (LDCT) denoising is an important problem in CT research.
Vision transformers have shown superior feature representation ability over convolutional neural networks (CNNs).
We propose a Convolution-free Token2Token Dilated Vision Transformer for low-dose CT denoising.
arXiv Detail & Related papers (2022-02-28T02:58:16Z) - CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for
Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.