CTformer: Convolution-free Token2Token Dilated Vision Transformer for
Low-dose CT Denoising
- URL: http://arxiv.org/abs/2202.13517v1
- Date: Mon, 28 Feb 2022 02:58:16 GMT
- Title: CTformer: Convolution-free Token2Token Dilated Vision Transformer for
Low-dose CT Denoising
- Authors: Dayang Wang, Fenglei Fan, Zhan Wu, Rui Liu, Fei Wang, Hengyong Yu
- Abstract summary: Low-dose computed tomography (LDCT) denoising is an important problem in CT research.
Vision transformers have shown superior feature representation ability over convolutional neural networks (CNNs).
We propose a Convolution-free Token2Token Dilated Vision Transformer for low-dose CT denoising.
- Score: 11.67382017798666
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low-dose computed tomography (LDCT) denoising is an important problem in CT
research. Compared to normal-dose CT (NDCT), LDCT images suffer from severe
noise and artifacts. Recently, vision transformers have shown feature
representation ability superior to that of convolutional neural networks
(CNNs) in many studies. However, in contrast to CNNs, vision transformers have
so far been little explored for LDCT denoising. To fill this gap, we propose a
Convolution-free Token2Token Dilated Vision Transformer (CTformer) for
low-dose CT denoising. The CTformer uses a more powerful token rearrangement
to encompass
local contextual information and thus avoids convolution. It also dilates and
shifts feature maps to capture longer-range interactions. We interpret the
CTformer by statically inspecting patterns of its internal attention maps and
dynamically tracing the hierarchical attention flow with an explanatory graph.
Furthermore, an overlapped inference mechanism is introduced to effectively
eliminate the boundary artifacts that are common in encoder-decoder-based
denoising models. Experimental results on the Mayo LDCT dataset suggest that
the CTformer outperforms state-of-the-art denoising methods with low
computational overhead.
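
One way to read the convolution-free token rearrangement is along the lines of the Token-to-Token (T2T) soft split: tokens are folded back into a 2D map and re-tokenized with overlapping windows, so each new token aggregates its spatial neighbors without any convolution. Below is a minimal PyTorch sketch of one such step; the tensor shapes, projection dimension, and default kernel/stride are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Token2Token(nn.Module):
    """One soft-split T2T step: fold tokens to a 2D map, re-tokenize with
    overlapping windows (nn.Unfold), then linearly project.
    Dims and window sizes are illustrative, not the paper's config."""
    def __init__(self, in_dim, out_dim, kernel=3, stride=2, dilation=1):
        super().__init__()
        self.unfold = nn.Unfold(kernel_size=kernel, stride=stride,
                                padding=kernel // 2, dilation=dilation)
        # Each new token concatenates a k x k neighborhood of old tokens.
        self.proj = nn.Linear(in_dim * kernel * kernel, out_dim)

    def forward(self, tokens, h, w):
        # tokens: (B, N, C) with N = h * w
        b, n, c = tokens.shape
        fmap = tokens.transpose(1, 2).reshape(b, c, h, w)  # back to 2D map
        patches = self.unfold(fmap)                        # (B, C*k*k, N')
        patches = patches.transpose(1, 2)                  # (B, N', C*k*k)
        return self.proj(patches)                          # (B, N', out_dim)

x = torch.randn(1, 16 * 16, 32)   # 256 tokens of dim 32 on a 16x16 grid
t2t = Token2Token(in_dim=32, out_dim=64)
y = t2t(x, h=16, w=16)
print(y.shape)                    # torch.Size([1, 64, 64]) for stride 2
```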
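The dilation is already visible above as the `dilation` argument of `nn.Unfold`, which spreads the sampled neighborhood to reach more distant tokens. The shifting of feature maps can be sketched as a cyclic shift between successive blocks, so that attention windows in consecutive layers see differently aligned neighborhoods; the shift size and direction below are a plausible reading of the mechanism, not the paper's exact schedule.

```python
import torch

def cyclic_shift(tokens, h, w, shift=1):
    """Cyclically shift a (B, N, C) token sequence as a 2D map so that
    the next block attends over differently aligned neighborhoods.
    shift=1 is an illustrative choice, not the paper's setting."""
    b, n, c = tokens.shape
    fmap = tokens.transpose(1, 2).reshape(b, c, h, w)
    fmap = torch.roll(fmap, shifts=(shift, shift), dims=(2, 3))
    return fmap.reshape(b, c, n).transpose(1, 2)
```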
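The overlapped inference mechanism can be realized as sliding-window denoising with overlapping patches whose per-pixel predictions are averaged, so seams from independent patch-wise inference cancel out. A generic sketch follows; the patch size and stride are illustrative assumptions, since the abstract does not state the exact overlap.

```python
import torch

@torch.no_grad()
def overlapped_inference(model, image, patch=64, stride=32):
    """Denoise a (B, 1, H, W) image with an overlapping sliding window,
    averaging predictions wherever windows overlap to suppress boundary
    artifacts. patch/stride are illustrative; assumes H, W >= patch."""
    _, _, h, w = image.shape

    def starts(size):
        # Window origins along one axis, always ending flush with the border.
        s = list(range(0, size - patch + 1, stride))
        if s[-1] != size - patch:
            s.append(size - patch)
        return s

    out = torch.zeros_like(image)
    weight = torch.zeros_like(image)
    for top in starts(h):
        for left in starts(w):
            tile = image[:, :, top:top + patch, left:left + patch]
            out[:, :, top:top + patch, left:left + patch] += model(tile)
            weight[:, :, top:top + patch, left:left + patch] += 1.0
    return out / weight   # every pixel is covered by at least one window
```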