Eformer: Edge Enhancement based Transformer for Medical Image Denoising
- URL: http://arxiv.org/abs/2109.08044v1
- Date: Thu, 16 Sep 2021 15:18:21 GMT
- Title: Eformer: Edge Enhancement based Transformer for Medical Image Denoising
- Authors: Achleshwar Luthra, Harsh Sulakhe, Tanish Mittal, Abhishek Iyer,
Santosh Yadav
- Abstract summary: We present Eformer - Edge enhancement based transformer, a novel architecture that builds an encoder-decoder network.
Non-overlapping window-based self-attention is used in the transformer block that reduces computational requirements.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we present Eformer - Edge enhancement based transformer, a
novel architecture that builds an encoder-decoder network using transformer
blocks for medical image denoising. Non-overlapping window-based self-attention
is used in the transformer block that reduces computational requirements. This
work further incorporates learnable Sobel-Feldman operators to enhance edges in
the image and proposes an effective way to concatenate them in the intermediate
layers of our architecture. The experimental analysis is conducted by comparing
deterministic learning and residual learning for the task of medical image
denoising. To demonstrate the effectiveness of our approach, our model is evaluated
on the AAPM-Mayo Clinic Low-Dose CT Grand Challenge Dataset and achieves
state-of-the-art performance, i.e., 43.487 PSNR, 0.0067 RMSE, and 0.9861
SSIM. We believe that our work will encourage more research in
transformer-based architectures for medical image denoising using residual
learning.
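The two concrete elements of the abstract, the Sobel-Feldman edge operator and the reported PSNR/RMSE figures, can be illustrated with a minimal NumPy sketch. The kernels below are the standard fixed Sobel-Feldman filters; Eformer makes these weights learnable, which this illustration does not attempt, and the [0, 1] intensity normalization assumed by `psnr_from_rmse` is our assumption, not stated in the abstract.

```python
import numpy as np

# Standard Sobel-Feldman kernels for horizontal/vertical gradients.
# In Eformer these filter weights are *learnable*; here they are fixed.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D correlation (no padding), for illustration only."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def edge_map(img):
    """Gradient-magnitude map of the kind Eformer feeds to intermediate layers."""
    gx = conv2d_valid(img, SOBEL_X)
    gy = conv2d_valid(img, SOBEL_Y)
    return np.hypot(gx, gy)

def psnr_from_rmse(rmse, max_val=1.0):
    """PSNR (dB) from RMSE, assuming intensities normalized to [0, max_val]."""
    return 20.0 * np.log10(max_val / rmse)

# A vertical step edge: the Sobel response peaks along the discontinuity.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
e = edge_map(img)
```

As a sanity check on the reported metrics, `psnr_from_rmse(0.0067)` gives roughly 43.48 dB, which agrees (up to rounding of the RMSE) with the reported 43.487 PSNR under the [0, 1] normalization assumption.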
Related papers
- Advancing Medical Image Segmentation: Morphology-Driven Learning with Diffusion Transformer [4.672688418357066]
We propose a novel Transformer Diffusion (DTS) model for robust segmentation in the presence of noise.
Our model, which analyzes the morphological representation of images, shows better results than the previous models in various medical imaging modalities.
arXiv Detail & Related papers (2024-08-01T07:35:54Z)
- A cross Transformer for image denoising [83.68175077524111]
We propose a cross Transformer denoising CNN (CTNet) with a serial block (SB), a parallel block (PB), and a residual block (RB).
CTNet is superior to some popular denoising methods in terms of real and synthetic image denoising.
arXiv Detail & Related papers (2023-10-16T13:53:19Z)
- HAT: Hybrid Attention Transformer for Image Restoration [61.74223315807691]
Transformer-based methods have shown impressive performance in image restoration tasks, such as image super-resolution and denoising.
We propose a new Hybrid Attention Transformer (HAT) to activate more input pixels for better restoration.
Our HAT achieves state-of-the-art performance both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-09-11T05:17:55Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Masked Autoencoders for Low dose CT denoising [9.575051352192697]
Masked autoencoders (MAE) have been proposed as an effective label-free self-pretraining method for transformers.
We redesign the classical encoder-decoder learning model to match the denoising task and apply it to LDCT denoising problem.
arXiv Detail & Related papers (2022-10-10T18:27:58Z)
- Multi-stage image denoising with the wavelet transform [125.2251438120701]
Deep convolutional neural networks (CNNs) are used for image denoising by automatically mining accurate structural information.
We propose a multi-stage image denoising CNN with the wavelet transform (MWDCNN) via three stages, i.e., a dynamic convolutional block (DCB), two cascaded wavelet transform and enhancement blocks (WEBs), and a residual block (RB).
arXiv Detail & Related papers (2022-09-26T03:28:23Z)
- Practical Blind Image Denoising via Swin-Conv-UNet and Data Synthesis [148.16279746287452]
We propose a swin-conv block to incorporate the local modeling ability of residual convolutional layer and non-local modeling ability of swin transformer block.
For the training data synthesis, we design a practical noise degradation model which takes into consideration different kinds of noise.
Experiments on AGWN removal and real image denoising demonstrate that the new network architecture design achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-03-24T18:11:31Z)
- TED-net: Convolution-free T2T Vision Transformer-based Encoder-decoder Dilation network for Low-dose CT Denoising [5.2227817530931535]
We propose a convolution-free T2T vision Transformer-based Encoder-decoder Dilation network (TED-net) to enrich the family of LDCT denoising algorithms.
Our model is evaluated on the AAPM-Mayo clinic LDCT Grand Challenge dataset, and results show outperformance over the state-of-the-art denoising methods.
arXiv Detail & Related papers (2021-06-08T19:26:55Z)
- Blind microscopy image denoising with a deep residual and multiscale encoder/decoder network [0.0]
A deep multiscale convolutional encoder-decoder neural network is proposed.
The proposed model reaches on average 38.38 of PSNR and 0.98 of SSIM on a test set of 57458 images.
arXiv Detail & Related papers (2021-05-01T14:54:57Z)
- Medical Transformer: Gated Axial-Attention for Medical Image Segmentation [73.98974074534497]
We study the feasibility of using Transformer-based network architectures for medical image segmentation tasks.
We propose a Gated Axial-Attention model which extends the existing architectures by introducing an additional control mechanism in the self-attention module.
To train the model effectively on medical images, we propose a Local-Global training strategy (LoGo) which further improves the performance.
arXiv Detail & Related papers (2021-02-21T18:35:14Z)
- EDCNN: Edge enhancement-based Densely Connected Network with Compound Loss for Low-Dose CT Denoising [27.86840312836051]
We propose the Edge enhancement based Densely connected Convolutional Neural Network (EDCNN).
We construct a model with dense connections to fuse the extracted edge information and realize end-to-end image denoising.
Compared with the existing low-dose CT image denoising algorithms, our proposed model has a better performance in preserving details and suppressing noise.
arXiv Detail & Related papers (2020-10-30T23:12:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.