SUMD: Super U-shaped Matrix Decomposition Convolutional neural network
for Image denoising
- URL: http://arxiv.org/abs/2204.04861v1
- Date: Mon, 11 Apr 2022 04:38:34 GMT
- Title: SUMD: Super U-shaped Matrix Decomposition Convolutional neural network
for Image denoising
- Authors: QiFan Li
- Abstract summary: We introduce a matrix decomposition module (MD) into the network to establish the global context feature.
Inspired by the multi-stage progressive restoration design of U-shaped architectures, we further integrate the MD module into multiple branches.
Our model (SUMD) produces visual quality and accuracy comparable to Transformer-based methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a novel and efficient CNN-based framework that
leverages both local and global context information for image denoising. Because of
the inherent limitations of convolution, CNN-based methods are generally unable to
construct an effective, structured global feature representation, i.e., the
long-distance dependencies captured by Transformer-based methods. To tackle this
problem, we introduce a matrix decomposition module (MD) into the network to
establish global context features, with performance comparable to Transformer-based
methods. Inspired by the multi-stage progressive restoration design of U-shaped
architectures, we further integrate the MD module into multiple branches to acquire
a global feature representation relative to the patch range at the current stage.
The stage input then gradually expands to the full image scope, continuously
refining the final features. Experimental results on various image denoising
datasets (SIDD, DND, and synthetic Gaussian noise datasets) show that our model
(SUMD) produces visual quality and accuracy comparable to those of Transformer-based
methods.
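To make the role of the MD module more concrete, the following is a minimal, hypothetical PyTorch sketch of a matrix-decomposition global-context block. The paper does not provide this code; the class name MatrixDecompositionBlock, the rank and iteration counts, and the NMF-style multiplicative updates are illustrative assumptions standing in for whatever factorization SUMD actually uses. The sketch only demonstrates the general idea of flattening the feature map into a matrix X, factorizing it into low-rank factors D and C, and adding the reconstruction D @ C back as a global-context residual.

```python
# Hypothetical sketch of a matrix-decomposition (MD) global-context block.
# Not the authors' implementation; rank, iteration count, and the NMF-style
# updates are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MatrixDecompositionBlock(nn.Module):
    def __init__(self, channels: int, rank: int = 64, iters: int = 6):
        super().__init__()
        self.rank = rank          # number of low-rank bases (assumed value)
        self.iters = iters        # multiplicative-update steps (assumed value)
        self.proj_in = nn.Conv2d(channels, channels, 1)
        self.proj_out = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        feat = F.relu(self.proj_in(x))
        # Flatten spatial positions: X has shape (b, c, n) with n = h * w.
        X = feat.view(b, c, h * w) + 1e-6   # keep entries non-negative for NMF
        # Randomly initialize dictionary D (b, c, r) and codes C (b, r, n).
        D = torch.rand(b, c, self.rank, device=x.device)
        C = torch.rand(b, self.rank, h * w, device=x.device)
        # Multiplicative NMF updates that approximately minimize ||X - D C||_F.
        for _ in range(self.iters):
            C = C * (D.transpose(1, 2) @ X) / (D.transpose(1, 2) @ D @ C + 1e-6)
            D = D * (X @ C.transpose(1, 2)) / (D @ C @ C.transpose(1, 2) + 1e-6)
        # The low-rank reconstruction carries the global context.
        global_ctx = (D @ C).view(b, c, h, w)
        return x + self.proj_out(global_ctx)   # residual connection


if __name__ == "__main__":
    block = MatrixDecompositionBlock(channels=32)
    out = block(torch.randn(2, 32, 64, 64))
    print(out.shape)  # torch.Size([2, 32, 64, 64])
```

In a multi-stage U-shaped design of the kind the abstract describes, a block like this would sit in each branch, so every stage aggregates context over its own patch range before the stage input grows to the full image.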
Related papers
- Mesh Denoising Transformer [104.5404564075393]
Mesh denoising is aimed at removing noise from input meshes while preserving their feature structures.
SurfaceFormer is a pioneering Transformer-based mesh denoising framework.
A new representation known as the Local Surface Descriptor captures local geometric intricacies.
Denoising Transformer module receives the multimodal information and achieves efficient global feature aggregation.
arXiv Detail & Related papers (2024-05-10T15:27:43Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Local-Global Transformer Enhanced Unfolding Network for Pan-sharpening [13.593522290577512]
Pan-sharpening aims to increase the spatial resolution of the low-resolution multispectral (LrMS) image with the guidance of the corresponding panchromatic (PAN) image.
Although deep learning (DL)-based pan-sharpening methods have achieved promising performance, most of them have a two-fold deficiency.
arXiv Detail & Related papers (2023-04-28T03:34:36Z)
- Magic ELF: Image Deraining Meets Association Learning and Transformer [63.761812092934576]
This paper aims to unify CNN and Transformer to take advantage of their learning merits for image deraining.
A novel multi-input attention module (MAM) is proposed to associate rain removal and background recovery.
Our proposed method (dubbed ELF) outperforms the state-of-the-art approach (MPRNet) by 0.25 dB on average.
arXiv Detail & Related papers (2022-07-21T12:50:54Z)
- Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto Generative Adversarial Nets (GANs).
arXiv Detail & Related papers (2022-06-30T18:31:51Z)
- Dense residual Transformer for image denoising [7.232516946005627]
Image denoising is an important low-level computer vision task, which aims to reconstruct a noise-free and high-quality image from a noisy image.
We propose an image denoising network structure based on Transformer, which is named DenSformer.
arXiv Detail & Related papers (2022-05-14T01:59:38Z)
- A training-free recursive multiresolution framework for diffeomorphic deformable image registration
We propose a novel diffeomorphic training-free approach for deformable image registration.
The proposed architecture is simple in design. The moving image is warped successively at each resolution and finally aligned to the fixed image.
The entire system is end-to-end and optimized for each pair of images from scratch.
arXiv Detail & Related papers (2022-02-01T15:17:17Z)
- CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the advantages of leveraging detailed spatial information from CNN and the global context provided by transformer for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
- Rethinking Global Context in Crowd Counting [70.54184500538338]
A pure transformer is used to extract features with global information from overlapping image patches.
Inspired by classification, we add a context token to the input sequence, to facilitate information exchange with tokens corresponding to image patches.
arXiv Detail & Related papers (2021-05-23T12:44:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.