Learning Image Deraining Transformer Network with Dynamic Dual
Self-Attention
- URL: http://arxiv.org/abs/2308.07781v1
- Date: Tue, 15 Aug 2023 13:59:47 GMT
- Title: Learning Image Deraining Transformer Network with Dynamic Dual
Self-Attention
- Authors: Zhentao Fan, Hongming Chen, Yufeng Li
- Abstract summary: This paper proposes an effective image deraining Transformer with dynamic dual self-attention (DDSA)
Specifically, we only select the most useful similarity values based on top-k approximate calculation to achieve sparse attention.
In addition, we also develop a novel spatial-enhanced feed-forward network (SEFN) to further obtain a more accurate representation for achieving high-quality derained results.
- Score: 46.11162082219387
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Transformer-based architecture has been introduced into single
image deraining task due to its advantage in modeling non-local information.
However, existing approaches tend to integrate global features based on a dense
self-attention strategy, since it tends to use all token similarities between
the queries and keys. In fact, this strategy ignores the most relevant
information and induces a blurry effect from irrelevant representations during
feature aggregation. To this end, this paper
proposes an effective image deraining Transformer with dynamic dual
self-attention (DDSA), which combines both dense and sparse attention
strategies to better facilitate clear image reconstruction. Specifically, we
only select the most useful similarity values based on top-k approximate
calculation to achieve sparse attention. In addition, we also develop a novel
spatial-enhanced feed-forward network (SEFN) to further obtain a more accurate
representation for achieving high-quality derained results. Extensive
experiments on benchmark datasets demonstrate the effectiveness of our proposed
method.
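The top-k sparse attention described in the abstract can be illustrated with a minimal numpy sketch: for each query, only the k largest similarity scores survive the softmax, so irrelevant tokens contribute nothing to the aggregation. This is an illustrative reconstruction of the general technique, not the authors' implementation; the function name, shapes, and the single-head, unbatched layout are assumptions.

```python
import numpy as np

def topk_sparse_attention(Q, K, V, k):
    """Sketch of top-k sparse attention: keep only the k largest
    similarity scores per query and mask the rest before the softmax."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                       # (n_q, n_k) similarities
    # Indices of the k largest scores in each query row
    topk_idx = np.argpartition(scores, -k, axis=-1)[:, -k:]
    mask = np.full_like(scores, -np.inf)                # -inf everywhere ...
    np.put_along_axis(mask, topk_idx, 0.0, axis=-1)     # ... except the top-k slots
    masked = scores + mask                              # non-top-k entries become -inf
    # Row-wise softmax over the surviving scores (exp(-inf) = 0)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

With k equal to the number of keys this reduces to ordinary dense attention, which makes the sparse variant easy to sanity-check against a dense baseline.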
Related papers
- WTDUN: Wavelet Tree-Structured Sampling and Deep Unfolding Network for Image Compressed Sensing [51.94493817128006]
We propose a novel wavelet-domain deep unfolding framework named WTDUN, which operates directly on the multi-scale wavelet subbands.
Our method utilizes the intrinsic sparsity and multi-scale structure of wavelet coefficients to achieve a tree-structured sampling and reconstruction.
arXiv Detail & Related papers (2024-11-25T12:31:03Z) - IPT-V2: Efficient Image Processing Transformer using Hierarchical Attentions [26.09373405194564]
We present an efficient image processing transformer architecture with hierarchical attentions, called IPT-V2.
We adopt a focal context self-attention (FCSA) and a global grid self-attention (GGSA) to obtain adequate token interactions in local and global receptive fields.
Our proposed IPT-V2 achieves state-of-the-art results on various image processing tasks, including denoising, deblurring, and deraining, and attains a much better trade-off between performance and computational complexity than previous methods.
arXiv Detail & Related papers (2024-03-31T10:01:20Z) - HAT: Hybrid Attention Transformer for Image Restoration [61.74223315807691]
Transformer-based methods have shown impressive performance in image restoration tasks, such as image super-resolution and denoising.
We propose a new Hybrid Attention Transformer (HAT) to activate more input pixels for better restoration.
Our HAT achieves state-of-the-art performance both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-09-11T05:17:55Z) - Remote Sensing Image Change Detection with Graph Interaction [1.8579693774597708]
We propose a bitemporal image graph Interaction network for remote sensing change detection, namely BGINet-CD.
Our model demonstrates superior performance compared to other state-of-the-art (SOTA) methods on the GZ CD dataset.
arXiv Detail & Related papers (2023-07-05T03:32:49Z) - RBSR: Efficient and Flexible Recurrent Network for Burst
Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z) - Learning A Sparse Transformer Network for Effective Image Deraining [42.01684644627124]
We propose an effective DeRaining network, Sparse Transformer (DRSformer).
We develop a learnable top-k selection operator to adaptively retain the most crucial attention scores from the keys for each query for better feature aggregation.
We equip our model with a mixture-of-experts feature compensator to present a cooperative refinement deraining scheme.
arXiv Detail & Related papers (2023-03-21T15:41:57Z) - Semantic Labeling of High Resolution Images Using EfficientUNets and
Transformers [5.177947445379688]
We propose a new segmentation model that combines convolutional neural networks with deep transformers.
Our results demonstrate that the proposed methodology improves segmentation accuracy compared to state-of-the-art techniques.
arXiv Detail & Related papers (2022-06-20T12:03:54Z) - Activating More Pixels in Image Super-Resolution Transformer [53.87533738125943]
Transformer-based methods have shown impressive performance in low-level vision tasks, such as image super-resolution.
We propose a novel Hybrid Attention Transformer (HAT) to activate more input pixels for better reconstruction.
Our overall method significantly outperforms the state-of-the-art methods by more than 1dB.
arXiv Detail & Related papers (2022-05-09T17:36:58Z) - Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently-used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.