Dynamic Scene Deblurring Based on Continuous Cross-Layer Attention
Transmission
- URL: http://arxiv.org/abs/2206.11476v1
- Date: Thu, 23 Jun 2022 04:55:13 GMT
- Title: Dynamic Scene Deblurring Based on Continuous Cross-Layer Attention
Transmission
- Authors: Xia Hua, Junxiong Fei, Mingxin Li, ZeZheng Li, Yu Shi, JiangGuo Liu
and Hanyu Hong
- Abstract summary: We introduce a new continuous cross-layer attention transmission (CCLAT) mechanism that can exploit hierarchical attention information from all the convolutional layers.
Taking RDAFB as the building block, we design an effective architecture for dynamic scene deblurring named RDAFNet.
Experiments on benchmark datasets show that the proposed model outperforms the state-of-the-art deblurring approaches.
- Score: 6.3482616879743885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The deep convolutional neural networks (CNNs) using attention mechanism have
achieved great success for dynamic scene deblurring. In most of these networks,
only the features refined by the attention maps can be passed to the next layer
and the attention maps of different layers are separated from each other, which
does not make full use of the attention information from different layers in
the CNN. To address this problem, we introduce a new continuous cross-layer
attention transmission (CCLAT) mechanism that can exploit hierarchical
attention information from all the convolutional layers. Based on the CCLAT
mechanism, we use a very simple attention module to construct a novel residual
dense attention fusion block (RDAFB). In RDAFB, the attention maps inferred
from the outputs of the preceding RDAFB and each layer are directly connected
to the subsequent ones, leading to the CCLAT mechanism. Taking RDAFB as the
building block, we design an effective architecture for dynamic scene
deblurring named RDAFNet. Experiments on benchmark datasets show that the
proposed model outperforms state-of-the-art deblurring approaches and
demonstrate the effectiveness of the CCLAT mechanism. The source code is available
on: https://github.com/xjmz6/RDAFNet.
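The core idea of CCLAT, as described in the abstract, is that attention maps inferred in earlier layers are passed forward and fused with those of later layers, rather than each layer computing and consuming its attention in isolation. The paper's actual RDAFB design is in the linked repository; the following is only a minimal NumPy sketch of that transmission pattern, with the attention function, fusion rule (averaging), and all names (`simple_attention`, `cclat_forward`) being illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simple_attention(feat):
    """Toy spatial attention: sigmoid over the channel-wise mean -> (1, H, W)."""
    return sigmoid(feat.mean(axis=0, keepdims=True))

def cclat_forward(x, num_layers=3, seed=0):
    """Sketch of continuous cross-layer attention transmission.

    Attention maps from all preceding layers are kept, concatenated with
    the current layer's map, and fused (here by simple averaging) before
    refining the current features. x has shape (C, H, W).
    """
    rng = np.random.default_rng(seed)
    c, _, _ = x.shape
    attn_history = []  # attention maps carried across layers
    feat = x
    for _ in range(num_layers):
        # stand-in for a conv layer: random channel-mixing projection + ReLU
        weight = rng.standard_normal((c, c)) / np.sqrt(c)
        feat = np.maximum(np.tensordot(weight, feat, axes=1), 0.0)
        attn_history.append(simple_attention(feat))
        # dense transmission: fuse the current map with all earlier ones
        fused = np.concatenate(attn_history, axis=0).mean(axis=0)  # (H, W)
        feat = feat * fused  # refine features with the fused attention
    return feat, attn_history

feat, maps = cclat_forward(np.ones((4, 8, 8)))
```

The contrast with a conventional attention block is the `attn_history` list: dropping it (refining with only the current map) recovers the per-layer, isolated attention the paper argues discards hierarchical attention information.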
Related papers
- Strengthening Layer Interaction via Dynamic Layer Attention [12.341997220052486]
Existing layer attention methods achieve layer interaction on fixed feature maps in a static manner.
To restore the dynamic context representation capability of the attention mechanism, we propose a Dynamic Layer Attention architecture.
Experimental results demonstrate the effectiveness of the proposed DLA architecture, outperforming other state-of-the-art methods in image recognition and object detection tasks.
arXiv Detail & Related papers (2024-06-19T09:35:14Z) - TAME: Attention Mechanism Based Feature Fusion for Generating
Explanation Maps of Convolutional Neural Networks [8.395400675921515]
TAME (Trainable Attention Mechanism for Explanations) is a method for generating explanation maps with a multi-branch hierarchical attention mechanism.
TAME can easily be applied to any convolutional neural network (CNN) by streamlining the optimization of the attention mechanism's training method.
arXiv Detail & Related papers (2023-01-18T10:05:28Z) - A Generic Shared Attention Mechanism for Various Backbone Neural Networks [53.36677373145012]
Self-attention modules (SAMs) produce strongly correlated attention maps across different layers.
Dense-and-Implicit Attention (DIA) shares SAMs across layers and employs a long short-term memory module.
Our simple yet effective DIA can consistently enhance various network backbones.
arXiv Detail & Related papers (2022-10-27T13:24:08Z) - Self-Supervised Implicit Attention: Guided Attention by The Model Itself [1.3406858660972554]
We propose Self-Supervised Implicit Attention (SSIA), a new approach that adaptively guides deep neural network models to gain attention by exploiting the properties of the models themselves.
SSIA is a novel attention mechanism that does not require any extra parameters, computation, or memory access costs during inference.
Our implementation will be available on GitHub.
arXiv Detail & Related papers (2022-06-15T10:13:34Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete convolutional recurrent neural network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - Spatio-Temporal Inception Graph Convolutional Networks for
Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z) - Attention-Guided Network for Iris Presentation Attack Detection [13.875545441867137]
We propose attention-guided iris presentation attack detection (AG-PAD) to augment CNNs with attention mechanisms.
Experiments involving both a JHU-APL proprietary dataset and the benchmark LivDet-Iris-2017 dataset suggest that the proposed method achieves promising results.
arXiv Detail & Related papers (2020-10-23T19:23:51Z) - Single Image Super-Resolution via a Holistic Attention Network [87.42409213909269]
We propose a new holistic attention network (HAN) to model the holistic interdependencies among layers, channels, and positions.
The proposed HAN adaptively emphasizes hierarchical features by considering correlations among layers.
Experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
arXiv Detail & Related papers (2020-08-20T04:13:15Z) - GCN for HIN via Implicit Utilization of Attention and Meta-paths [104.24467864133942]
Heterogeneous information network (HIN) embedding aims to map the structure and semantic information in a HIN to distributed representations.
We propose a novel neural network method via implicitly utilizing attention and meta-paths.
We first use the multi-layer graph convolutional network (GCN) framework, which performs a discriminative aggregation at each layer.
We then give an effective relaxation and improvement via introducing a new propagation operation which can be separated from aggregation.
arXiv Detail & Related papers (2020-07-06T11:09:40Z) - Weakly Supervised Attention Pyramid Convolutional Neural Network for
Fine-Grained Visual Classification [71.96618723152487]
We introduce Attention Pyramid Convolutional Neural Network (AP-CNN)
AP-CNN learns both high-level semantic and low-level detailed feature representation.
It can be trained end-to-end, without the need of additional bounding box/part annotations.
arXiv Detail & Related papers (2020-02-09T12:33:23Z) - Hybrid Multiple Attention Network for Semantic Segmentation in Aerial
Images [24.35779077001839]
We propose a novel attention-based framework named Hybrid Multiple Attention Network (HMANet) to adaptively capture global correlations.
We introduce a simple yet effective region shuffle attention (RSA) module to reduce feature redundancy and improve the efficiency of the self-attention mechanism.
arXiv Detail & Related papers (2020-01-09T07:47:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.