Cross-Layer Retrospective Retrieving via Layer Attention
- URL: http://arxiv.org/abs/2302.03985v3
- Date: Fri, 10 Feb 2023 04:08:25 GMT
- Title: Cross-Layer Retrospective Retrieving via Layer Attention
- Authors: Yanwen Fang, Yuxi Cai, Jintai Chen, Jingyu Zhao, Guangjian Tian,
Guodong Li
- Abstract summary: We devise a cross-layer attention mechanism called multi-head recurrent layer attention (MRLA).
MRLA sends a query representation of the current layer to all previous layers to retrieve query-related information from different levels of receptive fields.
MRLA improves Top-1 accuracy on ResNet-50 by 1.6% while introducing only 0.16M parameters and 0.07B FLOPs.
- Score: 12.423426718300151
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Growing evidence shows that strengthening layer interactions can
enhance the representation power of a deep neural network, while self-attention
excels at learning interdependencies by retrieving query-activated information.
Motivated by this, we devise a cross-layer attention mechanism, called
multi-head recurrent layer attention (MRLA), that sends a query representation
of the current layer to all previous layers to retrieve query-related
information from different levels of receptive fields. A lightweight version
of MRLA is also proposed to reduce the quadratic computation cost. The proposed
layer attention mechanism can enrich the representation power of many
state-of-the-art vision networks, including CNNs and vision transformers. Its
effectiveness has been extensively evaluated in image classification, object
detection and instance segmentation tasks, where improvements can be
consistently observed. For example, MRLA improves Top-1 accuracy on ResNet-50
by 1.6% while introducing only 0.16M parameters and 0.07B FLOPs. Surprisingly,
it boosts performance by a large margin of 3-4% box AP and mask AP in dense
prediction tasks. Our code is available at
https://github.com/joyfang1106/MRLA.
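To make the mechanism concrete, here is a minimal sketch of cross-layer attention, assuming PyTorch. It is an illustration only, not the authors' implementation (see the repository above); the class name `LayerAttention` and the pooled per-layer feature vectors are assumptions for brevity.

```python
# Illustrative sketch only: the current layer's query attends over the
# outputs of all previous layers, as in layer attention. Not the official
# MRLA code; see https://github.com/joyfang1106/MRLA for the real thing.
import torch
import torch.nn as nn

class LayerAttention(nn.Module):
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, current, history):
        # current: (B, dim) features of layer t; history: list of (B, dim)
        # features from layers 1..t, each a different receptive-field level
        q = current.unsqueeze(1)            # (B, 1, dim) query of layer t
        kv = torch.stack(history, dim=1)    # (B, t, dim) keys/values
        out, _ = self.attn(q, kv, kv)       # retrieve query-related info
        return current + out.squeeze(1)     # residual update of layer t

# toy usage with three "layers" of pooled feature vectors
feats = [torch.randn(2, 64) for _ in range(3)]
layer_attn = LayerAttention(64)
print(layer_attn(feats[-1], feats).shape)   # torch.Size([2, 64])
```

Recomputing attention over the full history at every layer is what makes the cost quadratic in depth; the paper's lightweight variant avoids this by updating the attention output recurrently from one layer to the next.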
Related papers
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement
Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for the optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- Efficient Deep Spiking Multi-Layer Perceptrons with Multiplication-Free Inference [13.924924047051782]
Deep convolutional architectures for Spiking Neural Networks (SNNs) have significantly enhanced image classification performance and reduced computational burdens.
This research explores a new pathway, drawing inspiration from the progress made in Multi-Layer Perceptrons (MLPs).
We propose an innovative spiking architecture that uses batch normalization to retain MFI (multiplication-free inference) compatibility.
We establish an efficient multi-stage spiking network that effectively blends global receptive fields with local feature extraction.
arXiv Detail & Related papers (2023-06-21T16:52:20Z)
- Systematic Architectural Design of Scale Transformed Attention Condenser DNNs via Multi-Scale Class Representational Response Similarity Analysis [93.0013343535411]
We propose a novel type of analysis called Multi-Scale Class Representational Response Similarity Analysis (ClassRepSim).
We show that adding STAC modules to ResNet-style architectures can result in up to a 1.6% increase in top-1 accuracy.
Results from ClassRepSim analysis can be used to select an effective parameterization of the STAC module resulting in competitive performance.
arXiv Detail & Related papers (2023-06-16T18:29:26Z)
- Kernel function impact on convolutional neural networks [10.98068123467568]
We study the usage of kernel functions at different layers of a convolutional neural network.
We show how one can effectively leverage kernel functions by introducing more distortion-aware pooling layers.
We propose Kernelized Dense Layers (KDL), which replace fully-connected layers.
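As a rough illustration of the idea, a kernelized dense layer replaces the dot product of a fully-connected layer with a kernel evaluation between the input and each weight vector; the RBF kernel and the `gamma` value below are assumptions made for this sketch, not necessarily the paper's exact formulation.

```python
# Hedged sketch of a kernelized dense layer: each output unit responds with
# an RBF kernel between the input and its weight vector, instead of a dot
# product. Kernel choice and `gamma` are illustrative assumptions.
import torch
import torch.nn as nn

class KernelizedDense(nn.Module):
    def __init__(self, in_features, out_features, gamma=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.gamma = gamma

    def forward(self, x):
        d2 = torch.cdist(x, self.weight).pow(2)  # squared distances (B, out)
        return torch.exp(-self.gamma * d2)       # RBF responses in (0, 1]

layer = KernelizedDense(16, 4)
print(layer(torch.randn(2, 16)).shape)  # torch.Size([2, 4])
```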
arXiv Detail & Related papers (2023-02-20T19:57:01Z)
- A Generic Shared Attention Mechanism for Various Backbone Neural Networks [53.36677373145012]
Self-attention modules (SAMs) produce strongly correlated attention maps across different layers.
Dense-and-Implicit Attention (DIA) shares SAMs across layers and employs a long short-term memory (LSTM) module.
Our simple yet effective DIA can consistently enhance various network backbones.
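A hedged sketch of how such a shared module might look, assuming PyTorch: one channel-attention block is reused at every layer, with an LSTM cell carrying calibration state along the depth. All names and sizes are illustrative; the actual DIA design may differ.

```python
# Illustrative only: a single attention module shared across layers, with
# an LSTM state threaded along the network depth (the DIA idea in spirit).
import torch
import torch.nn as nn

class SharedChannelAttention(nn.Module):
    def __init__(self, channels, hidden=32):
        super().__init__()
        self.lstm = nn.LSTMCell(channels, hidden)
        self.proj = nn.Linear(hidden, channels)

    def forward(self, x, state):
        desc = x.mean(dim=(2, 3))           # (B, C) channel descriptor
        h, c = self.lstm(desc, state)       # recurrent state across layers
        gate = torch.sigmoid(self.proj(h))  # (B, C) channel gates
        return x * gate[:, :, None, None], (h, c)

# the same module (and its state) is reused at every layer
x = torch.randn(2, 64, 8, 8)
dia = SharedChannelAttention(64)
state = (torch.zeros(2, 32), torch.zeros(2, 32))
for _ in range(3):
    x, state = dia(x, state)
```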
arXiv Detail & Related papers (2022-10-27T13:24:08Z)
- Learning Target-aware Representation for Visual Tracking via Informative Interactions [49.552877881662475]
We introduce a novel backbone architecture that improves the target-perception ability of feature representations for tracking.
The proposed GIM module and InBN mechanism are general and applicable to different backbone types, including CNN and Transformer.
arXiv Detail & Related papers (2022-01-07T16:22:27Z)
- Recurrence along Depth: Deep Convolutional Neural Networks with Recurrent Layer Aggregation [5.71305698739856]
This paper introduces the concept of layer aggregation to describe how information from previous layers can be reused to better extract features at the current layer.
We propose a very lightweight module, called recurrent layer aggregation (RLA), that makes use of the sequential structure of layers in a deep CNN.
Our RLA module is compatible with many mainstream deep CNNs, including ResNets, Xception and MobileNetV2.
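A minimal sketch of the recurrent-aggregation idea, assuming PyTorch and equal spatial sizes across stages: a small hidden state is updated after each stage, RNN-style, so later layers can reuse information from all earlier ones. Channel sizes and the fusion rule are illustrative, not the paper's exact module.

```python
# Illustrative sketch: an RNN-like hidden state carried along CNN depth,
# aggregating earlier layers' information (the RLA idea in spirit).
import torch
import torch.nn as nn

class RLACell(nn.Module):
    def __init__(self, feat_ch, hidden_ch=32):
        super().__init__()
        self.fuse = nn.Conv2d(feat_ch + hidden_ch, hidden_ch, kernel_size=1)

    def forward(self, feat, hidden):
        # fuse the current stage's feature map with the running aggregate
        return torch.tanh(self.fuse(torch.cat([feat, hidden], dim=1)))

# carry the aggregate across three stages of matching spatial size
cell = RLACell(64)
h = torch.zeros(2, 32, 8, 8)
for _ in range(3):
    x = torch.randn(2, 64, 8, 8)  # stand-in for each stage's output
    h = cell(x, h)
```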
arXiv Detail & Related papers (2021-10-22T15:36:33Z)
- Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to improve the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z)
- Learning Deep Interleaved Networks with Asymmetric Co-Attention for Image Restoration [65.11022516031463]
We present a deep interleaved network (DIN) that learns how information at different states should be combined for high-quality (HQ) image reconstruction.
In this paper, we propose asymmetric co-attention (AsyCA), which is attached at each interleaved node to model feature dependencies.
Our presented DIN can be trained end-to-end and applied to various image restoration tasks.
arXiv Detail & Related papers (2020-10-29T15:32:00Z)
- MPG-Net: Multi-Prediction Guided Network for Segmentation of Retinal Layers in OCT Images [11.370735571629602]
We propose a novel multi-prediction guided attention network (MPG-Net) for automated retinal layer segmentation in OCT images.
MPG-Net consists of two major steps that strengthen the discriminative power of a U-shaped fully convolutional network (FCN) for reliable automated segmentation.
arXiv Detail & Related papers (2020-09-28T21:22:22Z)
- Hybrid Multiple Attention Network for Semantic Segmentation in Aerial Images [24.35779077001839]
We propose a novel attention-based framework named Hybrid Multiple Attention Network (HMANet) to adaptively capture global correlations.
We introduce a simple yet effective region shuffle attention (RSA) module to reduce feature redundancy and improve the efficiency of the self-attention mechanism.
arXiv Detail & Related papers (2020-01-09T07:47:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.