EMC2A-Net: An Efficient Multibranch Cross-channel Attention Network for
SAR Target Classification
- URL: http://arxiv.org/abs/2208.01836v1
- Date: Wed, 3 Aug 2022 04:31:52 GMT
- Title: EMC2A-Net: An Efficient Multibranch Cross-channel Attention Network for
SAR Target Classification
- Authors: Xiang Yu, Zhe Geng, Xiaohua Huang, Qinglu Wang and Daiyin Zhu
- Abstract summary: This paper proposes two residual blocks, namely EMC2A blocks with multiscale receptive fields (RFs), based on a multibranch structure, and then designs an efficient isotropic-architecture deep CNN (DCNN), EMC2A-Net.
EMC2A blocks utilize parallel dilated convolutions with different dilation rates, which can effectively capture multiscale context features without significantly increasing the computational burden.
The paper also proposes a multiscale feature cross-channel attention module, namely the EMC2A module, which adopts a local multiscale feature interaction strategy without dimensionality reduction.
- Score: 10.479559839534033
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, convolutional neural networks (CNNs) have shown great potential in synthetic aperture radar (SAR) target recognition. SAR images exhibit strong granularity and contain texture features at different scales, such as speckle noise, target dominant scatterers and target contours, which are rarely considered in traditional CNN models. This paper proposes two residual blocks, namely EMC2A blocks with multiscale receptive fields (RFs), based on a multibranch structure, and then designs an efficient isotropic-architecture deep CNN (DCNN), EMC2A-Net. EMC2A blocks utilize parallel dilated convolutions with different dilation rates, which can effectively capture multiscale context features without significantly increasing the computational burden. To further improve the efficiency of multiscale feature fusion, this paper proposes a multiscale feature cross-channel attention module, namely the EMC2A module, which adopts a local multiscale feature interaction strategy without dimensionality reduction. This strategy adaptively adjusts the weight of each channel through an efficient one-dimensional (1D) circular convolution and a sigmoid function to guide attention at the global channel-wise level. Comparative results on the MSTAR dataset show that EMC2A-Net outperforms existing models of the same type while having a relatively lightweight network structure. The ablation results show that the EMC2A module significantly improves model performance with only a few additional parameters and appropriate cross-channel interactions.
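As an illustration of the parallel dilated-convolution idea described in the abstract, below is a minimal PyTorch sketch of a multibranch residual block in which each branch uses a different dilation rate. The branch count, channel split, and dilation rates (1, 2, 3) are illustrative assumptions, not the paper's exact EMC2A configuration.

```python
import torch
import torch.nn as nn


class MultiBranchDilatedBlock(nn.Module):
    """Residual block whose parallel branches use different dilation rates."""

    def __init__(self, channels: int, dilations=(1, 2, 3)):
        super().__init__()
        assert channels % len(dilations) == 0, "channels must split evenly across branches"
        branch_ch = channels // len(dilations)
        self.branches = nn.ModuleList([
            nn.Sequential(
                # Same 3x3 kernel in every branch; the dilation rate alone
                # enlarges the receptive field without adding parameters.
                nn.Conv2d(channels, branch_ch, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Concatenate the multiscale branch outputs, fuse them with a 1x1
        # convolution, and add the residual shortcut.
        multiscale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.relu(self.fuse(multiscale) + x)


# Usage: a feature map with 96 channels keeps its shape through the block.
x = torch.randn(2, 96, 32, 32)
out = MultiBranchDilatedBlock(96)(x)  # torch.Size([2, 96, 32, 32])
```

Because each branch pads by its dilation rate, every branch preserves the input resolution, so the outputs can be concatenated and fused before the residual addition.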
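The cross-channel attention the abstract describes (global pooling, a 1D circular convolution across channels, and a sigmoid, with no dimensionality reduction) can be sketched in the same spirit; the module name and the kernel size below are assumed for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn


class CircularChannelAttention(nn.Module):
    """Channel attention via a 1D circular convolution over pooled channel descriptors."""

    def __init__(self, kernel_size: int = 5):
        super().__init__()
        assert kernel_size % 2 == 1, "odd kernel keeps the channel length unchanged"
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Circular padding lets the filter wrap around the channel dimension,
        # so the first and last channels also interact locally.
        self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size,
                              padding=kernel_size // 2,
                              padding_mode='circular', bias=False)

    def forward(self, x):
        b, c, _, _ = x.shape
        # (B, C, H, W) -> (B, 1, C): one scalar descriptor per channel.
        desc = self.pool(x).view(b, 1, c)
        # Local cross-channel interaction, no dimensionality reduction.
        weights = torch.sigmoid(self.conv(desc)).view(b, c, 1, 1)
        return x * weights


# Usage: reweight the channels of a feature map, ECA-style.
x = torch.randn(2, 96, 32, 32)
out = CircularChannelAttention(kernel_size=5)(x)  # same shape as x
```

The circular padding means the learned 1D filter treats the channel axis as a ring, so local cross-channel interaction covers the boundary channels as well, similar in spirit to ECA-style attention without a reduction bottleneck.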
Related papers
- iiANET: Inception Inspired Attention Hybrid Network for efficient Long-Range Dependency [0.0]
We introduce iiANET (Inception Inspired Attention Network), an efficient hybrid model designed to capture long-range dependencies in complex images.
The fundamental building block, iiABlock, integrates global 2D-MHSA (Multi-Head Self-Attention) with Registers, MBConv2 (MobileNetV2-based convolution), and dilated convolution in parallel.
We serially integrate an ECANET (Efficient Channel Attention Network) at the end of each iiABlock to calibrate channel-wise attention for enhanced model performance.
arXiv Detail & Related papers (2024-07-10T12:39:02Z) - DA-Flow: Dual Attention Normalizing Flow for Skeleton-based Video Anomaly Detection [52.74152717667157]
We propose a lightweight module called the Dual Attention Module (DAM) for capturing cross-dimension interaction relationships in spatio-temporal skeletal data.
It employs a frame attention mechanism to identify the most significant frames and a skeleton attention mechanism to capture broader relationships across fixed partitions with minimal parameters and FLOPs.
arXiv Detail & Related papers (2024-06-05T06:18:03Z) - A-SDM: Accelerating Stable Diffusion through Model Assembly and Feature Inheritance Strategies [51.7643024367548]
Stable Diffusion Model is a prevalent and effective model for text-to-image (T2I) and image-to-image (I2I) generation.
This study focuses on reducing redundant computation in SDM and optimizing the model through both tuning and tuning-free methods.
arXiv Detail & Related papers (2024-05-31T21:47:05Z) - Bilateral Network with Residual U-blocks and Dual-Guided Attention for
Real-time Semantic Segmentation [18.393208069320362]
We design a new fusion mechanism for two-branch architectures that is guided by attention computation.
To be precise, we use the proposed Dual-Guided Attention (DGA) module to replace some multi-scale transformations.
Experiments on the Cityscapes and CamVid datasets show the effectiveness of our method.
arXiv Detail & Related papers (2023-10-31T09:20:59Z) - An Efficient Speech Separation Network Based on Recurrent Fusion Dilated
Convolution and Channel Attention [0.2538209532048866]
We present an efficient speech separation neural network, ARFDCN, which combines dilated convolutions, multi-scale fusion (MSF), and channel attention.
Experimental results indicate that the model achieves a decent balance between performance and computational efficiency.
arXiv Detail & Related papers (2023-06-09T13:30:27Z) - Efficient Encoder-Decoder and Dual-Path Conformer for Comprehensive
Feature Learning in Speech Enhancement [0.2538209532048866]
This paper proposes a time-frequency (T-F) domain speech enhancement network (DPCFCS-Net).
It incorporates improved densely connected blocks, dual-path modules, convolution-augmented transformers (conformers), channel attention, and spatial attention.
Compared with previous models, our proposed model has a more efficient encoder-decoder and can learn comprehensive features.
arXiv Detail & Related papers (2023-06-09T12:52:01Z) - Joint Channel Estimation and Feedback with Masked Token Transformers in
Massive MIMO Systems [74.52117784544758]
This paper proposes an encoder-decoder based network that unveils the intrinsic frequency-domain correlation within the CSI matrix.
The entire encoder-decoder network is utilized for channel compression.
Our method outperforms state-of-the-art channel estimation and feedback techniques in joint tasks.
arXiv Detail & Related papers (2023-06-08T06:15:17Z) - DCELANM-Net:Medical Image Segmentation based on Dual Channel Efficient
Layer Aggregation Network with Learner [0.0]
The DCELANM-Net structure is a model that ingeniously combines a Dual Channel Efficient Layer Aggregation Network (DCELAN) and a Micro Masked Autoencoder (Micro-MAE).
We adopted Micro-MAE as the learner of the model. In addition to being straightforward in its methodology, it also offers a self-supervised learning method, which has the benefit of being highly scalable.
arXiv Detail & Related papers (2023-04-19T12:57:52Z) - A Generic Shared Attention Mechanism for Various Backbone Neural Networks [53.36677373145012]
Self-attention modules (SAMs) produce strongly correlated attention maps across different layers.
Dense-and-Implicit Attention (DIA) shares SAMs across layers and employs a long short-term memory module.
Our simple yet effective DIA can consistently enhance various network backbones.
arXiv Detail & Related papers (2022-10-27T13:24:08Z) - Anchor-free Small-scale Multispectral Pedestrian Detection [88.7497134369344]
We propose a method for effective and efficient multispectral fusion of the two modalities in an adapted single-stage anchor-free base architecture.
We aim at learning pedestrian representations based on object center and scale rather than direct bounding box predictions.
Results show our method's effectiveness in detecting small-scale pedestrians.
arXiv Detail & Related papers (2020-08-19T13:13:01Z) - When Residual Learning Meets Dense Aggregation: Rethinking the
Aggregation of Deep Neural Networks [57.0502745301132]
We propose Micro-Dense Nets, a novel architecture with global residual learning and local micro-dense aggregations.
Our micro-dense block can be integrated with neural architecture search based models to boost their performance.
arXiv Detail & Related papers (2020-04-19T08:34:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.