Systematic Architectural Design of Scale Transformed Attention Condenser
DNNs via Multi-Scale Class Representational Response Similarity Analysis
- URL: http://arxiv.org/abs/2306.10128v1
- Date: Fri, 16 Jun 2023 18:29:26 GMT
- Title: Systematic Architectural Design of Scale Transformed Attention Condenser
DNNs via Multi-Scale Class Representational Response Similarity Analysis
- Authors: Andre Hryniowski, Alexander Wong
- Abstract summary: We propose a novel type of analysis called Multi-Scale Class Representational Response Similarity Analysis (ClassRepSim).
We show that adding STAC modules to ResNet style architectures can result in up to a 1.6% increase in top-1 accuracy.
Results from ClassRepSim analysis can be used to select an effective parameterization of the STAC module resulting in competitive performance.
- Score: 93.0013343535411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-attention mechanisms are commonly included in convolutional neural
networks to achieve an improved efficiency-performance balance. However, adding
self-attention mechanisms introduces additional hyperparameters that must be tuned
for the application at hand. In this work we propose a novel type of DNN analysis
called Multi-Scale Class Representational Response Similarity Analysis
(ClassRepSim) which can be used to identify specific design interventions that
lead to more efficient self-attention convolutional neural network
architectures. Using insights gained from ClassRepSim we propose the Spatial
Transformed Attention Condenser (STAC) module, a novel attention-condenser
based self-attention module. We show that adding STAC modules to ResNet style
architectures can result in up to a 1.6% increase in top-1 accuracy compared to
vanilla ResNet models and up to a 0.5% increase in top-1 accuracy compared to
SENet models on the ImageNet64x64 dataset, at the cost of up to 1.7% increase
in FLOPs and 2x the number of parameters. In addition, we demonstrate that
results from ClassRepSim analysis can be used to select an effective
parameterization of the STAC module resulting in competitive performance
compared to an extensive parameter search.
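For illustration, below is a minimal PyTorch sketch of an attention-condenser-style self-attention block in the spirit of the STAC module described above. It is not the authors' implementation; the class name, layer choices, and the learned scale parameter are assumptions. It shows the characteristic condense-embed-expand-gate structure of attention condensers.
```python
# Hypothetical sketch of an attention-condenser-style block (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCondenser(nn.Module):
    """Condense -> embed -> expand -> gate, as in attention condensers."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Condensation: spatially downsample to a lower-dimensional embedding.
        self.condense = nn.MaxPool2d(kernel_size=2, stride=2)
        # Lightweight embedding of the condensed activations.
        self.embed = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=3, padding=1),
        )
        # Learned per-channel scale applied to the attention values (assumed).
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.embed(self.condense(x))
        # Expansion: upsample attention values back to the input resolution.
        a = F.interpolate(a, size=x.shape[-2:], mode="nearest")
        a = torch.sigmoid(a)
        # Gate the input features with the attention values.
        return x * a * self.scale

# Usage: drop the module after a ResNet-style residual block.
feats = torch.randn(2, 64, 32, 32)
print(AttentionCondenser(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```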
Related papers
- Auto-Spikformer: Spikformer Architecture Search [22.332981906087785]
Self-attention mechanisms have been integrated into Spiking Neural Networks (SNNs).
Recent advancements in SNN architecture, such as Spikformer, have demonstrated promising outcomes.
We propose Auto-Spikformer, a one-shot Transformer Architecture Search (TAS) method, which automates the quest for an optimized Spikformer architecture.
arXiv Detail & Related papers (2023-06-01T15:35:26Z)
- Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture [68.13678918660872]
We design a more capable parameter-sharing architecture based on the matrix product operator (MPO).
MPO decomposition reorganizes and factorizes the information of a parameter matrix into two parts.
Our architecture shares the central tensor across all layers to reduce the model size (see the sketch after this entry).
arXiv Detail & Related papers (2023-03-27T02:34:09Z)
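A minimal sketch of the central-tensor-sharing idea from the MPO paper above, assuming a simple three-factor decomposition W_l ≈ U_l C V_l rather than a full MPO chain; `SharedCoreLinear` and all dimensions are hypothetical, not the paper's implementation.
```python
# Illustrative only: per-layer factors around a central tensor shared by all layers.
import torch
import torch.nn as nn

class SharedCoreLinear(nn.Module):
    """W_l ~ U_l @ C @ V_l, with the central factor C shared across layers."""
    def __init__(self, dim: int, rank: int, core: nn.Parameter):
        super().__init__()
        self.U = nn.Parameter(torch.randn(dim, rank) * 0.02)  # layer-specific
        self.V = nn.Parameter(torch.randn(rank, dim) * 0.02)  # layer-specific
        self.core = core  # shared central tensor (rank x rank)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ (self.U @ self.core @ self.V)

rank, dim, depth = 32, 256, 12
core = nn.Parameter(torch.eye(rank))  # one central tensor for all layers
layers = nn.ModuleList([SharedCoreLinear(dim, rank, core) for _ in range(depth)])

x = torch.randn(4, dim)
for layer in layers:
    x = torch.relu(layer(x))
print(x.shape)  # torch.Size([4, 256])
```
With the central factor shared, each extra layer adds only the two thin per-layer factors, which is the rough intuition behind the parameter savings claimed above.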
- Receptive Field Refinement for Convolutional Neural Networks Reliably Improves Predictive Performance [1.52292571922932]
We present a new approach to receptive field analysis that yields theoretical and empirical performance gains.
Our approach is able to improve ImageNet1K performance across a wide range of well-known, state-of-the-art (SOTA) model classes.
arXiv Detail & Related papers (2022-11-26T05:27:44Z)
- Prompt Tuning for Parameter-efficient Medical Image Segmentation [79.09285179181225]
We propose and investigate several contributions to achieve a parameter-efficient but effective adaptation for semantic segmentation on two medical imaging datasets.
We pre-train this architecture with a dedicated dense self-supervision scheme based on assignments to online generated prototypes.
We demonstrate that the resulting neural network model narrows the gap between fully fine-tuned and parameter-efficiently adapted models.
arXiv Detail & Related papers (2022-11-16T21:55:05Z)
- NAR-Former: Neural Architecture Representation Learning towards Holistic Attributes Prediction [37.357949900603295]
We propose a neural architecture representation model that can be used to estimate attributes holistically.
Experimental results show that our proposed framework can be used to predict the latency and accuracy attributes of both cell architectures and whole deep neural networks.
arXiv Detail & Related papers (2022-11-15T10:15:21Z)
- Dynamic Kernels and Channel Attention with Multi-Layer Embedding Aggregation for Speaker Verification [28.833851817220616]
This paper proposes an approach to increasing model resolution capability using attention-based dynamic kernels in a convolutional neural network (a generic sketch follows this entry).
The proposed dynamic convolutional model achieved 1.62% EER and 0.18 minDCF on the VoxCeleb1 test set, a 17% relative improvement over ECAPA-TDNN.
arXiv Detail & Related papers (2022-11-03T17:13:28Z)
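The dynamic-kernels entry above can be illustrated with a generic dynamic-convolution sketch (CondConv-style), where input-dependent attention mixes K candidate kernels; `DynamicConv2d` and its details are assumptions, not the paper's model.
```python
# Generic dynamic convolution sketch: attention over K kernel banks (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Mix K candidate kernels with input-dependent attention weights."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3, num_kernels: int = 4):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, k, k) * 0.02)
        self.attend = nn.Linear(in_ch, num_kernels)  # attention over kernels
        self.pad = k // 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global context -> softmax attention over the K kernel banks.
        context = x.mean(dim=(2, 3))                   # (B, in_ch)
        attn = F.softmax(self.attend(context), dim=1)  # (B, K)
        # Per-sample mixed kernel; batch loop kept simple for clarity.
        outs = []
        for i in range(x.size(0)):
            w = torch.einsum("k,koihw->oihw", attn[i], self.weight)
            outs.append(F.conv2d(x[i:i + 1], w, padding=self.pad))
        return torch.cat(outs, dim=0)

layer = DynamicConv2d(16, 32)
print(layer(torch.randn(2, 16, 8, 8)).shape)  # torch.Size([2, 32, 8, 8])
```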
- A Generic Shared Attention Mechanism for Various Backbone Neural Networks [53.36677373145012]
Self-attention modules (SAMs) produce strongly correlated attention maps across different layers.
Dense-and-Implicit Attention (DIA) shares SAMs across layers and employs a long short-term memory module.
Our simple yet effective DIA consistently enhances various network backbones (see the sketch below).
arXiv Detail & Related papers (2022-10-27T13:24:08Z)
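A minimal sketch of the layer-sharing idea in DIA: one attention module, driven by an LSTM cell whose state is carried across layers, is reused after every block. The module name and the assumption that all blocks share one channel width are illustrative, not taken from the paper.
```python
# Illustrative sketch of a single attention module shared across layers.
import torch
import torch.nn as nn

class SharedLSTMAttention(nn.Module):
    """One gating module shared across layers; its LSTM state links them."""
    def __init__(self, channels: int):
        super().__init__()
        self.cell = nn.LSTMCell(channels, channels)

    def forward(self, x, state):
        z = x.mean(dim=(2, 3))       # channel descriptor via global avg pooling
        h, c = self.cell(z, state)   # recurrent state ties the layers together
        gate = torch.sigmoid(h)[:, :, None, None]
        return x * gate, (h, c)

channels, depth = 64, 4
shared = SharedLSTMAttention(channels)  # one instance reused by all layers
blocks = nn.ModuleList([nn.Conv2d(channels, channels, 3, padding=1)
                        for _ in range(depth)])

x = torch.randn(2, channels, 16, 16)
h = c = torch.zeros(2, channels)
for block in blocks:
    x = torch.relu(block(x))
    x, (h, c) = shared(x, (h, c))   # same attention module at every layer
print(x.shape)  # torch.Size([2, 64, 16, 16])
```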
- Spikformer: When Spiking Neural Network Meets Transformer [102.91330530210037]
We consider two biologically plausible structures, the Spiking Neural Network (SNN) and the self-attention mechanism.
We propose a novel Spiking Self Attention (SSA) as well as a powerful framework named Spiking Transformer (Spikformer).
arXiv Detail & Related papers (2022-09-29T14:16:49Z)
- Squeezeformer: An Efficient Transformer for Automatic Speech Recognition [99.349598600887]
Conformer is the de facto backbone model for various downstream speech tasks thanks to its hybrid attention-convolution architecture.
We propose the Squeezeformer model, which consistently outperforms the state-of-the-art ASR models under the same training schemes.
arXiv Detail & Related papers (2022-06-02T06:06:29Z)
- Convolution Neural Network Hyperparameter Optimization Using Simplified Swarm Optimization [2.322689362836168]
Convolutional neural networks (CNNs) are widely used in computer vision, but it is not easy to find a network architecture with better performance.
arXiv Detail & Related papers (2021-03-06T00:23:27Z)