The Lottery Ticket Hypothesis for Self-attention in Convolutional Neural
Network
- URL: http://arxiv.org/abs/2207.07858v1
- Date: Sat, 16 Jul 2022 07:08:59 GMT
- Title: The Lottery Ticket Hypothesis for Self-attention in Convolutional Neural
Network
- Authors: Zhongzhan Huang, Senwei Liang, Mingfu Liang, Wei He, Haizhao Yang and
Liang Lin
- Abstract summary: Recently, many plug-and-play self-attention modules (SAMs) have been proposed to enhance model generalization by exploiting the internal information of deep convolutional neural networks (CNNs).
We empirically find and verify two counterintuitive phenomena: (a) connecting SAMs to all blocks does not always bring the largest performance boost, and connecting them to only a subset of blocks can be even better; (b) adding SAMs to a CNN does not always bring a performance boost, and may even harm the performance of the original CNN backbone.
- Score: 69.54809052377189
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, many plug-and-play self-attention modules (SAMs) have been
proposed to enhance model generalization by exploiting the internal information
of deep convolutional neural networks (CNNs). In general, previous works ignore
the question of where to plug in the SAMs: they connect a SAM to every block of
the entire CNN backbone by default, so the extra computational cost and number
of parameters grow with network depth. However, we empirically find and verify
two counterintuitive phenomena: (a) connecting SAMs to all blocks does not
always bring the largest performance boost, and connecting them to only a
subset of blocks can be even better; (b) adding SAMs to a CNN does not always
bring a performance boost, and may even harm the performance of the original
CNN backbone. Therefore, we articulate and demonstrate the Lottery Ticket
Hypothesis for Self-attention Networks: a full self-attention network contains
a subnetwork with sparse self-attention connections that can (1) accelerate
inference, (2) reduce the extra parameter increment, and (3) maintain accuracy.
In addition to the empirical evidence, this hypothesis is also supported by our
theoretical analysis. Furthermore, we propose a simple yet effective
reinforcement-learning-based method to search for the ticket, i.e., the
connection scheme that satisfies the three above-mentioned conditions.
Extensive experiments on widely used benchmark datasets and popular
self-attention networks show the effectiveness of our method. Moreover, our
experiments illustrate that the searched ticket can transfer to other vision
tasks, e.g., crowd counting and segmentation.
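As a concrete illustration of the "connection scheme" idea, the sketch below builds a small CNN backbone in which each residual block may or may not be wrapped with a plug-and-play SAM, controlled by a binary mask. The SE-style attention module, the block design, and the particular 0/1 scheme are illustrative assumptions for this sketch, not the paper's actual architecture or its searched ticket.

```python
# Minimal sketch (not the authors' code): a CNN backbone whose residual blocks
# can each be optionally wrapped with a plug-and-play self-attention module (SAM).
# The "ticket" is a binary connection scheme deciding which blocks get a SAM.
import torch
import torch.nn as nn


class SEAttention(nn.Module):
    """Squeeze-and-excitation style module, used here as a stand-in SAM (assumption)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global average pool -> channel-wise gating weights
        w = self.fc(x.mean(dim=(2, 3)))
        return x * w[:, :, None, None]


class BasicBlock(nn.Module):
    """Plain residual block; `use_sam` is one bit of the connection scheme."""

    def __init__(self, channels: int, use_sam: bool):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.sam = SEAttention(channels) if use_sam else nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(x + self.sam(self.body(x)))


def build_backbone(num_blocks: int, channels: int, scheme: list[int]) -> nn.Sequential:
    """`scheme` is a 0/1 list of length `num_blocks`: the sparse connection ticket."""
    assert len(scheme) == num_blocks
    return nn.Sequential(*[BasicBlock(channels, bool(b)) for b in scheme])


if __name__ == "__main__":
    # Full self-attention network vs. a sparse "ticket" attaching SAMs to 2 of 8 blocks.
    full = build_backbone(8, 64, [1] * 8)
    ticket = build_backbone(8, 64, [0, 0, 1, 0, 0, 0, 1, 0])
    x = torch.randn(2, 64, 32, 32)
    print(full(x).shape, ticket(x).shape)
    print("params (full):  ", sum(p.numel() for p in full.parameters()))
    print("params (ticket):", sum(p.numel() for p in ticket.parameters()))
```

The paper's ticket search is reinforcement-learning-based; the toy loop below shows one hedged way such a search could look, sampling connection schemes from a Bernoulli policy and updating it with REINFORCE. The reward here is a placeholder (a toy utility minus a per-SAM cost); in the paper's setting it would instead reflect the three criteria above (inference speed, parameter increment, accuracy). This is a sketch under those assumptions, not the authors' method.

```python
import torch

# Hypothetical stand-in reward: in practice this would score a candidate scheme
# by, e.g., validation accuracy of the backbone minus a penalty for the extra
# parameters/latency of the attached SAMs. Here we reward keeping two "useful"
# positions while penalizing every attached SAM (toy objective).
USEFUL = torch.tensor([0., 0., 1., 0., 0., 0., 1., 0.])

def reward(scheme: torch.Tensor) -> float:
    utility = (scheme * USEFUL).sum()      # benefit of the kept SAMs
    cost = 0.2 * scheme.sum()              # penalty per attached SAM
    return (utility - cost).item()

logits = torch.zeros(8, requires_grad=True)  # one logit per backbone block
opt = torch.optim.Adam([logits], lr=0.1)
baseline = 0.0
for step in range(300):
    dist = torch.distributions.Bernoulli(logits=logits)
    scheme = dist.sample()                   # candidate connection scheme (0/1 per block)
    r = reward(scheme)
    baseline = 0.9 * baseline + 0.1 * r      # moving-average baseline reduces variance
    loss = -dist.log_prob(scheme).sum() * (r - baseline)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned attach probabilities:", torch.sigmoid(logits).detach())
```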
Related papers
- Causal GNNs: A GNN-Driven Instrumental Variable Approach for Causal Inference in Networks [0.0]
CgNN is a novel approach to mitigate hidden confounder bias and improve causal effect estimation.
Our results demonstrate that CgNN effectively mitigates hidden confounder bias and offers a robust GNN-driven IV framework for causal inference in complex network data.
arXiv Detail & Related papers (2024-09-13T05:39:00Z) - Exploring the Lottery Ticket Hypothesis with Explainability Methods:
Insights into Sparse Network Performance [13.773050123620592]
The Lottery Ticket Hypothesis (LTH) posits that a deep network contains a subnetwork with performance comparable or superior to the original model.
In this work, we examine why the performance of the pruned networks gradually increases or decreases.
arXiv Detail & Related papers (2023-07-07T18:33:52Z) - Neural networks trained with SGD learn distributions of increasing
complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics, and only exploit higher-order statistics later during training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z) - A Generic Shared Attention Mechanism for Various Backbone Neural Networks [53.36677373145012]
Self-attention modules (SAMs) produce strongly correlated attention maps across different layers.
Dense-and-Implicit Attention (DIA) shares SAMs across layers and employs a long short-term memory module.
Our simple yet effective DIA can consistently enhance various network backbones.
arXiv Detail & Related papers (2022-10-27T13:24:08Z) - NetSentry: A Deep Learning Approach to Detecting Incipient Large-scale
Network Attacks [9.194664029847019]
We show how to use Machine Learning for Network Intrusion Detection (NID) in a principled way.
We propose NetSentry, perhaps the first NIDS of its kind, which builds on Bi-ALSTM, an original ensemble of sequential neural models.
We demonstrate F1 score gains above 33% over the state of the art, as well as up to 3 times higher detection rates for attacks such as XSS and web brute-force.
arXiv Detail & Related papers (2022-02-20T17:41:02Z) - Sifting out the features by pruning: Are convolutional networks the
winning lottery ticket of fully connected ones? [16.5745082442791]
We study the inductive bias that pruning imprints in such "winning lottery tickets".
We show that the surviving node connectivity is local in input space and organized in patterns reminiscent of those found in convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-04-27T17:25:54Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - CIFS: Improving Adversarial Robustness of CNNs via Channel-wise
Importance-based Feature Selection [186.34889055196925]
We investigate the adversarial robustness of CNNs from the perspective of channel-wise activations.
We observe that adversarial training (AT) robustifies CNNs by aligning the channel-wise activations of adversarial data with those of their natural counterparts.
We introduce a novel mechanism, i.e., Channel-wise Importance-based Feature Selection (CIFS).
arXiv Detail & Related papers (2021-02-10T08:16:43Z) - Sequence-to-Sequence Load Disaggregation Using Multi-Scale Residual
Neural Network [4.094944573107066]
Non-Intrusive Load Monitoring (NILM) has received increasing attention as a cost-effective way to monitor electricity usage.
Deep neural networks have shown great potential in the field of load disaggregation.
arXiv Detail & Related papers (2020-09-25T17:41:28Z) - Channel Equilibrium Networks for Learning Deep Representation [63.76618960820138]
This work shows that the combination of normalization and the rectified linear function leads to inhibited channels.
Unlike prior works that simply remove the inhibited channels, we propose to "wake them up" during training by designing a novel neural building block.
Channel Equilibrium (CE) block enables channels at the same layer to contribute equally to the learned representation.
arXiv Detail & Related papers (2020-02-29T09:02:31Z)