AutoDropout: Learning Dropout Patterns to Regularize Deep Networks
- URL: http://arxiv.org/abs/2101.01761v1
- Date: Tue, 5 Jan 2021 19:54:22 GMT
- Title: AutoDropout: Learning Dropout Patterns to Regularize Deep Networks
- Authors: Hieu Pham, Quoc V. Le
- Abstract summary: Dropout or weight decay methods do not leverage the structures of the network's inputs and hidden states.
We show that this method works well both for image recognition on CIFAR-10 and ImageNet and for language modeling on Penn Treebank and WikiText-2.
The learned dropout patterns also transfer to different tasks and datasets, such as from language modeling on Penn Treebank to English-French translation on WMT 2014.
- Score: 82.28118615561912
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks are often over-parameterized and hence benefit from
aggressive regularization. Conventional regularization methods, such as Dropout
or weight decay, do not leverage the structures of the network's inputs and
hidden states. As a result, these conventional methods are less effective than
methods that leverage the structures, such as SpatialDropout and DropBlock,
which randomly drop the values in certain contiguous areas of the hidden states,
setting them to zero. Although the locations of the dropout areas are random,
the patterns of SpatialDropout and DropBlock are manually designed and fixed. Here
we propose to learn the dropout patterns. In our method, a controller learns to
generate a dropout pattern at every channel and layer of a target network, such
as a ConvNet or a Transformer. The target network is then trained with the
dropout pattern, and its resulting validation performance is used as a signal
for the controller to learn from. We show that this method works well both for
image recognition on CIFAR-10 and ImageNet and for language modeling on Penn
Treebank and WikiText-2. The learned dropout patterns also transfer to
different tasks and datasets, such as from language modeling on Penn Treebank
to English-French translation on WMT 2014. Our code will be available.
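The structured dropout patterns that AutoDropout searches over generalize hand-designed ones like SpatialDropout and DropBlock, which zero out contiguous regions of the hidden states. A minimal NumPy sketch of such a block-dropout mask is below; this is an illustration of the general idea, not the paper's implementation, and the function name and block-placement scheme are assumptions for the example.

```python
import numpy as np

def structured_dropout(x, block_size=2, rng=None):
    """Zero one contiguous block_size x block_size region per channel of a
    (C, H, W) feature map, then rescale -- a DropBlock-style structured
    dropout. Illustrative sketch only, not AutoDropout's learned patterns."""
    rng = np.random.default_rng() if rng is None else rng
    c, h, w = x.shape
    out = x.copy()
    for ch in range(c):
        # pick a random top-left corner so the block fits inside the map
        i = rng.integers(0, h - block_size + 1)
        j = rng.integers(0, w - block_size + 1)
        out[ch, i:i + block_size, j:j + block_size] = 0.0
    # rescale surviving activations to preserve the expected magnitude
    keep = 1.0 - (block_size * block_size) / (h * w)
    return out / keep
```

In AutoDropout, rather than fixing the block shape and size by hand as above, a controller proposes the pattern for each channel and layer, and the target network's validation performance serves as the controller's learning signal.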
Related papers
- ChannelDropBack: Forward-Consistent Stochastic Regularization for Deep Networks [5.00301731167245]
Existing techniques often require modifying the architecture of the network by adding specialized layers.
We present ChannelDropBack, a simple regularization approach that introduces randomness only into the backward information flow.
It allows for seamless integration into the training process of any model and layers without the need to change its architecture.
arXiv Detail & Related papers (2024-11-16T21:24:44Z)
- WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration [68.25711405944239]
Deep image registration has demonstrated exceptional accuracy and fast inference.
Recent advances have adopted either multiple cascades or pyramid architectures to estimate dense deformation fields in a coarse-to-fine manner.
We introduce a model-driven WiNet that incrementally estimates scale-wise wavelet coefficients for the displacement/velocity field across various scales.
arXiv Detail & Related papers (2024-07-18T11:51:01Z)
- Revisiting Structured Dropout [11.011268090482577]
ProbDropBlock drops contiguous blocks from feature maps with a probability given by the normalized feature salience values.
We find that with a simple scheduling strategy the proposed approach to structured Dropout consistently improved model performance compared to baselines.
arXiv Detail & Related papers (2022-10-05T21:26:57Z)
- Unsupervised Industrial Anomaly Detection via Pattern Generative and Contrastive Networks [6.393288885927437]
We propose a Vision Transformer (ViT) based unsupervised anomaly detection network.
It utilizes hierarchical task learning and human experience to enhance its interpretability.
Our method achieves 99.8% AUC, which surpasses previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-20T10:09:53Z)
- R-Drop: Regularized Dropout for Neural Networks [99.42791938544012]
Dropout is a powerful and widely used technique to regularize the training of deep neural networks.
We introduce a simple regularization strategy upon dropout in model training, namely R-Drop, which forces the output distributions of different sub models to be consistent with each other.
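The R-Drop consistency objective described above can be sketched as a symmetric KL divergence between the predictive distributions of two dropout sub-models. The sketch below is a minimal NumPy illustration under that reading of the abstract, not the authors' code; function names are assumptions for the example.

```python
import numpy as np

def kl(p, q):
    # KL(p || q) for discrete distributions (both strictly positive here,
    # since they come from a softmax)
    return float(np.sum(p * np.log(p / q)))

def r_drop_consistency(logits1, logits2):
    """Symmetric KL between the predictive distributions produced by two
    forward passes with independent dropout masks (R-Drop-style sketch)."""
    def softmax(z):
        e = np.exp(z - z.max())  # shift for numerical stability
        return e / e.sum()
    p, q = softmax(logits1), softmax(logits2)
    return 0.5 * (kl(p, q) + kl(q, p))
```

In training, this consistency term would be added to the usual task loss, so the two dropout sub-models are penalized for disagreeing on the same input.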
arXiv Detail & Related papers (2021-06-28T08:01:26Z)
- FocusedDropout for Convolutional Neural Network [6.066543113636522]
FocusedDropout is a non-random dropout method to make the network focus more on the target.
Even at a slight cost, with only 10% of batches employing FocusedDropout, it produces a clear performance boost over the baselines.
arXiv Detail & Related papers (2021-03-29T08:47:55Z)
- SelectScale: Mining More Patterns from Images via Selective and Soft Dropout [35.066419181817594]
Convolutional neural networks (CNNs) have achieved remarkable success in image recognition.
We propose SelectScale, which selects the important features in networks and adjusts them during training.
Using SelectScale, we improve the performance of CNNs on CIFAR and ImageNet.
arXiv Detail & Related papers (2020-11-30T12:15:08Z)
- Informative Dropout for Robust Representation Learning: A Shape-bias Perspective [84.30946377024297]
We propose a light-weight model-agnostic method, namely Informative Dropout (InfoDrop), to improve interpretability and reduce texture bias.
Specifically, we discriminate texture from shape based on local self-information in an image, and adopt a Dropout-like algorithm to decorrelate the model output from the local texture.
arXiv Detail & Related papers (2020-08-10T16:52:24Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.