Implicit regularization of dropout
- URL: http://arxiv.org/abs/2207.05952v2
- Date: Mon, 10 Apr 2023 08:26:42 GMT
- Title: Implicit regularization of dropout
- Authors: Zhongwang Zhang and Zhi-Qin John Xu
- Abstract summary: It is important to understand how dropout, a popular regularization method, aids in achieving a good generalization solution during neural network training.
In this work, we present a theoretical derivation of an implicit regularization of dropout, which is validated by a series of experiments.
We experimentally find that training with dropout leads to a neural network with a flatter minimum than standard gradient descent training.
- Score: 3.42658286826597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is important to understand how dropout, a popular regularization method,
aids in achieving a good generalization solution during neural network
training. In this work, we present a theoretical derivation of an implicit
regularization of dropout, which is validated by a series of experiments.
Additionally, we numerically study two implications of the implicit
regularization, which intuitively rationalizes why dropout helps
generalization. Firstly, we find that input weights of hidden neurons tend to
condense on isolated orientations when trained with dropout. Condensation is a
feature of the non-linear learning process that makes the network less complex.
Secondly, we experimentally find that training with dropout leads to a neural
network with a flatter minimum than standard gradient descent training, and
that the implicit regularization is key to finding flat solutions. Although our
theory mainly focuses on dropout applied in the last hidden layer, our
experiments apply to general dropout in training neural networks. This work
points out a distinct characteristic of dropout compared with stochastic
gradient descent and serves as an important basis for fully understanding
dropout.
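As an illustration of the setting the theory targets (dropout applied to the output of the last hidden layer), the following is a minimal sketch assuming PyTorch; the two-layer ReLU network, synthetic data, width, and learning rate are illustrative choices, not the paper's experimental setup.

```python
# Minimal sketch (assumption: PyTorch; toy hyperparameters chosen for illustration).
# Dropout is applied to the last hidden layer of a small ReLU network, the setting
# the paper's theory focuses on, and trained with plain (full-batch) gradient descent.
import torch
import torch.nn as nn

class TwoLayerNet(nn.Module):
    def __init__(self, d_in=10, width=256, p_drop=0.5):
        super().__init__()
        self.hidden = nn.Linear(d_in, width)
        self.act = nn.ReLU()
        self.dropout = nn.Dropout(p=p_drop)   # dropout on the last hidden layer
        self.out = nn.Linear(width, 1)

    def forward(self, x):
        h = self.act(self.hidden(x))
        return self.out(self.dropout(h))

torch.manual_seed(0)
X, y = torch.randn(512, 10), torch.randn(512, 1)   # synthetic data for illustration
model = TwoLayerNet()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(1000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)   # dropout noise is active in train() mode
    loss.backward()
    opt.step()

# To inspect the "condensation" effect described in the abstract, one could compare
# cosine similarities between rows of model.hidden.weight after training with
# p_drop > 0 versus p_drop = 0: condensed networks show clusters of nearly aligned
# input weight vectors.
model.eval()   # dropout is disabled at evaluation time
```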
Related papers
- Implicit Bias of Gradient Descent for Two-layer ReLU and Leaky ReLU
Networks on Nearly-orthogonal Data [66.1211659120882]
The implicit bias towards solutions with favorable properties is believed to be a key reason why neural networks trained by gradient-based optimization can generalize well.
While the implicit bias of gradient flow has been widely studied for homogeneous neural networks (including ReLU and leaky ReLU networks), the implicit bias of gradient descent is currently only understood for smooth neural networks.
arXiv Detail & Related papers (2023-10-29T08:47:48Z) - A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree
Spectral Bias of Neural Networks [79.28094304325116]
Despite the capacity of neural nets to learn arbitrary functions, models trained through gradient descent often exhibit a bias towards "simpler" functions.
We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets.
We propose a new scalable functional regularization scheme that aids the neural network to learn higher degree frequencies.
arXiv Detail & Related papers (2023-05-16T20:06:01Z) - Dropout Reduces Underfitting [85.61466286688385]
In this study, we demonstrate that dropout can also mitigate underfitting when used at the start of training.
We find dropout reduces the directional variance of gradients across mini-batches and helps align the mini-batch gradients with the entire dataset's gradient.
Our findings lead us to a solution for improving performance in underfitting models, early dropout: dropout is applied only during the initial phase of training and turned off afterwards (see the schedule sketch after this list).
arXiv Detail & Related papers (2023-03-02T18:59:15Z) - Theoretical Characterization of How Neural Network Pruning Affects its
Generalization [131.1347309639727]
This work makes the first attempt to study how different pruning fractions affect the model's gradient descent dynamics and generalization.
It is shown that as long as the pruning fraction is below a certain threshold, gradient descent can drive the training loss toward zero.
More surprisingly, the generalization bound improves as the pruning fraction increases.
arXiv Detail & Related papers (2023-01-01T03:10:45Z) - Information Geometry of Dropout Training [5.990174495635326]
Dropout is one of the most popular regularization techniques in neural network training.
In this paper, several properties of dropout are discussed in a unified manner from the viewpoint of information geometry.
arXiv Detail & Related papers (2022-06-22T09:27:41Z) - A variance principle explains why dropout finds flatter minima [0.0]
We show that training with dropout finds a neural network with a flatter minimum than standard gradient descent training.
We propose a Variance Principle: the variance of the noise is larger along the sharper directions of the loss landscape.
arXiv Detail & Related papers (2021-11-01T15:26:19Z) - Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity
on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z) - Gradient Descent for Deep Matrix Factorization: Dynamics and Implicit
Bias towards Low Rank [1.9350867959464846]
In deep learning, gradient descent tends to prefer solutions which generalize well.
In this paper we analyze the dynamics of gradient descent in the simplified setting of linear networks and of an estimation problem.
arXiv Detail & Related papers (2020-11-27T15:08:34Z) - Learning from Failure: Training Debiased Classifier from Biased
Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
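The early-dropout schedule from "Dropout Reduces Underfitting" lends itself to a very small scheduling helper. The following is a hedged sketch assuming PyTorch; `train_one_epoch`, `loader`, `optimizer`, the cutoff `early_dropout_epochs`, and the rate 0.1 are placeholders, not values taken from that paper.

```python
# Minimal sketch of an "early dropout" schedule (assumption: PyTorch; the cutoff
# epoch and dropout rate below are illustrative, not the paper's settings).
import torch.nn as nn

def set_dropout_p(model: nn.Module, p: float) -> None:
    """Set the dropout probability of every nn.Dropout module in the model."""
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.p = p

# Inside a standard training loop (train_one_epoch, loader, optimizer are placeholders):
# for epoch in range(num_epochs):
#     if epoch < early_dropout_epochs:        # e.g. the first 10% of training
#         set_dropout_p(model, 0.1)           # dropout active early in training
#     else:
#         set_dropout_p(model, 0.0)           # turned off afterwards
#     train_one_epoch(model, loader, optimizer)
```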
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.