The Efficacy of $L_1$ Regularization in Two-Layer Neural Networks
- URL: http://arxiv.org/abs/2010.01048v1
- Date: Fri, 2 Oct 2020 15:23:22 GMT
- Title: The Efficacy of $L_1$ Regularization in Two-Layer Neural Networks
- Authors: Gen Li, Yuantao Gu, Jie Ding
- Abstract summary: A crucial problem in neural networks is to select the most appropriate number of hidden neurons and obtain tight statistical risk bounds.
We show that $L_1$ regularization can control the generalization error and sparsify the input dimension.
An excessively large number of neurons does not necessarily inflate the generalization error under suitable regularization.
- Score: 36.753907384994704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A crucial problem in neural networks is to select the most appropriate number
of hidden neurons and obtain tight statistical risk bounds. In this work, we
present a new perspective on the bias-variance tradeoff in neural
networks. As an alternative to selecting the number of neurons, we
theoretically show that $L_1$ regularization can control the generalization
error and sparsify the input dimension. In particular, with an appropriate
$L_1$ regularization on the output layer, the network can produce a statistical
risk that is near minimax optimal. Moreover, an appropriate $L_1$
regularization on the input layer leads to a risk bound that does not involve
the input data dimension. Our analysis is based on a new amalgamation of
dimension-based and norm-based complexity analysis to bound the generalization
error. A consequence of our results is that an excessively large number of
neurons does not necessarily inflate the generalization error under suitable
regularization.
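As a rough illustration of the two penalty placements described in the abstract, here is a minimal training sketch (assuming PyTorch; the layer sizes, penalty weights `lam_in`/`lam_out`, and toy data are illustrative choices, not the paper's estimator or constants):

```python
# Hedged sketch: a two-layer ReLU network with L1 penalties on the output-layer
# weights (to control generalization error) and on the input-layer weights
# (to sparsify the input dimension). All hyperparameters are illustrative.
import torch

def l1_penalized_loss(model, x, y, lam_out=1e-3, lam_in=1e-3):
    """Squared loss plus L1 penalties on the input- and output-layer weights."""
    mse = torch.mean((model(x) - y) ** 2)
    W_in, W_out = model[0].weight, model[2].weight
    return mse + lam_out * W_out.abs().sum() + lam_in * W_in.abs().sum()

# The hidden width can be taken very large: per the abstract, the penalties
# rather than the number of neurons are what control the risk.
model = torch.nn.Sequential(
    torch.nn.Linear(20, 1000),   # input layer: 20 features -> 1000 hidden neurons
    torch.nn.ReLU(),
    torch.nn.Linear(1000, 1),    # output layer
)

opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(128, 20), torch.randn(128, 1)     # toy data
for _ in range(200):
    opt.zero_grad()
    l1_penalized_loss(model, x, y).backward()
    opt.step()
```

Plain (sub)gradient descent on the penalized loss is used only for brevity; unlike a proximal or coordinate-descent solver, it will not by itself drive weights exactly to zero.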
Related papers
- Optimized classification with neural ODEs via separability [0.0]
Classification of $N$ points becomes a simultaneous control problem when viewed through the lens of neural ordinary differential equations (neural ODEs).
In this study, we focus on estimating the number of neurons required for efficient cluster-based classification.
We propose a new constructive algorithm that simultaneously classifies clusters of $d$ points from any initial configuration.
arXiv Detail & Related papers (2023-12-21T12:56:40Z)
- Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z)
- Normalization effects on deep neural networks [20.48472873675696]
We study the effect of the choice of the $\gamma_i$ on the statistical behavior of the neural network's output.
We find that, in terms of the variance of the neural network's output and test accuracy, the best choice is to set the $\gamma_i$ equal to one.
arXiv Detail & Related papers (2022-09-02T17:05:55Z)
- On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z)
- Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by examining the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can solve this problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- The Rate of Convergence of Variation-Constrained Deep Neural Networks [35.393855471751756]
We show that a class of variation-constrained neural networks can achieve a near-parametric rate $n^{-1/2+\delta}$ for an arbitrarily small constant $\delta$.
The result indicates that the neural function space needed for approximating smooth functions may not be as large as what is often perceived.
arXiv Detail & Related papers (2021-06-22T21:28:00Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- An efficient projection neural network for $\ell_1$-regularized logistic regression [10.517079029721257]
This paper presents a simple projection neural network for $\ell_1$-regularized logistic regression.
The proposed neural network does not require any extra auxiliary variable nor any smooth approximation.
We also investigate the convergence of the proposed neural network via Lyapunov theory and show that it converges to a solution of the problem from an arbitrary initial value (the underlying optimization problem is sketched after this entry).
arXiv Detail & Related papers (2021-05-12T06:13:44Z)
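For orientation on the entry above: the underlying problem is standard $\ell_1$-regularized logistic regression, which in a common textbook formulation (with penalty weight $\lambda$ as an assumed symbol) reads

$$
\min_{w \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} \log\!\bigl(1 + \exp(-y_i\, x_i^{\top} w)\bigr) \;+\; \lambda \lVert w \rVert_1, \qquad y_i \in \{-1, +1\}, \;\; \lambda > 0 .
$$

The paper's contribution is a projection neural network (a continuous-time dynamical system) that solves this nonsmooth problem without auxiliary variables or smoothing; that construction is not reproduced here.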
- Normalization effects on shallow neural networks and related asymptotic expansions [20.48472873675696]
In particular, we investigate the effect of different scaling schemes, which lead to different normalizations of the neural network, on the network's statistical output.
We develop an expansion for the neural network's statistical output pointwise with respect to the scaling parameter as the number of hidden units grows to infinity.
We show that, to leading order in $N$, the variance of the neural network's statistical output decays as the normalization implied by the scaling parameter approaches the mean-field normalization (a sketch of the standard scaled parameterization follows this entry).
arXiv Detail & Related papers (2020-11-20T16:33:28Z)
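For context on the normalization entries above: this line of work typically studies a shallow network scaled by the number of hidden units, of the form (a sketch of the usual parameterization; the paper's exact assumptions may differ)

$$
g_N^{\gamma}(x) \;=\; \frac{1}{N^{\gamma}} \sum_{i=1}^{N} c_i\, \sigma\!\bigl(w_i^{\top} x\bigr), \qquad \gamma \in \bigl[\tfrac{1}{2}, 1\bigr],
$$

where $N$ is the number of hidden units and $\gamma = 1$ corresponds to the mean-field normalization referenced in the summary.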
- Measuring Model Complexity of Neural Networks with Curve Activation Functions [100.98319505253797]
We propose the linear approximation neural network (LANN) to approximate a given deep model with curve activation functions.
We experimentally explore the training process of neural networks and detect overfitting.
We find that the $L_1$ and $L_2$ regularizations suppress the increase of model complexity.
arXiv Detail & Related papers (2020-06-16T07:38:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.