Is Stochastic Gradient Descent Near Optimal?
- URL: http://arxiv.org/abs/2209.08627v1
- Date: Sun, 18 Sep 2022 18:26:43 GMT
- Title: Is Stochastic Gradient Descent Near Optimal?
- Authors: Yifan Zhu (1), Hong Jun Jeon (1), Benjamin Van Roy (1) ((1) Stanford
University Department of Electrical Engineering)
- Abstract summary: We show that stochastic gradient descent (SGD) with automated width selection attains small expected error with a number of samples and total number of queries both nearly linear in the input dimension and width.
This suggests that SGD nearly achieves the information-theoretic sample complexity bounds of Jeon & Van Roy (arXiv:2203.00246) in a computationally efficient manner.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The success of neural networks over the past decade has established them as
effective models for many relevant data generating processes. Statistical
theory on neural networks indicates graceful scaling of sample complexity. For
example, Jeon & Van Roy (arXiv:2203.00246) demonstrate that, when data is
generated by a ReLU teacher network with $W$ parameters, an optimal learner
needs only $\tilde{O}(W/\epsilon)$ samples to attain expected error $\epsilon$.
However, existing computational theory suggests that, even for
single-hidden-layer teacher networks, to attain small error for all such
teacher networks, the computation required to achieve this sample complexity is
intractable. In this work, we fit single-hidden-layer neural networks to data
generated by single-hidden-layer ReLU teacher networks with parameters drawn
from a natural distribution. We demonstrate that stochastic gradient descent
(SGD) with automated width selection attains small expected error with a number
of samples and total number of queries both nearly linear in the input
dimension and width. This suggests that SGD nearly achieves the
information-theoretic sample complexity bounds of Jeon & Van Roy
(arXiv:2203.00246) in a computationally efficient manner. An important
difference between our positive empirical results and the negative theoretical
results is that the latter address worst-case error of deterministic
algorithms, while our analysis centers on expected error of a stochastic
algorithm.
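As a concrete illustration of the setup above, the following minimal sketch (Python/numpy) draws a single-hidden-layer ReLU teacher, generates data from it, and fits a single-hidden-layer ReLU student with plain SGD on squared error. The Gaussian teacher parameters, the fixed student width, and the step size are illustrative assumptions; the paper's specific distribution over teacher weights and its automated width-selection procedure are not reproduced here.
```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the paper's guarantees scale nearly linearly in d and width.
d, teacher_width, student_width, n_samples = 16, 8, 32, 4096

# Teacher: a single-hidden-layer ReLU network. Gaussian parameters are an
# assumption standing in for the paper's "natural distribution" over weights.
W_teacher = rng.normal(size=(teacher_width, d)) / np.sqrt(d)
a_teacher = rng.normal(size=teacher_width) / np.sqrt(teacher_width)

def teacher(X):
    return np.maximum(X @ W_teacher.T, 0.0) @ a_teacher

# Data generated by the teacher.
X = rng.normal(size=(n_samples, d))
y = teacher(X)

# Student: a single-hidden-layer ReLU network of fixed width (the paper's
# automated width selection is not reproduced here).
W = rng.normal(size=(student_width, d)) / np.sqrt(d)
a = np.zeros(student_width)

def student(X):
    return np.maximum(X @ W.T, 0.0) @ a

# Plain SGD on squared error, one sample per step.
lr, epochs = 0.01, 20
for _ in range(epochs):
    for i in rng.permutation(n_samples):
        x, target = X[i], y[i]
        h = np.maximum(W @ x, 0.0)                  # hidden activations
        err = h @ a - target                        # prediction error
        grad_a = err * h                            # d(loss)/d(output weights)
        grad_W = err * np.outer(a * (h > 0.0), x)   # d(loss)/d(hidden weights)
        a -= lr * grad_a
        W -= lr * grad_W

X_test = rng.normal(size=(2048, d))
print("test MSE:", np.mean((student(X_test) - teacher(X_test)) ** 2))
```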
Related papers
- Sharper Guarantees for Learning Neural Network Classifiers with Gradient Methods [43.32546195968771]
We study the data-dependent convergence and generalization behavior of gradient methods for neural networks with smooth activation.
Our results address shortcomings of the well-established Rademacher complexity-based bounds.
We show that a large step-size significantly improves upon the NTK regime's results in classifying the XOR distribution.
arXiv Detail & Related papers (2024-10-13T21:49:29Z) - Matching the Statistical Query Lower Bound for k-sparse Parity Problems with Stochastic Gradient Descent [83.85536329832722]
We show that stochastic gradient descent (SGD) can efficiently solve the $k$-parity problem on a $d$-dimensional hypercube.
We then demonstrate how a neural network trained with SGD solves the $k$-parity problem with small statistical error.
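For concreteness, here is a hedged sketch of the $k$-sparse parity setup: inputs are uniform on the hypercube $\{-1,+1\}^d$, the label is the product of a hidden size-$k$ subset of coordinates, and a generic one-hidden-layer ReLU network is trained with SGD via scikit-learn. The width, step size, and sample sizes are illustrative choices, not the paper's.
```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
d, k, n = 20, 3, 20000  # illustrative sizes, not the paper's regime

# k-sparse parity on the hypercube {-1, +1}^d: the label is the product of a
# hidden size-k subset of coordinates (here the first k, for simplicity).
X = rng.choice([-1.0, 1.0], size=(n, d))
y = np.prod(X[:, :k], axis=1)  # labels in {-1, +1}

# A generic one-hidden-layer ReLU network trained with SGD; the paper's exact
# architecture, loss, and step-size schedule are not reproduced here.
clf = MLPClassifier(hidden_layer_sizes=(128,), activation="relu", solver="sgd",
                    learning_rate_init=0.05, batch_size=32, max_iter=200,
                    random_state=0)
clf.fit(X[:15000], y[:15000])
print("held-out accuracy:", clf.score(X[15000:], y[15000:]))
```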
arXiv Detail & Related papers (2024-04-18T17:57:53Z) - Sampling weights of deep neural networks [1.2370077627846041]
We introduce a probability distribution, combined with an efficient sampling algorithm, for weights and biases of fully-connected neural networks.
In a supervised learning context, no iterative optimization or gradient computations of internal network parameters are needed.
We prove that sampled networks are universal approximators.
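A minimal sketch of the sampled-network idea, under the assumption that plain Gaussian and uniform random features stand in for the paper's data-driven sampling distribution: hidden weights and biases are drawn once and never trained, and only the linear readout is fit in closed form, so no iterative optimization or gradients on internal parameters are needed.
```python
import numpy as np

rng = np.random.default_rng(0)
d, width, n = 2, 512, 2000  # illustrative sizes

# A target function to learn (illustrative choice).
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.sin(3.0 * X[:, 0]) * np.cos(2.0 * X[:, 1])

# Hidden weights and biases are sampled once and never trained. (The paper
# proposes a specific data-driven sampling distribution; plain Gaussian and
# uniform sampling are used here purely for illustration.)
W = rng.normal(size=(width, d))
b = rng.uniform(-1.0, 1.0, size=width)
H = np.maximum(X @ W.T + b, 0.0)  # fixed random ReLU features

# Only the linear readout is fit, in closed form -- no gradient steps on W or b.
coef, *_ = np.linalg.lstsq(H, y, rcond=None)

X_test = rng.uniform(-1.0, 1.0, size=(500, d))
y_test = np.sin(3.0 * X_test[:, 0]) * np.cos(2.0 * X_test[:, 1])
H_test = np.maximum(X_test @ W.T + b, 0.0)
print("test MSE:", np.mean((H_test @ coef - y_test) ** 2))
```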
arXiv Detail & Related papers (2023-06-29T10:13:36Z) - Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural
Networks [89.28881869440433]
This paper provides the first theoretical characterization of joint edge-model sparse learning for graph neural networks (GNNs).
It proves analytically that both sampling important nodes and pruning the lowest-magnitude neurons can reduce the sample complexity and improve convergence without compromising the test accuracy.
arXiv Detail & Related papers (2023-02-06T16:54:20Z) - Finite Sample Identification of Wide Shallow Neural Networks with Biases [12.622813055808411]
The identification of the parameters of the network from finite samples of input-output pairs is often referred to as the teacher-student model.
This paper fills the gap by providing constructive methods and theoretical guarantees of finite sample identification for such wider shallow networks with biases.
arXiv Detail & Related papers (2022-11-08T22:10:32Z) - Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity
on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by studying the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can make two classes of data linearly separable with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
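The following sketch illustrates this setting under stated assumptions: two well-separated classes that are not linearly separable in the input space are mapped through a random ReLU layer with standard Gaussian weights and uniformly distributed biases, and a linear classifier is then fit on the resulting features. The class construction, width, and bias range are illustrative choices, not the paper's.
```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
d, width, n_per = 10, 500, 300  # illustrative sizes

# Two well-separated classes that are NOT linearly separable in input space:
# points on a sphere of radius 1 (class 0) and on a sphere of radius 2 (class 1).
U = rng.normal(size=(2 * n_per, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)
X = U * np.r_[np.full(n_per, 1.0), np.full(n_per, 2.0)][:, None]
y = np.r_[np.zeros(n_per, dtype=int), np.ones(n_per, dtype=int)]

# Random ReLU layer: standard Gaussian weights and uniformly distributed
# biases (as in the summary above); nothing in this layer is trained.
W = rng.normal(size=(width, d))
b = rng.uniform(-2.0, 2.0, size=width)
H = np.maximum(X @ W.T + b, 0.0)

# If the random features render the classes linearly separable, a linear
# classifier fit on H should reach (near-)perfect training accuracy.
clf = LinearSVC(C=10.0, max_iter=50000).fit(H, y)
print("training accuracy on random features:", clf.score(H, y))
```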
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z) - Self-Regularity of Non-Negative Output Weights for Overparameterized
Two-Layer Neural Networks [16.64116123743938]
We consider the problem of finding a two-layer neural network with sigmoid or rectified linear unit (ReLU) activations that fits the training data.
We then leverage our bounds to establish guarantees for such networks through the fat-shattering dimension.
Notably, our bounds also have good sample complexity (low-degree polynomials in $d$).
arXiv Detail & Related papers (2021-03-02T17:36:03Z) - Random Vector Functional Link Networks for Function Approximation on Manifolds [8.535815777849786]
Single-layer neural networks with random input-to-hidden layer weights and biases have seen success in practice.
We further adapt this randomized neural network architecture to approximate functions on smooth, compact submanifolds of Euclidean space.
arXiv Detail & Related papers (2020-07-30T23:50:44Z) - OSLNet: Deep Small-Sample Classification with an Orthogonal Softmax
Layer [77.90012156266324]
This paper aims to find a subspace of neural networks that can facilitate a large decision margin.
We propose the Orthogonal Softmax Layer (OSL), which makes the weight vectors in the classification layer remain orthogonal during both the training and test processes.
Experimental results demonstrate that the proposed OSL has better performance than the methods used for comparison on four small-sample benchmark datasets.
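One simple way to realize a classification layer whose weight vectors stay orthogonal throughout training and testing, sketched below in PyTorch, is to fix the layer's weights to a random orthogonal matrix and exclude them from optimization. This is an illustrative construction consistent with the property described above; the paper's exact OSL design may differ, and the OrthogonalSoftmaxHead name is hypothetical.
```python
import torch
import torch.nn as nn

class OrthogonalSoftmaxHead(nn.Module):
    """Classification head whose weight vectors are fixed to an orthogonal
    matrix and never updated, so they stay orthogonal during training and test.
    (Illustrative construction; the paper's exact OSL design may differ.)"""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        assert feat_dim >= num_classes
        # Random orthonormal columns via QR; rows of q.T are the class weights.
        q, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))
        self.register_buffer("weight", q.T)  # (num_classes, feat_dim), frozen

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return features @ self.weight.T  # logits; softmax is applied in the loss

# Usage: only the feature extractor is trained; the head stays orthogonal.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), OrthogonalSoftmaxHead(64, 10))
logits = model(torch.randn(8, 32))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (8,)))
loss.backward()  # gradients reach the trainable layers only
```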
arXiv Detail & Related papers (2020-04-20T02:41:01Z)