Analysis of the expected $L_2$ error of an over-parametrized deep neural
network estimate learned by gradient descent without regularization
- URL: http://arxiv.org/abs/2311.14609v1
- Date: Fri, 24 Nov 2023 17:04:21 GMT
- Title: Analysis of the expected $L_2$ error of an over-parametrized deep neural
network estimate learned by gradient descent without regularization
- Authors: Selina Drews and Michael Kohler
- Abstract summary: Recent results show that estimates defined by over-parametrized deep neural networks learned by applying gradient descent to a regularized empirical $L_2$ risk are universally consistent.
In this paper, we show that the regularization term is not necessary to obtain similar results.
- Score: 7.977229957867868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent results show that estimates defined by over-parametrized deep neural
networks learned by applying gradient descent to a regularized empirical $L_2$
risk are universally consistent and achieve good rates of convergence. In this
paper, we show that the regularization term is not necessary to obtain similar
results. In the case of a suitably chosen initialization of the network, a
suitable number of gradient descent steps, and a suitable step size we show
that an estimate without a regularization term is universally consistent for
bounded predictor variables. Additionally, we show that if the regression
function is H\"older smooth with H\"older exponent $1/2 \leq p \leq 1$, the
$L_2$ error converges to zero with a convergence rate of approximately
$n^{-1/(1+d)}$. Furthermore, in the case of an interaction model, where the
regression function consists of a sum of H\"older smooth functions with $d^*$
components, a rate of convergence is derived which does not depend on the input
dimension $d$.
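The abstract describes training an over-parametrized network by plain gradient descent on the unregularized empirical $L_2$ risk, with a suitable initialization, step size, and number of gradient descent steps. The following is a minimal sketch of that kind of training loop; the network width, depth, step size, and step count are illustrative placeholders and do not follow the paper's specific construction or initialization scheme.

```python
# Minimal sketch (not the paper's exact estimator): plain gradient descent on the
# unregularized empirical L2 risk of an over-parametrized fully connected network.
import torch

torch.manual_seed(0)

n, d = 200, 5                      # sample size and input dimension
X = torch.rand(n, d)               # bounded predictor variables
m = lambda x: torch.sin(2 * x[:, 0]) + x[:, 1] ** 2
Y = m(X)                           # regression function values (noise omitted for brevity)

width, depth = 512, 3              # over-parametrized: far more weights than samples
layers, in_dim = [], d
for _ in range(depth):
    layers += [torch.nn.Linear(in_dim, width), torch.nn.Sigmoid()]
    in_dim = width
layers.append(torch.nn.Linear(in_dim, 1))
net = torch.nn.Sequential(*layers)

lr, n_steps = 1e-2, 2000           # fixed step size and number of gradient descent steps
opt = torch.optim.SGD(net.parameters(), lr=lr)   # full-batch GD, no weight decay

for _ in range(n_steps):
    opt.zero_grad()
    loss = torch.mean((net(X).squeeze(1) - Y) ** 2)  # empirical L2 risk, no penalty term
    loss.backward()
    opt.step()

print(f"final empirical L2 risk: {loss.item():.4f}")
```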
Related papers
- Convergence Rate Analysis of LION [54.28350823319057]
We show that LION converges with an iteration complexity of $\mathcal{O}(\sqrt{d}\,K^{-1/4})$, measured via a gradient-based Karush-Kuhn-Tucker (KKT) criterion.
We show that LION can achieve lower loss and higher performance compared to standard SGD.
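For reference, below is a sketch of the LION update rule as commonly stated (a sign update applied to an interpolation of momentum and gradient, with decoupled weight decay); the hyperparameters are illustrative and this is not the convergence analysis from the paper.

```python
# Sketch of the LION update rule (sign of an interpolation between momentum and
# gradient, with decoupled weight decay); hyperparameters are illustrative.
import numpy as np

def lion_step(x, m, grad, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    """One LION step."""
    update = np.sign(beta1 * m + (1.0 - beta1) * grad)   # sign update direction
    x_new = x - lr * (update + weight_decay * x)         # decoupled weight decay
    m_new = beta2 * m + (1.0 - beta2) * grad             # momentum update
    return x_new, m_new

# toy usage: minimize f(x) = ||x||^2 / 2, whose gradient at x is x
x, m = np.ones(4), np.zeros(4)
for _ in range(1000):
    x, m = lion_step(x, m, grad=x, lr=1e-2)
print(x)
```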
arXiv Detail & Related papers (2024-11-12T11:30:53Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of over-parametrized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by a magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
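As a toy illustration of the gradient descent-ascent algorithm referenced above, the sketch below runs simultaneous GDA on a simple strongly-convex-strongly-concave objective; it does not reproduce the mean-field two-layer network setting.

```python
# Toy sketch of gradient descent-ascent (GDA) on f(x, y) = x^2/2 + x*y - y^2/2,
# whose unique saddle point is (0, 0). Step size and step count are illustrative.
import numpy as np

def gda(x, y, eta=0.05, steps=500):
    for _ in range(steps):
        gx = x + y                              # df/dx: descent direction for x
        gy = x - y                              # df/dy: ascent direction for y
        x, y = x - eta * gx, y + eta * gy       # simultaneous updates
    return x, y

print(gda(1.0, 1.0))                            # converges toward the saddle point (0, 0)
```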
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- Convergence of Adam Under Relaxed Assumptions [72.24779199744954]
We show that Adam converges to $\epsilon$-stationary points with $\mathcal{O}(\epsilon^{-4})$ gradient complexity under far more realistic conditions.
We also propose a variance-reduced version of Adam with an accelerated gradient complexity of $\mathcal{O}(\epsilon^{-3})$.
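For context, the sketch below shows the standard Adam update whose convergence the entry above concerns; the variance-reduced variant modifies the gradient estimate and is not reproduced here. The hyperparameters are the usual defaults, chosen for illustration.

```python
# Reference sketch of the standard Adam update (Kingma & Ba).
import numpy as np

def adam_step(x, m, v, grad, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad            # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x, m, v

# toy usage on f(x) = ||x||^2 / 2, whose gradient at x is x
x, m, v = np.ones(3), np.zeros(3), np.zeros(3)
for t in range(1, 2001):
    x, m, v = adam_step(x, m, v, grad=x, t=t, lr=1e-2)
print(x)
```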
arXiv Detail & Related papers (2023-04-27T06:27:37Z)
- Provably Efficient Convergence of Primal-Dual Actor-Critic with Nonlinear Function Approximation [15.319335698574932]
We show the first efficient convergence result for primal-dual actor-critic, with a convergence rate of $\mathcal{O}\left(\sqrt{\ln(N)/N}\right)$ under Markovian sampling.
We demonstrate results on OpenAI Gym continuous control tasks.
arXiv Detail & Related papers (2022-02-28T15:16:23Z)
- High-probability Bounds for Non-Convex Stochastic Optimization with Heavy Tails [55.561406656549686]
We consider non-convex stochastic optimization using first-order algorithms for which the gradient estimates may have heavy tails.
We show that a combination of gradient clipping, momentum, and normalized gradient descent converges to critical points with high probability at the best-known iteration complexity for smooth losses.
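A sketch of that combination: clipped stochastic gradients accumulated in a momentum buffer, with a normalized update direction. The constants and the heavy-tailed noise model are illustrative and do not follow the paper's parameter choices.

```python
# Sketch of clipped, normalized momentum SGD under heavy-tailed gradient noise.
import numpy as np

def clipped_normalized_sgd_step(x, m, grad, lr=1e-2, beta=0.9, clip=1.0, eps=1e-12):
    gnorm = np.linalg.norm(grad)
    if gnorm > clip:                              # gradient clipping
        grad = grad * (clip / gnorm)
    m = beta * m + (1 - beta) * grad              # momentum averaging
    x = x - lr * m / (np.linalg.norm(m) + eps)    # normalized descent direction
    return x, m

# toy usage on f(x) = ||x||^2 / 2 with heavy-tailed (Student-t) gradient noise
rng = np.random.default_rng(0)
x, m = np.ones(5), np.zeros(5)
for _ in range(2000):
    noise = rng.standard_t(df=2, size=5)
    x, m = clipped_normalized_sgd_step(x, m, grad=x + 0.1 * noise)
print(np.linalg.norm(x))
```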
arXiv Detail & Related papers (2021-06-28T00:17:01Z)
- A New Framework for Variance-Reduced Hamiltonian Monte Carlo [88.84622104944503]
We propose a new framework of variance-reduced Hamiltonian Monte Carlo (HMC) methods for sampling from an $L$-smooth and $m$-strongly log-concave distribution.
We show that HMC methods based on unbiased gradient estimators, including SAGA and SVRG, achieve the highest gradient efficiency with small batch sizes.
Experimental results on both synthetic and real-world benchmark data show that our new framework significantly outperforms full-gradient and stochastic-gradient HMC approaches.
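A rough sketch of the idea: Hamiltonian Monte Carlo where the gradient of the potential is replaced by an SVRG-style unbiased estimate refreshed from periodic snapshots. The toy Bayesian logistic-regression target, step size, epoch length, and batch size are assumptions for illustration, and the Metropolis correction is omitted.

```python
# Sketch of HMC with an SVRG-style unbiased gradient of the potential
# U(x) = sum_i log(1 + exp(-y_i a_i^T x)) + ||x||^2 / 2 (toy logistic posterior).
import numpy as np

rng = np.random.default_rng(0)
N, d = 500, 3
A = rng.normal(size=(N, d))                        # covariates
y = rng.choice([-1.0, 1.0], size=N)                # labels

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def grad_terms(x, idx):
    """Gradients of the logistic terms f_i(x), i in idx."""
    a, lab = A[idx], y[idx]
    s = sigmoid(-lab * (a @ x))
    return -(lab * s)[:, None] * a                 # shape (len(idx), d)

def sum_grad(x):
    return grad_terms(x, np.arange(N)).sum(axis=0) # exact sum of logistic gradients

def svrg_grad(x, x_snap, g_snap, batch):
    """Unbiased SVRG estimate of grad U(x)."""
    corr = grad_terms(x, batch).sum(axis=0) - grad_terms(x_snap, batch).sum(axis=0)
    return (N / len(batch)) * corr + g_snap + x    # + x is the exact prior gradient

eps, L, batch_size, epoch = 0.01, 10, 10, 20       # illustrative settings
x, samples = np.zeros(d), []
for it in range(400):
    if it % epoch == 0:
        x_snap, g_snap = x.copy(), sum_grad(x)     # snapshot for variance reduction
    p = rng.normal(size=d)                         # resample the momentum
    for _ in range(L):                             # leapfrog with SVRG gradients
        b1 = rng.choice(N, batch_size, replace=False)
        p -= 0.5 * eps * svrg_grad(x, x_snap, g_snap, b1)
        x = x + eps * p
        b2 = rng.choice(N, batch_size, replace=False)
        p -= 0.5 * eps * svrg_grad(x, x_snap, g_snap, b2)
    samples.append(x.copy())
print(np.mean(samples[200:], axis=0))              # crude posterior-mean estimate
```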
arXiv Detail & Related papers (2021-02-09T02:44:24Z)
- Structure Learning in Inverse Ising Problems Using $\ell_2$-Regularized Linear Estimator [8.89493507314525]
We show that despite the model mismatch, one can perfectly identify the network structure using naive linear regression without regularization.
We propose a two-stage estimator: in the first stage, ridge regression is used and the estimates are pruned by a relatively small threshold.
This estimator, with an appropriate regularization coefficient and thresholds, is shown to achieve perfect identification of the network structure even in the regime $0 < M/N < 1$.
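A sketch of the two-stage idea on a generic sparse linear toy problem (not the Ising-specific setup): fit by ridge regression with a small regularization coefficient, then prune coefficients below a threshold to read off the structure; the sample-to-variable ratio is below one, as in the regime above. The threshold and regularization values are illustrative.

```python
# Two-stage sketch: ridge regression, then thresholding, on a sparse linear toy.
import numpy as np

rng = np.random.default_rng(0)
M, N_vars, k = 80, 120, 5                      # fewer samples than variables (M/N < 1)
X = rng.normal(size=(M, N_vars))
w_true = np.zeros(N_vars)
w_true[rng.choice(N_vars, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
y = X @ w_true + 0.05 * rng.normal(size=M)

# stage 1: ridge regression with a small regularization coefficient
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(N_vars), X.T @ y)

# stage 2: prune entries below a relatively small threshold
threshold = 0.3
support_hat = np.abs(w_ridge) >= threshold

print(sorted(np.flatnonzero(support_hat)))     # estimated structure
print(sorted(np.flatnonzero(w_true)))          # true structure
```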
arXiv Detail & Related papers (2020-08-19T09:11:33Z)
- Tight Nonparametric Convergence Rates for Stochastic Gradient Descent under the Noiseless Linear Model [0.0]
We analyze the convergence of single-pass, fixed step-size stochastic gradient descent on the least-squares risk under this model.
As a special case, we analyze an online algorithm for estimating a real function on the unit interval from the noiseless observation of its value at randomly sampled points.
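A sketch of single-pass, fixed step-size stochastic gradient descent on the least-squares risk under a noiseless linear model; the finite feature map on the unit interval below is only an illustrative stand-in for the paper's nonparametric setting, and the step size and number of observations are arbitrary.

```python
# Single-pass, constant step-size SGD on the least-squares risk under a
# noiseless linear model y = <theta*, phi(u)>, with u sampled on [0, 1].
import numpy as np

rng = np.random.default_rng(0)
D = 50                                         # feature dimension (illustrative)
freqs = np.arange(1, D + 1)

def phi(u):                                    # features on the unit interval
    return np.sin(np.pi * freqs * u) / freqs   # decaying feature scales

theta_star = rng.normal(size=D)
f_true = lambda u: phi(u) @ theta_star         # noiseless target function

eta = 0.5                                      # fixed step size
theta = np.zeros(D)
for _ in range(20000):                         # single pass: each observation used once
    u = rng.random()                           # randomly sampled point in [0, 1]
    x = phi(u)
    theta -= eta * (theta @ x - f_true(u)) * x # SGD step on the squared residual

grid = np.linspace(0, 1, 200)
err = np.mean([(phi(u) @ theta - f_true(u)) ** 2 for u in grid])
print(f"approximate L2 error on [0,1]: {err:.2e}")
```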
arXiv Detail & Related papers (2020-06-15T08:25:50Z)