Training Overparametrized Neural Networks in Sublinear Time
- URL: http://arxiv.org/abs/2208.04508v2
- Date: Thu, 8 Feb 2024 00:26:31 GMT
- Title: Training Overparametrized Neural Networks in Sublinear Time
- Authors: Yichuan Deng, Hang Hu, Zhao Song, Omri Weinstein, Danyang Zhuo
- Abstract summary: Deep learning comes at a tremendous computational and energy cost.
We present a new training method that relies on an alternative view of neural networks as a set of binary search trees, where each iteration corresponds to modifying a small subset of the nodes in the tree.
We believe this view would have further applications in the design and analysis of deep neural networks (DNNs).
- Score: 14.918404733024332
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The success of deep learning comes at a tremendous computational and energy
cost, and the scalability of training massively overparametrized neural
networks is becoming a real barrier to the progress of artificial intelligence
(AI). Despite the popularity and low cost-per-iteration of traditional
backpropagation via gradient descent, stochastic gradient descent (SGD) has a
prohibitive convergence rate in non-convex settings, both in theory and
practice.
To mitigate this cost, recent works have proposed to employ alternative
(Newton-type) training methods with much faster convergence rate, albeit with
higher cost-per-iteration. For a typical neural network with
$m=\mathrm{poly}(n)$ parameters and input batch of $n$ datapoints in
$\mathbb{R}^d$, the previous work of [Brand, Peng, Song, and Weinstein,
ITCS'2021] requires $\sim mnd + n^3$ time per iteration. In this paper, we
present a novel training method that requires only $m^{1-\alpha} n d + n^3$
amortized time in the same overparametrized regime, where $\alpha \in (0.01,1)$
is some fixed constant. This method relies on a new and alternative view of
neural networks, as a set of binary search trees, where each iteration
corresponds to modifying a small subset of the nodes in the tree. We believe
this view would have further applications in the design and analysis of deep
neural networks (DNNs).
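To make the tree-based view concrete, below is a minimal, illustrative Python sketch (not code from the paper) of a correlation tree over the $m$ neurons for one fixed input $x$: each leaf stores the inner product $\langle w_r, x\rangle$, each internal node stores the maximum over its subtree, reporting the neurons that fire above a threshold only descends into subtrees whose maximum clears that threshold, and changing one neuron's weight touches only the $O(\log m)$ nodes on its root-to-leaf path. The class name, the max aggregation, and the API are assumptions made for illustration; the paper's actual data structure and amortized analysis are more refined, and the same idea underlies the weight-data correlation preprocessing cited in the related work below.

```python
import math


class CorrelationTree:
    """Binary tree over the m neurons of a two-layer network, specialised to one
    fixed input x.  Leaf r stores the correlation <w_r, x>; every internal node
    stores the maximum over its two children.  Reporting the firing neurons only
    visits subtrees whose maximum clears the threshold, and a weight update for
    one neuron refreshes a single root-to-leaf path of O(log m) nodes.
    Illustrative sketch only; not the paper's exact construction."""

    def __init__(self, weights, x):
        self.x = list(x)
        self.m = len(weights)
        # Round the number of leaves up to a power of two for a complete tree.
        self.size = 1 << math.ceil(math.log2(max(self.m, 1)))
        self.tree = [float("-inf")] * (2 * self.size)
        for r, w in enumerate(weights):
            self.tree[self.size + r] = self._dot(w, x)
        for node in range(self.size - 1, 0, -1):
            self.tree[node] = max(self.tree[2 * node], self.tree[2 * node + 1])

    @staticmethod
    def _dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def update(self, r, new_weight):
        """Neuron r's weight changed: refresh its leaf and O(log m) ancestors."""
        node = self.size + r
        self.tree[node] = self._dot(new_weight, self.x)
        node //= 2
        while node >= 1:
            self.tree[node] = max(self.tree[2 * node], self.tree[2 * node + 1])
            node //= 2

    def firing_neurons(self, threshold=0.0):
        """Report {r : <w_r, x> > threshold} without scanning all m leaves."""
        fired, stack = [], [1]
        while stack:
            node = stack.pop()
            if self.tree[node] <= threshold:
                continue  # nothing in this subtree fires; prune it
            if node >= self.size:  # leaf node
                r = node - self.size
                if r < self.m:
                    fired.append(r)
            else:
                stack.extend((2 * node, 2 * node + 1))
        return fired
```

The intuition for the sublinear bound above is that when an iteration modifies only a small subset of neurons, the per-iteration work is dominated by the reported and updated tree nodes rather than by a full $mnd$ scan; the $m^{1-\alpha} n d + n^3$ figure comes from the paper's amortized analysis, which this sketch does not reproduce.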
Related papers
- Generalization and Stability of Interpolating Neural Networks with
Minimal Width [37.908159361149835]
We investigate the generalization and optimization of shallow neural networks trained by gradient descent in the interpolating regime.
With $m=\Omega(\log^4(n))$ neurons and $T\approx n$ iterations, we prove that the training loss is minimized and bound the test loss by $\tilde{O}(1/n)$.
arXiv Detail & Related papers (2023-02-18T05:06:15Z) - Bypass Exponential Time Preprocessing: Fast Neural Network Training via
Weight-Data Correlation Preprocessing [16.35997749365563]
State-of-art deep neural networks are becoming larger in size every year to deliver increasing model accuracy.
Recent work [Song, Yang and Zhang, NeurIPS 2021] reduces this time per iteration to $o(nmd)$, but requires exponential time to preprocess either the data or the neural network weights.
We present a new preprocessing method that simply stores the weight-data correlation in a tree data structure in order to quickly, dynamically detect which neurons fire at each iteration.
arXiv Detail & Related papers (2022-11-25T16:40:49Z) - A Sublinear Adversarial Training Algorithm [13.42699247306472]
We analyze the convergence guarantee of adversarial training procedure on a two-layer neural network with shifted ReLU activation.
We develop an algorithm for adversarial training with time cost $o(m n d)$ per iteration by applying a half-space reporting data structure (a sketch of this query interface appears after the list below).
arXiv Detail & Related papers (2022-08-10T15:31:40Z) - Bounding the Width of Neural Networks via Coupled Initialization -- A
Worst Case Analysis [121.9821494461427]
We show how to significantly reduce the number of neurons required for two-layer ReLU networks.
We also prove new lower bounds that improve upon prior work, and that under certain assumptions, are best possible.
arXiv Detail & Related papers (2022-06-26T06:51:31Z) - Neural Capacitance: A New Perspective of Neural Network Selection via
Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational costs in model training for performance prediction.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z) - Does Preprocessing Help Training Over-parameterized Neural Networks? [19.64638346701198]
We propose two novel preprocessing ideas to bypass the $\Omega(mnd)$ barrier.
Our results provide theoretical insights for a large number of previously established fast training methods.
arXiv Detail & Related papers (2021-10-09T18:16:23Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Beyond Lazy Training for Over-parameterized Tensor Decomposition [69.4699995828506]
We show that gradient descent on an over-parametrized objective could go beyond the lazy training regime and utilize certain low-rank structure in the data.
arXiv Detail & Related papers (2020-10-22T00:32:12Z) - Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK [58.5766737343951]
We consider the dynamics of gradient descent for learning a two-layer neural network.
We show that an over-parametrized two-layer neural network, trained by gradient descent, can provably learn the ground-truth network with small population loss from polynomially many samples, going beyond the Neural Tangent Kernel (NTK) regime.
arXiv Detail & Related papers (2020-07-09T07:09:28Z) - Backward Feature Correction: How Deep Learning Performs Deep
(Hierarchical) Learning [66.05472746340142]
This paper analyzes how multi-layer neural networks can perform hierarchical learning _efficiently_ and _automatically_ by SGD on the training objective.
We establish a new principle called "backward feature correction", where the errors in the lower-level features can be automatically corrected when training together with the higher-level layers.
arXiv Detail & Related papers (2020-01-13T17:28:29Z)
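For the "A Sublinear Adversarial Training Algorithm" entry above, the sketch below illustrates the half-space reporting interface such training procedures rely on and how it restricts a shifted-ReLU forward pass to the neurons that actually fire. The names and API are assumptions made for illustration, and the reference implementation deliberately scans all $m$ weights in $O(md)$ time per query; the cited works replace this scan with a geometric data structure whose query time is sublinear in $m$.

```python
import numpy as np


class HalfSpaceReporter:
    """Interface sketch for half-space reporting over neuron weights w_1..w_m:
    query(x, b) returns every index r with <w_r, x> > b.  This reference version
    scans all m rows (O(md) per query); a genuine half-space reporting structure
    preprocesses the rows so the same query runs in time sublinear in m."""

    def __init__(self, W):
        self.W = np.asarray(W, dtype=float)  # shape (m, d)

    def query(self, x, b):
        scores = self.W @ np.asarray(x, dtype=float)  # <w_r, x> for every r
        return np.flatnonzero(scores > b)

    def update_row(self, r, new_w):
        # A dynamic structure would also support fast single-row updates.
        self.W[r] = np.asarray(new_w, dtype=float)


def shifted_relu_forward(reporter, a, x, b):
    """Forward pass of a two-layer net with shifted ReLU sigma(z) = max(z - b, 0):
    only the neurons the reporter says fire contribute to the output."""
    a = np.asarray(a, dtype=float)
    x = np.asarray(x, dtype=float)
    fired = reporter.query(x, b)             # indices with <w_r, x> > b
    activations = reporter.W[fired] @ x - b  # positive by construction
    return float(a[fired] @ activations), fired
```

Intuitively, with a shifted ReLU only a small fraction of the $m$ neurons fire on a given input, so per-iteration work proportional to the fired neurons (plus the reporting query) is how bounds like the $o(mnd)$ per-iteration cost quoted above are obtained.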