On Learnability via Gradient Method for Two-Layer ReLU Neural Networks
in Teacher-Student Setting
- URL: http://arxiv.org/abs/2106.06251v1
- Date: Fri, 11 Jun 2021 09:05:41 GMT
- Title: On Learnability via Gradient Method for Two-Layer ReLU Neural Networks
in Teacher-Student Setting
- Authors: Shunta Akiyama and Taiji Suzuki
- Abstract summary: We study two-layer ReLU networks in a teacher-student regression model.
We show that with a specific regularization and sufficient over-parameterization, a student network can identify the parameters of the teacher network via gradient descent.
We analyze the global minima and the global convergence property in the measure space.
- Score: 41.60125423028092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning empirically achieves high performance in many applications, but
its training dynamics has not been fully understood theoretically. In this
paper, we explore theoretical analysis on training two-layer ReLU neural
networks in a teacher-student regression model, in which a student network
learns an unknown teacher network through its outputs. We show that with a
specific regularization and sufficient over-parameterization, the student
network can identify the parameters of the teacher network with high
probability via gradient descent with a norm dependent stepsize even though the
objective function is highly non-convex. The key theoretical tool is the
measure representation of the neural networks and a novel application of a dual
certificate argument for sparse estimation on a measure space. We analyze the
global minima and global convergence property in the measure space.
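As a rough, hedged illustration of the setting above (not the paper's exact algorithm): the "measure representation" mentioned in the abstract writes a two-layer network as f(x) = ∫ σ(⟨w, x⟩) dμ(w), with the teacher corresponding to a sparse, finitely atomic measure, so identifying the teacher amounts to sparse estimation over measures. The sketch below only sets up the teacher-student regression problem with an over-parameterized two-layer ReLU student trained by regularized gradient descent; the widths, the regularization strength, and the particular norm-dependent step-size rule are illustrative assumptions, not values or rules taken from the paper.

    # Minimal teacher-student sketch (illustrative assumptions throughout).
    import numpy as np

    rng = np.random.default_rng(0)
    d, m_teacher, m_student, n = 10, 3, 50, 2000  # input dim, widths, sample size (assumed)

    def relu(z):
        return np.maximum(z, 0.0)

    def forward(W, a, X):
        # Two-layer ReLU network: f(x) = sum_j a_j * relu(<w_j, x>)
        return relu(X @ W.T) @ a

    # Fixed, narrow teacher network that generates the labels.
    W_star = rng.normal(size=(m_teacher, d))
    a_star = rng.normal(size=m_teacher)

    X = rng.normal(size=(n, d))
    y = forward(W_star, a_star, X)  # noiseless teacher outputs

    # Over-parameterized student, trained on an L2-regularized squared loss.
    W = rng.normal(size=(m_student, d)) / np.sqrt(d)
    a = rng.normal(size=m_student) / np.sqrt(m_student)
    lam = 1e-3  # assumed regularization strength

    for step in range(2000):
        H = relu(X @ W.T)              # hidden activations, shape (n, m_student)
        resid = H @ a - y              # residuals of the current student
        grad_a = H.T @ resid / n + lam * a
        grad_W = ((resid[:, None] * (H > 0)) * a).T @ X / n + lam * W
        # Norm-dependent step size: shrink the update as the parameter norms grow
        # (a simple stand-in for the paper's norm-dependent rule).
        eta = 0.1 / (1.0 + np.linalg.norm(W) + np.linalg.norm(a))
        W -= eta * grad_W
        a -= eta * grad_a

    print("train MSE:", float(np.mean((forward(W, a, X) - y) ** 2)))

In the paper's analysis, the specific regularization and the norm-dependent step size are chosen so that the global minimizer in measure space recovers the teacher's parameters; the sketch only mimics the overall training loop, not that analysis.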
Related papers
- Fundamental limits of overparametrized shallow neural networks for
supervised learning [11.136777922498355]
We study a two-layer neural network trained from input-output pairs generated by a teacher network with matching architecture.
Our results take the form of bounds on either i) the mutual information between the training data and the network weights or ii) the Bayes-optimal generalization error.
arXiv Detail & Related papers (2023-07-11T08:30:50Z) - Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural
Networks [49.808194368781095]
We show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks.
This work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
arXiv Detail & Related papers (2023-05-11T17:19:30Z) - Excess Risk of Two-Layer ReLU Neural Networks in Teacher-Student
Settings and its Superiority to Kernel Methods [58.44819696433327]
We investigate the excess risk of two-layer ReLU neural networks in a teacher-student regression model.
We find that the student network provably outperforms kernel methods.
arXiv Detail & Related papers (2022-05-30T02:51:36Z) - On Feature Learning in Neural Networks with Global Convergence
Guarantees [49.870593940818715]
We study the optimization of wide neural networks (NNs) via gradient flow (GF).
We show that when the input dimension is no less than the size of the training set, the training loss converges to zero at a linear rate under GF.
We also show empirically that, unlike in the Neural Tangent Kernel (NTK) regime, our multi-layer model exhibits feature learning and can achieve better generalization performance than its NTK counterpart.
arXiv Detail & Related papers (2022-04-22T15:56:43Z) - Proxy Convexity: A Unified Framework for the Analysis of Neural Networks
Trained by Gradient Descent [95.94432031144716]
We propose a unified non-convex optimization framework for the analysis of neural network training.
We show that existing guarantees for networks trained by gradient descent can be unified through the notion of proxy convexity.
arXiv Detail & Related papers (2021-06-25T17:45:00Z) - Learning and Generalization in Overparameterized Normalizing Flows [13.074242275886977]
Normalizing flows (NFs) constitute an important class of models in unsupervised learning.
We provide theoretical and empirical evidence that for a class of NFs containing most of the existing NF models, overparametrization hurts training.
We prove that unconstrained NFs can efficiently learn any reasonable data distribution under minimal assumptions when the underlying network is overparametrized.
arXiv Detail & Related papers (2021-06-19T17:11:42Z) - Statistical Mechanics of Deep Linear Neural Networks: The
Back-Propagating Renormalization Group [4.56877715768796]
We study the statistical mechanics of learning in Deep Linear Neural Networks (DLNNs) in which the input-output function of an individual unit is linear.
We solve exactly the network properties following supervised learning using an equilibrium Gibbs distribution in the weight space.
Our numerical simulations reveal that despite the nonlinearity, the predictions of our theory are largely shared by ReLU networks with modest depth.
arXiv Detail & Related papers (2020-12-07T20:08:31Z) - Generalization bound of globally optimal non-convex neural network
training: Transportation map estimation by infinite dimensional Langevin
dynamics [50.83356836818667]
We introduce a new theoretical framework to analyze deep learning optimization in connection with its generalization error.
Existing frameworks such as mean-field theory and neural tangent kernel theory for neural network optimization analysis typically require taking the infinite-width limit of the network to show its global convergence.
arXiv Detail & Related papers (2020-07-11T18:19:50Z)