Convergence of Two-Layer Regression with Nonlinear Units
- URL: http://arxiv.org/abs/2308.08358v1
- Date: Wed, 16 Aug 2023 13:30:45 GMT
- Title: Convergence of Two-Layer Regression with Nonlinear Units
- Authors: Yichuan Deng, Zhao Song, Shenghao Xie
- Abstract summary: We introduce a greedy algorithm based on the approximate Newton method, which converges in the sense of the distance to the optimal solution.
We then relax the Lipschitz condition and prove convergence in the sense of the loss value.
- Score: 10.295897511849034
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large language models (LLMs), such as ChatGPT and GPT4, have shown
outstanding performance on many tasks in human life. Attention computation plays an
important role in training LLMs. The softmax unit and the ReLU unit are the key
structures in attention computation. Inspired by them, we put forward a softmax
ReLU regression problem. Generally speaking, our goal is to find an optimal
solution to the regression problem involving the ReLU unit. In this work, we
compute a closed-form representation for the Hessian of the loss function.
Under certain assumptions, we prove the Lipschitz continuity and the positive
semidefiniteness (PSD) of the Hessian. Then, we introduce a greedy algorithm
based on the approximate Newton method, which converges in the sense of the
distance to the optimal solution. Lastly, we relax the Lipschitz condition and
prove convergence in the sense of the loss value.
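The abstract does not spell out the exact objective or the update rule, so the sketch below is only a hedged illustration: it assumes a loss of the form L(x) = 0.5 * ||softmax(ReLU(Ax)) - b||^2, which is one plausible reading of the softmax ReLU regression problem, and it approximates the gradient and Hessian by finite differences rather than using the paper's closed-form Hessian. The function names (e.g. `approx_newton_step`) and the damping term `lam * I` are illustrative choices, not taken from the paper.

```python
# Minimal sketch of a regularized approximate Newton step, assuming the
# objective L(x) = 0.5 * || softmax(relu(A @ x)) - b ||^2 (an assumption,
# not the paper's stated formulation).
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def loss(A, b, x):
    return 0.5 * np.sum((softmax(relu(A @ x)) - b) ** 2)

def approx_newton_step(A, b, x, lam=1e-3, eps=1e-4):
    """One damped Newton step with finite-difference gradient and Hessian."""
    d = x.size
    g = np.zeros(d)
    H = np.zeros((d, d))
    # central-difference gradient
    for i in range(d):
        ei = np.zeros(d); ei[i] = eps
        g[i] = (loss(A, b, x + ei) - loss(A, b, x - ei)) / (2 * eps)
    # central-difference Hessian
    for i in range(d):
        for j in range(d):
            ei = np.zeros(d); ei[i] = eps
            ej = np.zeros(d); ej[j] = eps
            H[i, j] = (loss(A, b, x + ei + ej) - loss(A, b, x + ei - ej)
                       - loss(A, b, x - ei + ej) + loss(A, b, x - ei - ej)) / (4 * eps ** 2)
    # damp with lam * I so the linear system stays well conditioned
    return x - np.linalg.solve(H + lam * np.eye(d), g)

# usage: a few steps on random data
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
b = softmax(rng.standard_normal(8))
x = 0.1 * rng.standard_normal(3)
for _ in range(10):
    x = approx_newton_step(A, b, x)
print(loss(A, b, x))
```

The damping term plays the same role as the regularization typically used when a Hessian is only positive semidefinite: it keeps the Newton system solvable while leaving the step close to the exact Newton direction when the Hessian is well conditioned.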
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Local Convergence of Approximate Newton Method for Two Layer Nonlinear Regression [21.849997443967705]
The two-layer regression problem has been well studied in prior works.
The first layer is activated by a ReLU unit, and the second layer is activated by a softmax unit.
We prove that the Hessian matrix of the loss function is positive definite and Lipschitz continuous under certain assumptions.
arXiv Detail & Related papers (2023-11-26T19:19:02Z) - The Implicit Bias of Minima Stability in Multivariate Shallow ReLU Networks [53.95175206863992]
We study the type of solutions to which gradient descent converges when used to train a single hidden-layer multivariate ReLU network with the quadratic loss.
We prove that although shallow ReLU networks are universal approximators, stable shallow networks are not.
arXiv Detail & Related papers (2023-06-30T09:17:39Z) - Inverse Reinforcement Learning With Constraint Recovery [3.8073142980732992]
We propose a novel inverse reinforcement learning (IRL) algorithm for constrained Markov decision process (CMDP) problems.
We demonstrate the efficacy of our algorithm for the grid world environment.
arXiv Detail & Related papers (2023-05-14T11:49:37Z) - VO$Q$L: Towards Optimal Regret in Model-free RL with Nonlinear Function Approximation [43.193807443491814]
We study time-inhomogeneous episodic reinforcement learning (RL) under general function approximation and sparse rewards.
We design a new algorithm, Variance-weighted Optimistic $Q$-Learning (VO$Q$L), based on $Q$-learning, and bound its regret assuming completeness and bounded Eluder dimension for the regression function class.
arXiv Detail & Related papers (2022-12-12T17:37:00Z) - Sparsest Univariate Learning Models Under Lipschitz Constraint [31.28451181040038]
We propose continuous-domain formulations for one-dimensional regression problems.
We control the Lipschitz constant explicitly using a user-defined upper bound.
We show that both problems admit global minimizers that are continuous and piecewise-linear.
arXiv Detail & Related papers (2021-12-27T07:03:43Z) - ReLU Regression with Massart Noise [52.10842036932169]
We study the fundamental problem of ReLU regression, where the goal is to fit Rectified Linear Units (ReLUs) to data.
We focus on ReLU regression in the Massart noise model, a natural and well-studied semi-random noise model.
We develop an efficient algorithm that achieves exact parameter recovery in this model.
arXiv Detail & Related papers (2021-09-10T02:13:22Z) - Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and a significant reduction in memory consumption.
They can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z) - Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best-fitting ReLU with respect to the square loss, given access to draws from some unknown distribution.
arXiv Detail & Related papers (2020-05-26T16:26:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.