Kernel Learning in Ridge Regression "Automatically" Yields Exact Low
Rank Solution
- URL: http://arxiv.org/abs/2310.11736v2
- Date: Mon, 27 Nov 2023 20:30:54 GMT
- Title: Kernel Learning in Ridge Regression "Automatically" Yields Exact Low
Rank Solution
- Authors: Yunlu Chen, Yang Li, Keli Liu, and Feng Ruan
- Abstract summary: We consider kernels of the form $(x,x') \mapsto \phi(\|x-x'\|^2_\Sigma)$ parametrized by $\Sigma$.
We find that the global minimizer of the finite sample kernel learning objective is also low rank with high probability.
- Score: 6.109362130047454
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider kernels of the form $(x,x') \mapsto \phi(\|x-x'\|^2_\Sigma)$
parametrized by $\Sigma$. For such kernels, we study a variant of the kernel
ridge regression problem which simultaneously optimizes the prediction function
and the parameter $\Sigma$ of the reproducing kernel Hilbert space. The
eigenspace of the $\Sigma$ learned from this kernel ridge regression problem
can inform us which directions in covariate space are important for prediction.
Assuming that the covariates have nonzero explanatory power for the response
only through a low dimensional subspace (central mean subspace), we find that
the global minimizer of the finite sample kernel learning objective is also low
rank with high probability. More precisely, the rank of the minimizing $\Sigma$
is with high probability bounded by the dimension of the central mean subspace.
This phenomenon is interesting because the low rankness property is achieved
without using any explicit regularization of $\Sigma$, e.g., nuclear norm
penalization.
Our theory makes correspondence between the observed phenomenon and the
notion of low rank set identifiability from the optimization literature. The
low rankness property of the finite sample solutions exists because the
population kernel learning objective grows "sharply" when moving away from its
minimizers in any direction perpendicular to the central mean subspace.
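The objects in the abstract can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: it assumes the Gaussian-type choice $\phi(t) = e^{-t}$ (the paper allows a general class of $\phi$) and uses the standard closed form $\lambda\, y^\top (K + \lambda I)^{-1} y$ for the inner kernel ridge problem once the prediction function is profiled out at fixed $\Sigma$:

```python
import numpy as np

def mahalanobis_sq(X, Sigma):
    """Pairwise squared distances ||x - x'||^2_Sigma = (x - x')^T Sigma (x - x')."""
    diff = X[:, None, :] - X[None, :, :]                  # shape (n, n, d)
    return np.einsum('ijk,kl,ijl->ij', diff, Sigma, diff)

def gaussian_kernel(X, Sigma):
    """Kernel (x, x') -> phi(||x - x'||^2_Sigma) with the choice phi(t) = exp(-t)."""
    return np.exp(-mahalanobis_sq(X, Sigma))

def kernel_learning_objective(Sigma, X, y, lam):
    """Finite sample kernel learning objective in Sigma: for fixed Sigma, the
    inner kernel ridge regression has value lam * y^T (K + lam I)^{-1} y."""
    n = len(y)
    K = gaussian_kernel(X, Sigma)
    return lam * y @ np.linalg.solve(K + lam * np.eye(n), y)
```

Minimizing this objective over positive semidefinite $\Sigma$ involves no explicit rank penalty; the paper's result is that the minimizer is nonetheless low rank with high probability.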
Related papers
- Bilinear Convolution Decomposition for Causal RL Interpretability [0.0]
Efforts to interpret reinforcement learning (RL) models often rely on high-level techniques such as attribution or probing.
This work proposes replacing nonlinearities in convolutional neural networks (ConvNets) with bilinear variants, to produce a class of models for which these limitations can be addressed.
We show bilinear model variants perform comparably in model-free reinforcement learning settings, and give a side by side comparison on ProcGen environments.
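As a rough illustration of the idea (the exact bilinear variant used in the paper may differ), a bilinear layer replaces the piecewise-linear ReLU with an elementwise product of two linear maps, yielding an output that is quadratic in the input and therefore more amenable to closed-form interpretability analysis:

```python
import numpy as np

def relu_layer(x, W):
    """Standard piecewise-linear layer: the nonlinearity being replaced."""
    return np.maximum(W @ x, 0.0)

def bilinear_layer(x, W1, W2):
    """Bilinear variant: elementwise product of two linear maps.
    The output is quadratic in x (homogeneous of degree 2)."""
    return (W1 @ x) * (W2 @ x)
```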
arXiv Detail & Related papers (2024-12-01T19:32:04Z) - Towards understanding epoch-wise double descent in two-layer linear neural networks [11.210628847081097]
We study epoch-wise double descent in two-layer linear neural networks.
We identify additional factors of epoch-wise double descent emerging with the extra model layer.
This opens up for further questions regarding unidentified factors of epoch-wise double descent for truly deep models.
arXiv Detail & Related papers (2024-07-13T10:45:21Z) - The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are widely used in machine learning applications.
In this paper we examine the use of convex neural recovery models.
We show that all stationary points of the nonconvex training objective can be characterized as global optima of a subsampled convex program.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - State-space Models with Layer-wise Nonlinearity are Universal
Approximators with Exponential Decaying Memory [0.0]
We show that stacking state-space models with layer-wise nonlinear activation is sufficient to approximate any continuous sequence-to-sequence relationship.
Our findings demonstrate that the addition of layer-wise nonlinear activation enhances the model's capacity to learn complex sequence patterns.
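A minimal sketch of the construction, assuming a plain discrete linear scan for each state-space layer and tanh as the layer-wise nonlinearity (the paper's precise parameterization may differ):

```python
import numpy as np

def ssm_layer(u, A, B, C):
    """One linear state-space layer run as a discrete scan:
    h_t = A h_{t-1} + B u_t,  y_t = C h_t."""
    h = np.zeros(A.shape[0])
    ys = []
    for u_t in u:                      # u has shape (T, d_in)
        h = A @ h + B @ u_t
        ys.append(C @ h)
    return np.array(ys)                # shape (T, d_out)

def stacked_ssm(u, layers):
    """Stack linear SSM layers with tanh between them: the layer-wise
    nonlinearity that the paper shows suffices for universal approximation."""
    x = u
    for A, B, C in layers:
        x = np.tanh(ssm_layer(x, A, B, C))
    return x
```

Each layer alone is linear in its input; only the interleaved tanh gives the stack the capacity to approximate nonlinear sequence-to-sequence maps.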
arXiv Detail & Related papers (2023-09-23T15:55:12Z) - Exploring Linear Feature Disentanglement For Neural Networks [63.20827189693117]
Non-linear activation functions, e.g., Sigmoid, ReLU, and Tanh, have achieved great success in neural networks (NNs).
Due to the complex non-linear characteristics of samples, the objective of these activation functions is to project samples from their original feature space to a linearly separable feature space.
This phenomenon ignites our interest in exploring whether all features need to be transformed by all non-linear functions in current typical NNs.
arXiv Detail & Related papers (2022-03-22T13:09:17Z) - Non-linear manifold ROM with Convolutional Autoencoders and Reduced
Over-Collocation method [0.0]
Non-affine parametric dependencies, nonlinearities and advection-dominated regimes of the model of interest can result in a slow Kolmogorov n-width decay.
We implement the non-linear manifold method introduced by Carlberg et al [37] with hyper-reduction achieved through reduced over-collocation and teacher-student training of a reduced decoder.
We test the methodology on a 2D non-linear conservation law and a 2D shallow water model, and compare the results with those of a purely data-driven method in which the dynamics are evolved in time with a long short-term memory network.
arXiv Detail & Related papers (2022-03-01T11:16:50Z) - Multi-scale Feature Learning Dynamics: Insights for Double Descent [71.91871020059857]
We study the phenomenon of "double descent" of the generalization error.
We find that double descent can be attributed to distinct features being learned at different scales.
arXiv Detail & Related papers (2021-12-06T18:17:08Z) - Nonlinear proper orthogonal decomposition for convection-dominated flows [0.0]
We propose an end-to-end Galerkin-free model combining autoencoders with long short-term memory networks for dynamics.
Our approach not only improves the accuracy, but also significantly reduces the computational cost of training and testing.
arXiv Detail & Related papers (2021-10-15T18:05:34Z) - The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer
Linear Networks [51.1848572349154]
Neural network models that perfectly fit noisy data can generalize well to unseen test data.
We consider interpolating two-layer linear neural networks trained with gradient flow on the squared loss and derive bounds on the excess risk.
arXiv Detail & Related papers (2021-08-25T22:01:01Z) - Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
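The min-max formulation can be sketched with linear players in place of the neural networks (an illustrative simplification; the paper parameterizes both players with NNs), solving a moment equation E[(Y - f(X)) g(Z)] = 0 by gradient descent-ascent with a quadratic regularizer on the adversary:

```python
import numpy as np

def minmax_sem(X, Z, Y, lr=0.1, steps=500):
    """Gradient descent-ascent for the moment equation E[(Y - theta*X) Z] = 0,
    with linear players f(x) = theta * x and g(z) = omega * z.
    Objective: L(theta, omega) = E[(Y - theta X) omega Z] - 0.5 E[(omega Z)^2];
    theta descends, omega ascends."""
    theta, omega = 0.0, 0.0
    for _ in range(steps):
        r = Y - theta * X                              # residual of player f
        grad_theta = -np.mean(X * omega * Z)           # dL/dtheta
        grad_omega = np.mean(r * Z) - omega * np.mean(Z ** 2)  # dL/domega
        theta -= lr * grad_theta
        omega += lr * grad_omega
    return theta
```

The regularizer makes the inner maximization strongly concave, so plain simultaneous gradient updates converge on this bilinear-quadratic game.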
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.