Optimal Rates of Kernel Ridge Regression under Source Condition in Large
Dimensions
- URL: http://arxiv.org/abs/2401.01270v1
- Date: Tue, 2 Jan 2024 16:14:35 GMT
- Title: Optimal Rates of Kernel Ridge Regression under Source Condition in Large
Dimensions
- Authors: Haobo Zhang, Yicheng Li, Weihao Lu, Qian Lin
- Abstract summary: We study the large-dimensional behavior of kernel ridge regression (KRR) where the sample size $n \asymp d^{\gamma}$ for some $\gamma > 0$.
Our results illustrate that the rate curves, viewed as functions of $\gamma$, exhibit periodic plateau behavior and multiple descent behavior.
- Score: 15.988264513040903
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by the studies of neural networks (e.g., the neural tangent kernel
theory), we perform a study on the large-dimensional behavior of kernel ridge
regression (KRR) where the sample size $n \asymp d^{\gamma}$ for some $\gamma >
0$. Given an RKHS $\mathcal{H}$ associated with an inner product kernel defined
on the sphere $\mathbb{S}^{d}$, we suppose that the true function $f_{\rho}^{*}
\in [\mathcal{H}]^{s}$, the interpolation space of $\mathcal{H}$ with source
condition $s>0$. We first determine the exact order (matching upper and lower
bounds) of the generalization error of kernel ridge regression for the
optimally chosen regularization parameter $\lambda$. We then further show that
KRR is minimax optimal when $0<s\le 1$, and is not minimax optimal when $s>1$
(a.k.a. the saturation effect). Our results illustrate that the rate curves,
viewed as functions of $\gamma$, exhibit periodic plateau behavior and multiple
descent behavior, and they show how these curves evolve as $s>0$ varies.
Interestingly, our work provides a unified viewpoint on several recent works on
kernel regression in the large-dimensional setting, which correspond to the
special cases $s=0$ and $s=1$, respectively.
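As a concrete illustration of the setting above, here is a minimal sketch (not the authors' code) of kernel ridge regression with an inner-product kernel on the unit sphere in the regime $n \asymp d^{\gamma}$; the kernel choice, target function, and regularization level are illustrative assumptions.

```python
# Minimal sketch: KRR with a dot-product kernel on the sphere, n ~ d^gamma.
import numpy as np

def inner_product_kernel(X, Z):
    # k(x, z) = Phi(<x, z>) for unit vectors; Phi(t) = exp(t) is an
    # illustrative choice of inner-product kernel (not from the paper).
    return np.exp(X @ Z.T)

def krr_fit(X, y, lam):
    # Closed-form KRR coefficients: alpha = (K + n * lam * I)^{-1} y.
    n = X.shape[0]
    K = inner_product_kernel(X, X)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def krr_predict(X_train, alpha, X_test):
    return inner_product_kernel(X_test, X_train) @ alpha

# Example: d = 50, n ~ d^gamma with gamma = 1.5, data on the unit sphere.
rng = np.random.default_rng(0)
d, gamma_exp = 50, 1.5
n = int(d ** gamma_exp)
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)    # project onto the sphere
y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(n)  # toy target plus noise
alpha = krr_fit(X, y, lam=n ** -0.5)             # illustrative lambda choice
```

The closed-form solve $(K + n\lambda I)^{-1} y$ is the standard KRR estimator; the paper's analysis concerns how the optimal $\lambda$ and the resulting excess risk scale with $\gamma$ and the source condition $s$.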
Related papers
- A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^{*}$ rather than for all $\mathbf{x}^{*}$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - Optimal Rate of Kernel Regression in Large Dimensions [13.641780902673792]
We first build a general tool to characterize the upper bound and the minimax lower bound of kernel regression for large dimensional data.
We utilize the new tool to show that the minimax rate of the excess risk of kernel regression is $n^{-1/2}$ when $n \asymp d^{\gamma}$ for $\gamma = 2, 4, 6, 8, \cdots$.
arXiv Detail & Related papers (2023-09-08T11:29:05Z) - On the Optimality of Misspecified Kernel Ridge Regression [13.995944403996566]
We show that KRR is minimax optimal for any $s \in (0,1)$ when $\mathcal{H}$ is a Sobolev RKHS.
arXiv Detail & Related papers (2023-05-12T04:12:12Z) - Detection of Dense Subhypergraphs by Low-Degree Polynomials [72.4451045270967]
Detection of a planted dense subgraph in a random graph is a fundamental statistical and computational problem.
We consider detecting the presence of a planted $G^{r}(n^{\gamma}, n^{-\alpha})$ subhypergraph in a $G^{r}(n, n^{-\beta})$ hypergraph.
Our results are already new in the graph case $r=2$, as we consider the subtle log-density regime where hardness based on average-case reductions is not known.
arXiv Detail & Related papers (2023-04-17T10:38:08Z) - High-dimensional Asymptotics of Feature Learning: How One Gradient Step
Improves the Representation [89.21686761957383]
We study the first gradient descent step on the first-layer parameters $\boldsymbol{W}$ in a two-layer network.
Our results demonstrate that even one step can lead to a considerable advantage over random features.
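Below is a minimal sketch, under stated assumptions rather than the paper's exact protocol, of the step described above: one full-batch gradient step on the first-layer weights of a two-layer ReLU network, followed by refitting the top layer. The width, step size, and toy target are illustrative.

```python
# Sketch: one gradient step on first-layer weights W, then ridge refit on top.
import numpy as np

rng = np.random.default_rng(0)
n, d, width = 500, 30, 200
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = np.tanh(X[:, 0])                                      # toy target

W = rng.standard_normal((width, d)) / np.sqrt(d)          # random first layer
a = rng.choice([-1.0, 1.0], size=width) / np.sqrt(width)  # fixed second layer

H = np.maximum(X @ W.T, 0.0)                 # hidden activations, shape (n, width)
resid = H @ a - y                            # residuals of the initial network
grad_W = (resid[:, None] * (H > 0) * a).T @ X / n  # gradient of squared loss w.r.t. W
eta = 1.0                                    # illustrative step size
W1 = W - eta * grad_W                        # first-layer weights after one step

# Refit the top layer on the updated features (ridge regression).
H1 = np.maximum(X @ W1.T, 0.0)
a1 = np.linalg.solve(H1.T @ H1 + 1e-3 * np.eye(width), H1.T @ y)
```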
arXiv Detail & Related papers (2022-05-03T12:09:59Z) - Random matrices in service of ML footprint: ternary random features with
no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random features technique, called Ternary Random Features (TRF).
The computation of the proposed random features requires no multiplication and a factor of $b$ fewer bits for storage compared to classical random features.
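A minimal sketch of ternary random features as described above, assuming weights drawn from $\{-1, 0, +1\}$ so that the projection needs only additions and subtractions; the sparsity level and the sign nonlinearity are illustrative choices, not the paper's exact construction.

```python
# Sketch: ternary random features with weights in {-1, 0, +1}.
import numpy as np

def ternary_random_features(X, n_features, sparsity=0.5, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Each entry of W is 0 with probability `sparsity`, else +/-1 equally likely,
    # so W is storable with few bits per entry.
    W = rng.choice([-1, 0, 1], size=(n_features, d),
                   p=[(1 - sparsity) / 2, sparsity, (1 - sparsity) / 2])
    # With ternary W, X @ W.T reduces to signed sums of coordinates of x.
    return np.sign(X @ W.T)   # illustrative nonlinearity

X = np.random.default_rng(1).standard_normal((100, 20))
Phi = ternary_random_features(X, n_features=256)
```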
arXiv Detail & Related papers (2021-10-05T09:33:49Z) - Minimax Optimal Regression over Sobolev Spaces via Laplacian
Regularization on Neighborhood Graphs [25.597646488273558]
We study the statistical properties of Laplacian smoothing, a graph-based approach to nonparametric regression.
We prove that Laplacian smoothing is manifold-adaptive.
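A minimal sketch of Laplacian smoothing on a neighborhood graph, assuming the standard estimator $\hat{f} = (I + \lambda L)^{-1} y$ with $L$ the unnormalized graph Laplacian; the graph construction and tuning parameters are illustrative.

```python
# Sketch: Laplacian smoothing on an epsilon-neighborhood graph.
import numpy as np

rng = np.random.default_rng(0)
n = 300
X = rng.uniform(size=(n, 2))                      # design points
y = np.sin(4 * X[:, 0]) + 0.1 * rng.standard_normal(n)

eps = 0.15                                        # neighborhood radius
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = (D2 <= eps ** 2).astype(float) - np.eye(n)    # adjacency, no self-loops
L = np.diag(A.sum(1)) - A                         # unnormalized graph Laplacian

lam = 0.1                                         # smoothing parameter
f_hat = np.linalg.solve(np.eye(n) + lam * L, y)   # Laplacian-smoothed fit
```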
arXiv Detail & Related papers (2021-06-03T01:20:41Z) - Generalization error of random features and kernel methods:
hypercontractivity and kernel matrix concentration [19.78800773518545]
We study the use of random features methods in conjunction with ridge regression in the feature space $\mathbb{R}^N$.
This can be viewed as a finite-dimensional approximation of kernel ridge regression (KRR), or as a stylized model for neural networks in the so-called lazy training regime.
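A minimal sketch of random features ridge regression in $\mathbb{R}^N$ as a finite-dimensional approximation of KRR; the ReLU feature map and parameter choices are illustrative assumptions, not the paper's setup.

```python
# Sketch: ridge regression on random (untrained) ReLU features.
import numpy as np

rng = np.random.default_rng(0)
n, d, N = 400, 20, 1000
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = X[:, 0] ** 3 + 0.1 * rng.standard_normal(n)

W = rng.standard_normal((N, d))                  # random, untrained weights
Phi = np.maximum(X @ W.T, 0.0) / np.sqrt(N)      # feature map phi(x) = relu(Wx)/sqrt(N)

lam = 1e-3
theta = np.linalg.solve(Phi.T @ Phi + n * lam * np.eye(N), Phi.T @ y)  # ridge in R^N

def predict(X_new):
    return np.maximum(X_new @ W.T, 0.0) / np.sqrt(N) @ theta
```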
arXiv Detail & Related papers (2021-01-26T06:46:41Z) - Convergence of Graph Laplacian with kNN Self-tuned Kernels [14.645468999921961]
The self-tuned kernel adaptively sets a bandwidth $\sigma_i$ at each point $x_i$ via the $k$-nearest neighbor (kNN) distance.
This paper proves the convergence of the graph Laplacian operator $L_N$ to the manifold (weighted-)Laplacian for a new family of kNN self-tuned kernels.
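A minimal sketch of a kNN self-tuned kernel and the associated graph Laplacian, assuming the common construction where $\sigma_i$ is the distance from $x_i$ to its $k$-th nearest neighbor; the data, $k$, and normalization are illustrative.

```python
# Sketch: kNN self-tuned affinities and a random-walk graph Laplacian.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 10
X = rng.standard_normal((n, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # points on a sphere (a manifold)

D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
sigma = np.sqrt(np.sort(D2, axis=1)[:, k])         # kNN distance of each point
W = np.exp(-D2 / np.outer(sigma, sigma))           # self-tuned affinity matrix
np.fill_diagonal(W, 0.0)

deg = W.sum(1)
L_rw = np.eye(n) - W / deg[:, None]                # random-walk graph Laplacian
```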
arXiv Detail & Related papers (2020-11-03T04:55:33Z) - Agnostic Learning of a Single Neuron with Gradient Descent [92.7662890047311]
We consider the problem of learning the best-fitting single neuron as measured by the expected square loss.
For the ReLU activation, our population risk guarantee is $O(\mathsf{OPT}^{1/2}) + \epsilon$.
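A minimal sketch of learning a single ReLU neuron with gradient descent on the empirical square loss; the initialization, step size, and noise model are illustrative assumptions.

```python
# Sketch: gradient descent on the square loss for f(x) = relu(<w, x>).
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = np.maximum(X @ w_star, 0.0) + 0.1 * rng.standard_normal(n)  # noisy labels

w = np.full(d, 1e-3)                            # small nonzero initialization
eta = 0.1                                       # illustrative step size
for _ in range(500):
    pred = np.maximum(X @ w, 0.0)
    grad = ((pred - y) * (X @ w > 0)) @ X / n   # gradient of 0.5 * mean square loss
    w -= eta * grad
```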
arXiv Detail & Related papers (2020-05-29T07:20:35Z) - Curse of Dimensionality on Randomized Smoothing for Certifiable
Robustness [151.67113334248464]
We show that extending the smoothing technique to defend against other attack models can be challenging.
We present experimental results on CIFAR to validate our theory.
arXiv Detail & Related papers (2020-02-08T22:02:14Z)