High-dimensional ridge regression with random features for non-identically distributed data with a variance profile
- URL: http://arxiv.org/abs/2504.03035v1
- Date: Thu, 03 Apr 2025 21:20:08 GMT
- Title: High-dimensional ridge regression with random features for non-identically distributed data with a variance profile
- Authors: Issa-Mbenard Dabo, Jérémie Bigot
- Abstract summary: The behavior of the random feature model in the high-dimensional regression framework has become a popular topic in the machine learning literature. We study the performance of the random features model in the setting of non-iid feature vectors.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The behavior of the random feature model in the high-dimensional regression framework has become a popular topic in the machine learning literature. This model is generally considered for feature vectors $x_i = \Sigma^{1/2} x_i'$, where $x_i'$ is a random vector made of independent and identically distributed (iid) entries, and $\Sigma$ is a positive definite matrix representing the covariance of the features. In this paper, we move beyond this standard assumption by studying the performance of the random features model in the setting of non-iid feature vectors. Our approach is related to the analysis of the spectrum of large random matrices through random matrix theory (RMT) and free probability results. We turn to the analysis of non-iid data by using the notion of a variance profile, which is well studied in RMT. Our main contribution is then the study of the limits of the training and prediction risks associated with the ridge estimator in the random features model when its dimensions grow. We provide asymptotic equivalents of these risks that capture the behavior of ridge regression with random features in a high-dimensional framework. These asymptotic equivalents, which prove to be sharp in numerical experiments, are retrieved by adapting, to our setting, established results from operator-valued free probability theory. Moreover, for various classes of random feature vectors that have not been considered so far in the literature, our approach allows us to show the appearance of the double descent phenomenon when the ridge regularization parameter is small enough.
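For intuition, here is a minimal numerical sketch (not the authors' code) of the setting described above: ridge regression on random features of a data matrix whose entries follow a variance profile, i.e. entry $(i,j)$ of the data matrix has its own variance rather than iid entries. The particular profile, the linear teacher, the tanh activation, and all dimensions below are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout): random-features ridge
# regression where the raw data matrix X has a variance profile, i.e. entry
# X[i, j] ~ N(0, gamma[i, j]) rather than iid entries.
import numpy as np

rng = np.random.default_rng(0)
n, d, N = 300, 200, 400           # samples, input dim, random features
lam = 1e-3                        # small ridge penalty (double-descent regime)

# Variance profile: non-iid rows and columns (an arbitrary smooth choice).
gamma = np.outer(1.0 + np.arange(n) / n, 0.5 + np.arange(d) / d)
X = rng.standard_normal((n, d)) * np.sqrt(gamma)

beta = rng.standard_normal(d) / np.sqrt(d)        # linear teacher (assumed)
y = X @ beta + 0.1 * rng.standard_normal(n)

W = rng.standard_normal((d, N))                   # random first-layer weights
Phi = np.tanh(X @ W / np.sqrt(d))                 # random feature map

# Ridge estimator fitted on the random features.
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ y)
train_risk = np.mean((y - Phi @ theta) ** 2)

# Fresh test data drawn from the same variance profile.
X_test = rng.standard_normal((n, d)) * np.sqrt(gamma)
y_test = X_test @ beta + 0.1 * rng.standard_normal(n)
test_risk = np.mean((y_test - np.tanh(X_test @ W / np.sqrt(d)) @ theta) ** 2)

print(f"train risk: {train_risk:.4f}  test risk: {test_risk:.4f}")
```

Sweeping $N$ through $n$ at small `lam` is where the double descent spike in the test risk described in the abstract would be expected to appear.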
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - High-dimensional analysis of ridge regression for non-identically distributed data with a variance profile [0.0]
We study the predictive risk of the ridge estimator for linear regression with a variance profile. For a certain class of variance profiles, our work highlights the emergence of the well-known double descent phenomenon. We also investigate the similarities and differences that exist with the standard setting of independent and identically distributed data.
arXiv Detail & Related papers (2024-03-29T14:24:49Z) - Analysing heavy-tail properties of Stochastic Gradient Descent by means of Stochastic Recurrence Equations [0.0]
In recent works, it has been observed that heavy-tail properties of Stochastic Gradient Descent (SGD) can be studied in the probabilistic framework of stochastic recursions.
We answer several open questions of the quoted paper and extend their results by applying the theory of irreducible-proximal (i-p) matrices.
arXiv Detail & Related papers (2024-03-20T13:39:19Z) - Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) frameworks, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - Simplex Random Features [53.97976744884616]
We present Simplex Random Features (SimRFs), a new random feature (RF) mechanism for unbiased approximation of the softmax and Gaussian kernels.
We prove that SimRFs provide the smallest possible mean square error (MSE) on unbiased estimates of these kernels.
We show consistent gains provided by SimRFs in settings including pointwise kernel estimation, nonparametric classification and scalable Transformers.
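For context, here is a minimal sketch of the classical iid random Fourier feature estimator of the Gaussian kernel, i.e. the baseline whose mean square error SimRFs reduce by correlating the projection directions (the simplex coupling itself is not reproduced here; all dimensions below are illustrative):

```python
# Minimal sketch of classical (iid) random Fourier features for the Gaussian
# kernel -- the baseline that SimRFs improve on, not the SimRF construction.
import numpy as np

rng = np.random.default_rng(1)
d, m = 16, 2048                                   # input dim, number of features

W = rng.standard_normal((m, d))                   # iid Gaussian directions
b = rng.uniform(0.0, 2.0 * np.pi, size=m)         # random phases

def rff(x):
    """Unbiased feature map: E[rff(x) @ rff(y)] = exp(-||x - y||^2 / 2)."""
    return np.sqrt(2.0 / m) * np.cos(W @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = np.exp(-np.sum((x - y) ** 2) / 2.0)       # Gaussian kernel value
approx = rff(x) @ rff(y)                          # random-feature estimate
print(f"exact: {exact:.4f}  RFF estimate: {approx:.4f}")
```

SimRFs keep such estimators unbiased while coupling the rows of W to shrink the estimator's variance.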
arXiv Detail & Related papers (2023-01-31T18:53:39Z) - Generative Principal Component Analysis [47.03792476688768]
We study the problem of principal component analysis with generative modeling assumptions.
The key assumption is that the underlying signal lies near the range of an $L$-Lipschitz continuous generative model with bounded $k$-dimensional inputs.
We propose a quadratic estimator, and show that it enjoys a statistical rate of order $\sqrt{\frac{k \log L}{m}}$, where $m$ is the number of samples.
arXiv Detail & Related papers (2022-03-18T01:48:16Z) - When Random Tensors meet Random Matrices [50.568841545067144]
This paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise.
We show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric block-wise random matrix.
arXiv Detail & Related papers (2021-12-23T04:05:01Z) - Test Set Sizing Via Random Matrix Theory [91.3755431537592]
This paper uses techniques from Random Matrix Theory to find the ideal training-testing data split for a simple linear regression.
It defines "ideal" as satisfying the integrity metric, i.e. the empirical model error is the actual measurement noise.
This paper is the first to solve for the training and test size for any model in a way that is truly optimal.
arXiv Detail & Related papers (2021-12-11T13:18:33Z) - Prediction in latent factor regression: Adaptive PCR and beyond [2.9439848714137447]
We prove a master theorem that establishes a risk bound for a large class of predictors.
We use our main theorem to recover known risk bounds for the minimum-norm interpolating predictor.
We conclude with a detailed simulation study to support and complement our theoretical results.
arXiv Detail & Related papers (2020-07-20T12:42:47Z) - Asymptotic Analysis of an Ensemble of Randomly Projected Linear
Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data; a toy sketch of such a projected ensemble follows below.
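As a toy illustration of the randomly projected ensemble described in this entry (the Gaussian projection law, the dimensions, and the majority-vote rule are assumptions for illustration, not the paper's exact construction):

```python
# Toy sketch (assumed details): an ensemble of Fisher linear discriminants,
# each fit after an independent random Gaussian projection to r dimensions,
# combined by majority vote.
import numpy as np

rng = np.random.default_rng(2)
n, d, r, E = 200, 50, 5, 25       # samples/class, dim, projection dim, ensemble size

mu1 = 0.4 * np.ones(d)
X0 = rng.standard_normal((n, d))                  # class 0: zero mean
X1 = rng.standard_normal((n, d)) + mu1            # class 1: shifted mean

def fit_lda(A0, A1):
    """Fisher linear discriminant with pooled sample covariance."""
    m0, m1 = A0.mean(axis=0), A1.mean(axis=0)
    pooled = np.cov(np.vstack([A0 - m0, A1 - m1]).T)
    w = np.linalg.solve(pooled, m1 - m0)          # discriminant direction
    c = w @ (m0 + m1) / 2.0                       # decision threshold
    return w, c

x = rng.standard_normal(d) + mu1                  # test point drawn from class 1
votes = 0
for _ in range(E):
    R = rng.standard_normal((r, d)) / np.sqrt(r)  # random projection to r dims
    w, c = fit_lda(X0 @ R.T, X1 @ R.T)            # discriminant in projected space
    votes += int(w @ (R @ x) > c)                 # this base classifier's vote
print(f"votes for class 1: {votes}/{E} -> predict class {int(votes > E / 2)}")
```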
arXiv Detail & Related papers (2020-04-17T12:47:04Z)