Universality of max-margin classifiers
- URL: http://arxiv.org/abs/2310.00176v1
- Date: Fri, 29 Sep 2023 22:45:56 GMT
- Title: Universality of max-margin classifiers
- Authors: Andrea Montanari, Feng Ruan, Basil Saeed, Youngtak Sohn
- Abstract summary: We study the role of featurization maps and the high-dimensional universality of the misclassification error for non-Gaussian features.
In particular, the overparametrization threshold and generalization error can be computed within a simpler Gaussian model.
- Score: 10.797131009370219
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Maximum margin binary classification is one of the most fundamental
algorithms in machine learning, yet the role of featurization maps and the
high-dimensional asymptotics of the misclassification error for non-Gaussian
features are still poorly understood. We consider settings in which we observe
binary labels $y_i$ and either $d$-dimensional covariates ${\boldsymbol z}_i$
that are mapped to a $p$-dimension space via a randomized featurization map
${\boldsymbol \phi}:\mathbb{R}^d \to\mathbb{R}^p$, or $p$-dimensional features
of non-Gaussian independent entries. In this context, we study two fundamental
questions: $(i)$ At what overparametrization ratio $p/n$ do the data become
linearly separable? $(ii)$ What is the generalization error of the max-margin
classifier?
Working in the high-dimensional regime in which the number of features $p$,
the number of samples $n$ and the input dimension $d$ (in the nonlinear
featurization setting) diverge, with ratios of order one, we prove a
universality result establishing that the asymptotic behavior is completely
determined by the expected covariance of feature vectors and by the covariance
between features and labels. In particular, the overparametrization threshold
and generalization error can be computed within a simpler Gaussian model.
The main technical challenge lies in the fact that max-margin is not the
maximizer (or minimizer) of an empirical average, but the maximizer of a
minimum over the samples. We address this by representing the classifier as an
average over support vectors. Crucially, we find that in high dimensions, the
support vector count is proportional to the number of samples, which ultimately
yields universality.
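Question $(i)$ can be probed numerically: whether labeled data are linearly separable is a linear-program feasibility question. A minimal sketch, where the sign featurization, the LP check, and all sizes are illustrative choices rather than the paper's construction:

```python
import numpy as np
from scipy.optimize import linprog

def is_separable(X, y):
    """Check linear separability: does some w satisfy y_i <x_i, w> >= 1 for all i?
    Encoded as an LP feasibility problem: minimize 0 s.t. -(y_i x_i)^T w <= -1."""
    n, p = X.shape
    res = linprog(c=np.zeros(p),
                  A_ub=-(y[:, None] * X),
                  b_ub=-np.ones(n),
                  bounds=[(None, None)] * p,
                  method="highs")
    return res.status == 0  # status 0 = feasible (optimal found)

rng = np.random.default_rng(0)
n, d = 100, 20
w_star = rng.standard_normal(d)
Z = rng.standard_normal((n, d))
y = np.sign(Z @ w_star)

# Randomized featurization phi(z) = sign(Wz); separability should appear
# once the overparametrization ratio p/n is large enough.
for p in (50, 400):
    W = rng.standard_normal((p, d)) / np.sqrt(d)
    X = np.sign(Z @ W.T)  # nonlinear feature map (sign activation, assumed)
    print(f"p/n = {p / n:.1f}, separable: {is_separable(X, y)}")
```

For $p \ge n$, generic points are separable almost surely; the interesting regime is where $p/n$ crosses the exact threshold that the paper characterizes via the equivalent Gaussian model.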
Related papers
- Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) frameworks, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - Repeated Observations for Classification [0.2676349883103404]
We study the problem of nonparametric classification with repeated observations.
In the analysis, we investigate particular models such as robust detection by nominal densities, prototype classification, linear transformation, linear classification, and scaling.
arXiv Detail & Related papers (2023-07-19T10:50:36Z) - Dimension free ridge regression [10.434481202633458]
We revisit ridge regression on i.i.d. data, characterizing its bias and variance in terms of those of an 'equivalent' sequence model.
As a new application, we obtain a completely explicit and sharp characterization of ridge regression for Hilbert covariates with regularly varying spectrum.
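As a reference point, the ridge estimator itself is a one-line closed form; a minimal sketch with illustrative sizes (the sequence-model analysis concerns the spectrum of $X^TX$ entering this formula):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, lam = 120, 60, 1.0  # illustrative sizes and regularization strength

X = rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta + 0.5 * rng.standard_normal(n)

# Closed-form ridge estimator (X^T X + n*lam*I)^{-1} X^T y; its bias and
# variance are functions of the sample-covariance spectrum, which is what
# the equivalent sequence model tracks.
beta_hat = np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T @ y)
```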
arXiv Detail & Related papers (2022-10-16T16:01:05Z) - Approximate Function Evaluation via Multi-Armed Bandits [51.146684847667125]
We study the problem of estimating the value of a known smooth function $f$ at an unknown point $\boldsymbol{\mu} \in \mathbb{R}^n$, where each component $\mu_i$ can be sampled via a noisy oracle.
We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate, and with probability at least $1-\delta$ returns an $\epsilon$-accurate estimate of $f(\boldsymbol{\mu})$.
arXiv Detail & Related papers (2022-03-18T18:50:52Z) - Universality of empirical risk minimization [12.764655736673749]
Consider supervised learning from i.i.d. samples where $\boldsymbol{x}_i \in \mathbb{R}^p$ are feature vectors and $y_i \in \mathbb{R}$ are labels.
We study empirical risk universality over a class of functions that are parameterized by $\mathsf{k}$.
arXiv Detail & Related papers (2022-02-17T18:53:45Z) - Classification of high-dimensional data with spiked covariance matrix
structure [0.2741266294612775]
We study the classification problem for high-dimensional data with $n$ observations on $p$ features.
We propose an adaptive classifier that first performs dimension reduction on the feature vectors prior to classification in the dimensionally reduced space.
We show that the resulting classifier is Bayes optimal whenever $n \rightarrow \infty$ and $s\sqrt{n^{-1}\ln p} \rightarrow 0$.
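The reduce-then-classify strategy can be sketched in a few lines; the spike strength, the centroid rule, and all sizes below are illustrative assumptions, not the paper's exact classifier:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, s = 200, 500, 3  # s = number of spikes (illustrative sizes)

# Spiked model: the class mean lies in a low-dimensional "spike" subspace.
U = np.linalg.qr(rng.standard_normal((p, s)))[0]  # orthonormal spike directions
mu = 2.0 * (U @ rng.standard_normal(s))           # signal confined to that subspace
y = rng.choice([-1, 1], size=n)
X = y[:, None] * mu + rng.standard_normal((n, p))

# Step 1: reduce dimension by projecting onto the top sample singular vectors.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:s]                                        # estimated signal subspace

# Step 2: classify in the reduced space with a simple centroid rule.
Xr = X @ P.T
w = Xr[y == 1].mean(0) - Xr[y == -1].mean(0)
acc = (np.sign(Xr @ w) == y).mean()
```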
arXiv Detail & Related papers (2021-10-05T11:26:53Z) - Random matrices in service of ML footprint: ternary random features with
no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random-features technique called Ternary Random Features (TRF).
The computation of the proposed random features requires no multiplication and a factor of $b$ less bits for storage compared to classical random features.
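A minimal sketch of a ternary random-features map, where the 50% sparsity and the sign nonlinearity are assumptions for illustration rather than the TRF construction itself:

```python
import numpy as np

rng = np.random.default_rng(2)
d, p, n = 32, 256, 8

# Ternary random weights in {-1, 0, +1}: the projection needs no true
# multiplications, and each weight fits in 2 bits instead of a float.
sparsity = 0.5  # fraction of zero weights -- an assumed hyperparameter
W = rng.choice([-1, 0, 1], size=(p, d),
               p=[(1 - sparsity) / 2, sparsity, (1 - sparsity) / 2])

Z = rng.standard_normal((n, d))
features = np.sign(Z @ W.T)       # sign nonlinearity, also multiplication-free
K = (features @ features.T) / p   # empirical random-features kernel
```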
arXiv Detail & Related papers (2021-10-05T09:33:49Z) - Spectral properties of sample covariance matrices arising from random
matrices with independent non identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\text{tr}(AR(z))$, for $R(z) = (\frac{1}{n}XX^T - zI_p)^{-1}$ and $A \in \mathcal{M}_p$ deterministic, have a standard deviation of order $O(\|A\|_* / \sqrt{n})$.
Here, we show that $\|\mathbb{E}[R(z)] - \tilde{R}(z)\|_F$
arXiv Detail & Related papers (2021-09-06T14:21:43Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is L4-L2 hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Consistent Structured Prediction with Max-Min Margin Markov Networks [84.60515484036239]
Max-margin methods for binary classification have been extended to the structured prediction setting under the name of max-margin Markov networks ($M^3N$).
We overcome the consistency limitations of $M^3N$ by defining the learning problem in terms of a "max-min" margin formulation, naming the resulting method max-min margin Markov networks ($M^4N$).
Experiments on multi-class classification, ordinal regression, sequence prediction and ranking demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-07-02T10:48:42Z) - The generalization error of max-margin linear classifiers: Benign
overfitting and high dimensional asymptotics in the overparametrized regime [11.252856459394854]
Modern machine learning classifiers often exhibit vanishing classification error on the training set.
Motivated by these phenomena, we revisit high-dimensional maximum margin classification for linearly separable data.
arXiv Detail & Related papers (2019-11-05T00:15:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.