Total Stability of SVMs and Localized SVMs
- URL: http://arxiv.org/abs/2101.12678v1
- Date: Fri, 29 Jan 2021 16:44:14 GMT
- Title: Total Stability of SVMs and Localized SVMs
- Authors: Hannes Köhler, Andreas Christmann
- Abstract summary: Regularized kernel-based methods such as support vector machines (SVMs) depend on the underlying probability measure $\mathrm{P}$.
The present paper investigates the influence of simultaneous slight variations in the whole triple $(\mathrm{P},\lambda,k)$ on the resulting predictor.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Regularized kernel-based methods such as support vector machines (SVMs)
typically depend on the underlying probability measure $\mathrm{P}$
(or, in applications, an empirical measure $\mathrm{D}_n$) as well as on the
regularization parameter $\lambda$ and the kernel $k$. Whereas classical
statistical robustness only considers the effect of small perturbations in
$\mathrm{P}$, the present paper investigates the influence of simultaneous
slight variations in the whole triple $(\mathrm{P},\lambda,k)$ (or
$(\mathrm{D}_n,\lambda_n,k)$ in the empirical case) on the resulting predictor. Existing results from
the literature are considerably generalized and improved. In order to also make
them applicable to big data, where regular SVMs suffer from their super-linear
computational requirements, we show how our results can be transferred to the
context of localized learning. Here, the effect of slight variations in the
applied regionalization, which might for example stem from changes in
$\mathrm{P}$ or $\mathrm{D}_n$, is considered as well.
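As a hedged illustration of the notion being studied (not the paper's construction), the following Python sketch fits an SVM on a sample, refits it after jointly perturbing the data, the regularization parameter, and the kernel parameter, and measures how far the predictor moves on a grid. The correspondence $C = 1/(2n\lambda)$ and all perturbation sizes are illustrative assumptions.

```python
# A minimal sketch, assuming scikit-learn's SVR as a stand-in for the SVMs
# in the paper; perturbation sizes and the sup-norm proxy are illustrative.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(n)

def fit_predictor(X, y, lam, gamma):
    # scikit-learn parametrizes regularization via C; C = 1/(2*n*lambda)
    # is one common correspondence (an assumption, not the paper's).
    model = SVR(kernel="rbf", C=1.0 / (2 * len(y) * lam), gamma=gamma)
    return model.fit(X, y)

lam, gamma = 1e-3, 1.0
f_base = fit_predictor(X, y, lam, gamma)

# Perturb all three components slightly: nudge a few labels of D_n,
# nudge lambda_n, and nudge the kernel parameter gamma.
y_pert = y.copy()
idx = rng.choice(n, size=5, replace=False)
y_pert[idx] += 0.05 * rng.standard_normal(5)
f_pert = fit_predictor(X, y_pert, lam * 1.05, gamma * 1.05)

grid = np.linspace(-1, 1, 400).reshape(-1, 1)
sup_dist = np.max(np.abs(f_base.predict(grid) - f_pert.predict(grid)))
print(f"sup-norm change of the predictor: {sup_dist:.4f}")
```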
Related papers
- Effective Minkowski Dimension of Deep Nonparametric Regression: Function
Approximation and Statistical Theories [70.90012822736988]
Existing theories on deep nonparametric regression have shown that when the input data lie on a low-dimensional manifold, deep neural networks can adapt to intrinsic data structures.
This paper introduces a relaxed assumption that the input data are concentrated around a subset of $\mathbb{R}^d$ denoted by $\mathcal{S}$, and that the intrinsic dimension of $\mathcal{S}$ can be characterized by a new complexity notion -- the effective Minkowski dimension.
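As a rough illustration (using the classical box-counting notion rather than the paper's effective variant, which is my simplification), the Minkowski dimension of data concentrated near a low-dimensional subset $\mathcal{S} \subset \mathbb{R}^d$ can be read off from covering counts at several scales:

```python
# A minimal sketch, assuming box-counting as a proxy for Minkowski dimension;
# the paper's "effective" refinement is not implemented here.
import numpy as np

rng = np.random.default_rng(0)
d = 10
t = rng.uniform(0, 2 * np.pi, size=2000)
# A 1-dimensional curve embedded in R^10, plus tiny ambient noise.
X = np.zeros((2000, d))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)
X += 0.001 * rng.standard_normal(X.shape)

def covering_number(X, eps):
    # Number of distinct eps-grid cells occupied by the data.
    cells = np.floor(X / eps).astype(np.int64)
    return len({tuple(c) for c in cells})

scales = np.array([0.4, 0.2, 0.1, 0.05])
counts = np.array([covering_number(X, e) for e in scales])
slope = np.polyfit(np.log(1.0 / scales), np.log(counts), 1)[0]
print(f"estimated Minkowski dimension: {slope:.2f}")  # close to 1, not d=10
```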
arXiv Detail & Related papers (2023-06-26T17:13:31Z) - Statistical Learning under Heterogeneous Distribution Shift [71.8393170225794]
The ground-truth predictor is assumed to be additive: $\mathbb{E}[\mathbf{z} \mid \mathbf{x}, \mathbf{y}] = f_\star(\mathbf{x}) + g_\star(\mathbf{y})$.
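A minimal sketch of what this additive structure buys (my illustration, not the paper's estimator): a simple backfitting loop that alternately fits $f$ against $x$ and $g$ against $y$ recovers the two components up to centering.

```python
# A minimal sketch, assuming a polynomial smoother as the 1-d fitting step;
# the paper's setting and estimator are more general.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
z = np.sin(2 * x) + y**2 + 0.1 * rng.standard_normal(n)

def smooth_1d(u, r, deg=5):
    # Polynomial fit as a simple stand-in for a nonparametric smoother.
    return np.polyval(np.polyfit(u, r, deg), u)

f_hat, g_hat = np.zeros(n), np.zeros(n)
for _ in range(20):  # backfitting iterations
    f_hat = smooth_1d(x, z - g_hat)
    g_hat = smooth_1d(y, z - f_hat)
    g_hat -= g_hat.mean()  # identifiability: center g

resid = z - f_hat - g_hat
print(f"residual std: {resid.std():.3f}")  # near the noise level 0.1
```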
arXiv Detail & Related papers (2023-02-27T16:34:21Z) - Approximate Function Evaluation via Multi-Armed Bandits [51.146684847667125]
We study the problem of estimating the value of a known smooth function $f$ at an unknown point $\boldsymbol{\mu} \in \mathbb{R}^n$, where each component $\mu_i$ can be sampled via a noisy oracle.
We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate and, with probability at least $1-\delta$, returns an $\epsilon$-accurate estimate of $f(\boldsymbol{\mu})$.
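A simplified sketch of the allocation idea (not the paper's algorithm): for a linear $f$ with known sensitivities, sampling each coordinate in proportion to $|\partial f / \partial \mu_i|$ reduces the error of the plug-in estimate $f(\hat{\boldsymbol{\mu}})$.

```python
# A minimal sketch, assuming a known linear f and unit-variance oracle noise;
# the paper's algorithm learns the allocation adaptively instead.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([2.0, -1.0, 0.5])                     # unknown point (simulated)
f = lambda v: 3 * v[0] + 0.5 * v[1] + 0.01 * v[2]   # known smooth function
grad = np.array([3.0, 0.5, 0.01])                   # sensitivities |df/dmu_i|

def noisy_oracle(i):
    return mu[i] + rng.standard_normal()  # one noisy sample of coordinate i

budget = 3000
# Importance-proportional allocation (optimal for linear f with equal noise;
# that choice is an assumption of this sketch).
alloc = np.maximum((budget * grad / grad.sum()).astype(int), 1)
mu_hat = np.array([np.mean([noisy_oracle(i) for _ in range(alloc[i])])
                   for i in range(len(mu))])
print(f"allocation {alloc}, error = {abs(f(mu_hat) - f(mu)):.4f}")
```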
arXiv Detail & Related papers (2022-03-18T18:50:52Z) - Differentiated uniformization: A new method for inferring Markov chains
on combinatorial state spaces including stochastic epidemic models [0.0]
We provide an analogous algorithm for computing $\partial\exp\!(tQ)\theta$.
We estimate monthly infection and recovery rates during the first wave of the COVID-19 pandemic in Austria.
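For context, plain uniformization evaluates $\exp(tQ)\theta$ as a weighted series of matrix-vector products. The sketch below implements that series and approximates the parameter derivative by finite differences, which is only a stand-in for the paper's analytic differentiated scheme.

```python
# A minimal sketch: uniformization computes exp(tQ) @ theta via
# exp(tQ)theta = sum_k e^{-gamma t} (gamma t)^k / k! * P^k theta,
# with P = I + Q/gamma and gamma >= max_i |Q_ii|. The derivative below is a
# finite-difference stand-in, not the paper's differentiated uniformization.
import numpy as np

def uniformization(Q, theta, t, tol=1e-12, max_terms=1000):
    gamma = np.max(-np.diag(Q))
    P = np.eye(Q.shape[0]) + Q / gamma
    term = theta.astype(float)           # P^0 theta
    weight = np.exp(-gamma * t)          # e^{-gamma t} (gamma t)^0 / 0!
    result = weight * term
    for k in range(1, max_terms):
        term = P @ term
        weight *= gamma * t / k
        result += weight * term
        if weight < tol:
            break
    return result

def Q_of(beta):  # a 3-state generator whose rates depend on a parameter beta
    return np.array([[-beta, beta, 0.0],
                     [0.5, -1.0, 0.5],
                     [0.0, 0.3, -0.3]])

theta = np.array([1.0, 0.0, 0.0])
t, beta, h = 2.0, 0.8, 1e-6
d_beta = (uniformization(Q_of(beta + h), theta, t)
          - uniformization(Q_of(beta - h), theta, t)) / (2 * h)
print("d/dbeta of exp(tQ)theta:", d_beta)
```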
arXiv Detail & Related papers (2021-12-21T03:59:06Z) - Random matrices in service of ML footprint: ternary random features with
no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random features technique, called Ternary Random Features (TRF).
Computing the proposed random features requires no multiplication, and storage takes a factor of $b$ fewer bits compared to classical random features.
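A minimal sketch of the TRF idea as described above (the sparsity level and nonlinearity are my illustrative choices, not the paper's tuned ones): draw projection entries from $\{-1, 0, +1\}$, so projections need only additions and each weight fits in 2 bits, then compare the resulting Gram spectrum with a Gaussian-feature baseline.

```python
# A minimal sketch, assuming a sign nonlinearity and sparsity p = 0.5;
# the demo uses matrix multiplies, but ternary weights only need add/subtract.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 500, 50, 2000  # samples, input dim, number of random features
X = rng.standard_normal((n, d)) / np.sqrt(d)

# Ternary weights: P(-1) = P(+1) = p/2, P(0) = 1 - p.
p = 0.5
W_ternary = rng.choice([-1, 0, 1], size=(m, d), p=[p / 2, 1 - p, p / 2])
W_gauss = rng.standard_normal((m, d)) * np.sqrt(p)  # variance-matched baseline

phi = lambda Z: np.sign(Z)  # a multiplication-free nonlinearity (illustrative)
K_ternary = phi(X @ W_ternary.T) @ phi(X @ W_ternary.T).T / m
K_gauss = phi(X @ W_gauss.T) @ phi(X @ W_gauss.T).T / m

# The abstract's claim: the limiting eigenspectrum does not depend on the
# distribution of the entries of w, so the two spectra should be close.
ev_t = np.sort(np.linalg.eigvalsh(K_ternary))
ev_g = np.sort(np.linalg.eigvalsh(K_gauss))
print(f"max eigenvalue gap: {np.max(np.abs(ev_t - ev_g)):.3f}")
```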
arXiv Detail & Related papers (2021-10-05T09:33:49Z) - Spectral properties of sample covariance matrices arising from random
matrices with independent non identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\operatorname{tr}(AR(z))$, for $R(z) = (\frac{1}{n}XX^T - zI_p)^{-1}$ and $A \in \mathcal{M}_p$ deterministic, have a standard deviation of order $O(\|A\|_* / \sqrt{n})$.
Here, we bound the Frobenius distance $\|\mathbb{E}[R(z)] - \tilde{R}(z)\|_F$ to a deterministic equivalent $\tilde{R}(z)$.
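To make the object concrete, the sketch below builds the resolvent $R(z) = (\frac{1}{n}XX^T - zI_p)^{-1}$ for Gaussian data and checks the concentration of $\operatorname{tr}(AR(z))$ across independent draws; the deterministic equivalent $\tilde{R}(z)$ itself is paper-specific and is not computed here.

```python
# A minimal sketch, assuming i.i.d. Gaussian entries and A = I_p; the observed
# spread should be consistent with the O(||A||_* / sqrt(n)) upper bound.
import numpy as np

rng = np.random.default_rng(0)
p, n, z = 100, 400, -1.0  # z on the negative real axis keeps R(z) well-defined
A = np.eye(p)             # a simple deterministic test matrix, ||A||_* = p

def trace_functional(seed):
    X = np.random.default_rng(seed).standard_normal((p, n))
    R = np.linalg.inv(X @ X.T / n - z * np.eye(p))
    return np.trace(A @ R)

vals = np.array([trace_functional(s) for s in range(200)])
print(f"std of tr(A R(z)) over draws: {vals.std():.4f}")
```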
arXiv Detail & Related papers (2021-09-06T14:21:43Z) - Truncated Linear Regression in High Dimensions [26.41623833920794]
In truncated linear regression, one observes samples $(A_i, y_i)_i$ whose dependent variable equals $y_i = A_i^{\mathrm{T}} \cdot x^* + \eta_i$, where $x^*$ is some fixed unknown vector of interest and $\eta_i$ is noise.
The goal is to recover $x^*$ under some favorable conditions on the $A_i$'s and the noise distribution.
We prove that there exists a computationally and statistically efficient method for recovering $k$-sparse $n$-dimensional vectors $x^*$ from $m$ truncated samples.
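A simplified sketch of one standard approach to this setting (Gaussian noise, known truncation set $\{y > c\}$, no sparsity; not necessarily the paper's method): gradient descent on the truncated negative log-likelihood, whose gradient involves the truncated-normal mean via the inverse Mills ratio.

```python
# A minimal sketch, assuming unit-variance Gaussian noise and truncation to
# {y > c}; the k-sparse case from the abstract is ignored for brevity.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d, m, c = 5, 5000, 0.0
x_star = rng.standard_normal(d)
A = rng.standard_normal((20000, d))
y = A @ x_star + rng.standard_normal(20000)
keep = y > c                      # truncation: only these samples are observed
A_obs, y_obs = A[keep][:m], y[keep][:m]

def truncated_mean(mu, c):
    # E[N(mu, 1) | value > c], via the inverse Mills ratio.
    alpha = c - mu
    return mu + norm.pdf(alpha) / norm.sf(alpha)

x_hat = np.zeros(d)
lr = 0.5
for _ in range(200):
    # Gradient of the truncated NLL: (E[y' | y' > c, mean = A x] - y) * A.
    mu = A_obs @ x_hat
    grad = A_obs.T @ (truncated_mean(mu, c) - y_obs) / m
    x_hat -= lr * grad

print(f"recovery error: {np.linalg.norm(x_hat - x_star):.3f}")
```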
arXiv Detail & Related papers (2020-07-29T00:31:34Z) - Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with an entropic regularization requires computing an $n \times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y) = -\log\langle\varphi(x),\varphi(y)\rangle$, where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r \ll n$.
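A minimal sketch of the mechanism (the positive feature map below is a generic choice of mine, not the paper's construction): when the Gibbs kernel factorizes as $K = \Phi_x \Phi_y^{\mathrm{T}}$ with entrywise-positive features, each Sinkhorn update costs $O(nr)$ rather than $O(n^2)$.

```python
# A minimal sketch, assuming exponential random features as the positive map
# phi; only the factorized matrix-vector products are the point here.
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 2000, 3, 64
x, y = rng.standard_normal((n, d)), rng.standard_normal((n, d)) + 0.5
a = np.full(n, 1.0 / n)
b = np.full(n, 1.0 / n)

# Positive feature map: phi(u) > 0 entrywise, so K = Phi_x Phi_y^T > 0.
W = rng.standard_normal((d, r)) / np.sqrt(d)
phi = lambda u: np.exp(u @ W - np.sum(u**2, axis=1, keepdims=True) / 2)
Phi_x, Phi_y = phi(x), phi(y)

u, v = np.ones(n), np.ones(n)
for _ in range(100):  # Sinkhorn updates with O(n r) matrix-vector products
    u = a / (Phi_x @ (Phi_y.T @ v))
    v = b / (Phi_y @ (Phi_x.T @ u))

row_marginals = u * (Phi_x @ (Phi_y.T @ v))  # marginals of the implicit plan
print(f"max row-marginal error: {np.max(np.abs(row_marginals - a)):.2e}")
```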
arXiv Detail & Related papers (2020-06-12T10:21:40Z) - Does generalization performance of $l^q$ regularization learning depend
on $q$? A negative example [19.945160684285003]
$l^q$-regularization has been demonstrated to be an attractive technique in machine learning and statistical modeling.
We show that all $l^q$ estimators for $0 < q < \infty$ attain similar generalization error bounds.
This finding tentatively reveals that, in some modeling contexts, the choice of $q$ might not have a strong impact in terms of the generalization capability.
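A small empirical probe of this claim (restricted to $q \in \{1, 2\}$, which scikit-learn supports directly; not the paper's experiment): tune the regularization strength separately for each $q$ and compare test errors.

```python
# A minimal sketch, assuming a sparse linear model and cross-validated
# regularization strength per estimator; only q = 1 and q = 2 are compared.
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV

rng = np.random.default_rng(0)
n, d = 200, 50
beta = np.zeros(d)
beta[:5] = rng.standard_normal(5)          # a simple sparse ground truth
X = rng.standard_normal((2 * n, d))
y = X @ beta + 0.5 * rng.standard_normal(2 * n)
Xtr, ytr, Xte, yte = X[:n], y[:n], X[n:], y[n:]

for name, model in [("q=1 (Lasso)", LassoCV(cv=5)),
                    ("q=2 (ridge)", RidgeCV(alphas=np.logspace(-3, 3, 30)))]:
    model.fit(Xtr, ytr)
    mse = np.mean((model.predict(Xte) - yte) ** 2)
    print(f"{name}: test MSE = {mse:.3f}")
```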
arXiv Detail & Related papers (2013-07-25T00:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.