A Nearly Tight Bound for Fitting an Ellipsoid to Gaussian Random Points
- URL: http://arxiv.org/abs/2212.11221v1
- Date: Wed, 21 Dec 2022 17:48:01 GMT
- Title: A Nearly Tight Bound for Fitting an Ellipsoid to Gaussian Random Points
- Authors: Daniel M. Kane and Ilias Diakonikolas
- Abstract summary: This nearly establishes a conjecture of Saunderson et al. [SaundersonCPW12], within logarithmic factors.
The latter conjecture has attracted significant attention over the past decade, due to its connections to machine learning and sum-of-squares lower bounds for certain statistical problems.
- Score: 50.90125395570797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We prove that, for a sufficiently small universal constant $c>0$, a random
set of $c d^2/\log^4(d)$ independent Gaussian random points in $\mathbb{R}^d$
lies on a common ellipsoid with high probability. This nearly establishes a
conjecture of~\cite{SaundersonCPW12}, within logarithmic factors. The latter
conjecture has attracted significant attention over the past decade, due to its
connections to machine learning and sum-of-squares lower bounds for certain
statistical problems.
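The statement can be read as a semidefinite feasibility question: the points lie on a common centered ellipsoid exactly when there is a positive semidefinite matrix $A$ with $x_i^\top A x_i = 1$ for every point $x_i$. Below is a minimal numerical sketch of that feasibility check using cvxpy; the tooling is my choice for illustration, as the paper itself is purely theoretical.

```python
import numpy as np
import cvxpy as cp

def ellipsoid_fit_feasible(points):
    """Check whether the rows x_1, ..., x_n of `points` lie on a common
    centered ellipsoid, i.e. whether some PSD matrix A satisfies
    x_i^T A x_i = 1 for all i."""
    d = points.shape[1]
    A = cp.Variable((d, d), PSD=True)
    constraints = [cp.trace(A @ np.outer(x, x)) == 1 for x in points]
    prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
    prob.solve()
    return prob.status == cp.OPTIMAL

rng = np.random.default_rng(0)
d, n = 10, 30  # n well below ~d^2/4, so feasibility is expected
X = rng.standard_normal((n, d))
print(ellipsoid_fit_feasible(X))
```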
Related papers
- SoS Certifiability of Subgaussian Distributions and its Algorithmic Applications [37.208622097149714]
We prove that there is a universal constant $C>0$ so that for every $d \in \mathbb{N}$, every centered subgaussian distribution $\mathcal{D}$ on $\mathbb{R}^d$, and every even $p \in \mathbb{N}$, the degree-$p$ sum-of-squares relaxation certifies the $p$-th moment bounds of $\mathcal{D}$ in time $d^{O(p)}$.
This establishes that every subgaussian distribution is \emph{SoS-certifiably subgaussian} -- a condition that yields efficient learning algorithms for a wide variety of high-dimensional statistical tasks.
arXiv Detail & Related papers (2024-10-28T16:36:58Z) - Fitting an ellipsoid to a quadratic number of random points [10.208117253395342]
We consider the problem $(\mathrm{P})$ of fitting $n$ standard Gaussian random vectors in $\mathbb{R}^d$ to the boundary of a centered ellipsoid, as $n, d \to \infty$.
This problem is conjectured to have a sharp feasibility transition: for any $\varepsilon > 0$, if $n \leq (1 - \varepsilon) d^2 / 4$ then $(\mathrm{P})$ has a solution with high probability.
arXiv Detail & Related papers (2023-07-03T17:46:23Z) - Strong uniform convergence of Laplacians of random geometric and directed kNN graphs on compact manifolds [0.0]
We study the almost sure uniform convergence of these graph Laplacians to the diffusive Laplace-Beltrami operator as $n$ tends to infinity.
This work extends known results of the past 15 years.
arXiv Detail & Related papers (2022-12-20T14:31:06Z) - Near-optimal fitting of ellipsoids to random points [68.12685213894112]
A basic problem of fitting an ellipsoid to random points has connections to low-rank matrix decompositions, independent component analysis, and principal component analysis.
We resolve this conjecture up to logarithmic factors by constructing a fitting ellipsoid for some $n = \Omega(d^2/\mathrm{polylog}(d))$.
Our proof demonstrates feasibility of the least squares construction of Saunderson et al. using a convenient decomposition of a certain non-standard random matrix.
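For illustration, here is a rough numpy sketch of a least-squares construction in this spirit: start from $A_0 = I/d$, which nearly fits standard Gaussian points since $\|x\|^2 \approx d$, and apply the minimum-norm correction that enforces the fitting constraints exactly. The normalization and solver details are my assumptions, not taken from the paper.

```python
import numpy as np

def least_squares_ellipsoid(X):
    """Among matrices A with x_i^T A x_i = 1 for all rows x_i of X, return
    the one closest in Frobenius norm to I/d (minimum-norm correction)."""
    n, d = X.shape
    # The linear map A -> (x_i^T A x_i)_i, acting on vec(A).
    M = np.stack([np.outer(x, x).ravel() for x in X])  # shape (n, d*d)
    A0 = np.eye(d) / d                                 # x^T A0 x ~ 1 for Gaussian x
    resid = 1.0 - M @ A0.ravel()
    # Minimum-norm solution of M vec(Delta) = resid.
    delta, *_ = np.linalg.lstsq(M, resid, rcond=None)
    A = A0 + delta.reshape(d, d)
    return (A + A.T) / 2                               # symmetrize

rng = np.random.default_rng(1)
d, n = 20, 60
X = rng.standard_normal((n, d))
A = least_squares_ellipsoid(X)
print(np.allclose(np.einsum('ni,ij,nj->n', X, A, X), 1.0))  # constraints hold
print(np.all(np.linalg.eigvalsh(A) > 0))  # PSD check: the crux of the analysis
```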
arXiv Detail & Related papers (2022-08-19T18:00:34Z) - On the Multidimensional Random Subset Sum Problem [0.9007371440329465]
In the Random Subset Sum Problem, given $n$ i.i.d. random variables $X_1, \ldots, X_n$, we wish to approximate any point $z \in [-1,1]$ as the sum of a suitable subset $X_{i_1(z)}, \ldots, X_{i_s(z)}$ of them, up to error $\varepsilon$.
We prove that, in $d$ dimensions, $n = O\!\left(d^3 \log \frac{1}{\varepsilon} \cdot \left(\log \frac{1}{\varepsilon} + \log d\right)\right)$ samples suffice.
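For intuition, here is a brute-force check of the one-dimensional phenomenon; this toy illustration is mine, not the paper's method, and is workable only for small $n$ since it enumerates all $2^n$ subset sums.

```python
import itertools
import numpy as np

def rss_worst_error(samples, targets):
    """Worst-case error of approximating each target by its best subset sum."""
    sums = np.array([sum(c) for r in range(len(samples) + 1)
                     for c in itertools.combinations(samples, r)])
    return max(np.abs(sums - z).min() for z in targets)

rng = np.random.default_rng(2)
n = 16
samples = rng.uniform(-1, 1, size=n)     # i.i.d. random inputs
targets = np.linspace(-1, 1, 201)        # grid of points z to approximate
print(rss_worst_error(samples, targets)) # typically ~2^(-Theta(n))
```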
arXiv Detail & Related papers (2022-07-28T08:10:43Z) - A Law of Robustness beyond Isoperimetry [84.33752026418045]
We prove a Lipschitzness lower bound of $\Omega(\sqrt{n/p})$ on the robustness of interpolating neural network parameters on arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n = \mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n = \exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z) - Random matrices in service of ML footprint: ternary random features with no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random-features technique, called Ternary Random Features (TRF).
Computing the proposed random features requires no multiplication, and they take a factor of $b$ fewer bits to store than classical random features.
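A minimal sketch of the idea, with the sparsity level and nonlinearity as illustrative guesses rather than the paper's exact design: with entries of $\mathbf{W}$ in $\{-1, 0, +1\}$, the projection $\mathbf{W}x$ reduces to signed additions, and each weight fits in two bits.

```python
import numpy as np

def ternary_random_features(X, m, sparsity=0.5, seed=0):
    """Map rows of X to m random features through a ternary matrix W.
    Entries of W are -1/+1 with probability (1 - sparsity)/2 each and 0
    otherwise, so W @ x needs only additions/subtractions."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.choice([-1.0, 0.0, 1.0], size=(m, d),
                   p=[(1 - sparsity) / 2, sparsity, (1 - sparsity) / 2])
    return np.cos(X @ W.T)  # illustrative nonlinearity

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 32))
Phi = ternary_random_features(X, m=256)
K = Phi @ Phi.T / Phi.shape[1]  # approximate kernel matrix
print(K.shape)
```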
arXiv Detail & Related papers (2021-10-05T09:33:49Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression, where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number, and $\epsilon$ has bounded variance; and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is sub-Gaussian.
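To make the generative model concrete, here is a small simulation under setting (i)-style assumptions (bounded-variance, heavy-tailed noise); the Huber estimator below is a standard stand-in for illustration, not one of the estimators proposed in the paper.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(4)
n, d = 2000, 20
w_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))     # covariates with identity second moment
eps = rng.standard_t(df=3, size=n)  # heavy-tailed noise with bounded variance
y = X @ w_star + eps                # the model Y = <X, w*> + eps

w_hat = HuberRegressor(fit_intercept=False).fit(X, y).coef_
print(np.linalg.norm(w_hat - w_star))  # estimation error
```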
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Robustly Learning any Clusterable Mixture of Gaussians [55.41573600814391]
We study the efficient learnability of high-dimensional Gaussian mixtures in the adversarial-robust setting.
We provide an algorithm that learns the components of an $\epsilon$-corrupted $k$-mixture within information-theoretically near-optimal error of $\tilde{O}(\epsilon)$.
Our main technical contribution is a new robust identifiability proof for clusters of a Gaussian mixture, which can be captured by the constant-degree Sum-of-Squares proof system.
arXiv Detail & Related papers (2020-05-13T16:44:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.