Continuous LWE is as Hard as LWE & Applications to Learning Gaussian
Mixtures
- URL: http://arxiv.org/abs/2204.02550v1
- Date: Wed, 6 Apr 2022 03:03:39 GMT
- Title: Continuous LWE is as Hard as LWE & Applications to Learning Gaussian
Mixtures
- Authors: Aparna Gupte, Neekon Vafa, Vinod Vaikuntanathan
- Abstract summary: We show direct and conceptually simple reductions between the classical learning with errors problem and its continuous analog, CLWE.
This allows us to bring to bear the powerful machinery of LWE-based cryptography to the applications of CLWE.
- Score: 11.348971335676444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show direct and conceptually simple reductions between the classical
learning with errors (LWE) problem and its continuous analog, CLWE (Bruna,
Regev, Song and Tang, STOC 2021). This allows us to bring to bear the powerful
machinery of LWE-based cryptography to the applications of CLWE. For example,
we obtain the hardness of CLWE under the classical worst-case hardness of the
gap shortest vector problem. Previously, this was known only under quantum
worst-case hardness of lattice problems. More broadly, with our reductions
between the two problems, any future developments to LWE will also apply to
CLWE and its downstream applications.
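For intuition, the two problems differ mainly in their sample format: LWE samples live over $\mathbb{Z}_q$ with a discrete secret, while CLWE samples (as defined by Bruna, Regev, Song and Tang) are real-valued with a unit secret direction and arithmetic mod 1. Below is a minimal sketch of both sample distributions; the function names and parameter values are illustrative choices, not the ones used in the paper's reductions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lwe_samples(n=64, m=128, q=3329, sigma=3.2):
    """Classical LWE: pairs (A, b = A s + e mod q) with a secret s in Z_q^n
    and small rounded-Gaussian noise e (parameters are illustrative)."""
    s = rng.integers(0, q, size=n)
    A = rng.integers(0, q, size=(m, n))
    e = np.rint(rng.normal(0, sigma, size=m)).astype(int)
    return A, (A @ s + e) % q

def clwe_samples(n=64, m=128, gamma=2.0, beta=0.01):
    """CLWE: pairs (y, z = gamma * <y, w> + e mod 1) with a unit secret
    direction w in R^n, standard Gaussian y, and Gaussian noise e."""
    w = rng.normal(size=n)
    w /= np.linalg.norm(w)
    y = rng.normal(size=(m, n))
    e = rng.normal(0, beta, size=m)
    return y, (gamma * (y @ w) + e) % 1.0
```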
As a concrete application, we show an improved hardness result for density
estimation for mixtures of Gaussians. In this computational problem, given
sample access to a mixture of Gaussians, the goal is to output a function that
estimates the density function of the mixture. Under the (plausible and widely
believed) exponential hardness of the classical LWE problem, we show that
Gaussian mixture density estimation in $\mathbb{R}^n$ with roughly $\log n$
Gaussian components given $\mathsf{poly}(n)$ samples requires time
quasi-polynomial in $n$. Under the (conservative) polynomial hardness of LWE,
we show hardness of density estimation for $n^{\epsilon}$ Gaussians for any
constant $\epsilon > 0$, which improves on Bruna, Regev, Song and Tang (STOC
2021), who show hardness for at least $\sqrt{n}$ Gaussians under polynomial
(quantum) hardness assumptions.
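Concretely, the density estimation task is: given i.i.d. samples from an unknown Gaussian mixture, output an efficiently evaluable function approximating the mixture density. The sketch below shows only this input/output interface, using a toy two-component mixture and a naive kernel density estimator as a stand-in learner; neither is the hard instance or algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mixture(means, covs, weights, m):
    """Draw m i.i.d. samples from a Gaussian mixture in R^n."""
    n = means.shape[1]
    comps = rng.choice(len(weights), size=m, p=weights)
    out = np.empty((m, n))
    for i, c in enumerate(comps):
        out[i] = rng.multivariate_normal(means[c], covs[c])
    return out

def kde(samples, bandwidth=0.5):
    """Return a density estimate f_hat: R^n -> R_{>=0} (a naive baseline learner)."""
    m, n = samples.shape
    norm = m * (2 * np.pi * bandwidth**2) ** (n / 2)
    def f_hat(x):
        d2 = np.sum((samples - x) ** 2, axis=1)
        return float(np.exp(-d2 / (2 * bandwidth**2)).sum() / norm)
    return f_hat

# Toy instance in R^2; the hardness result concerns mixtures with roughly
# log n (or n^eps) components in R^n given poly(n) samples.
means = np.array([[0.0, 0.0], [3.0, 3.0]])
covs = np.array([np.eye(2), np.eye(2)])
X = sample_mixture(means, covs, np.array([0.5, 0.5]), m=1000)
f_hat = kde(X)
print(f_hat(np.zeros(2)))  # estimated density at the origin
```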
Our key technical tool is a reduction from classical LWE to LWE with
$k$-sparse secrets where the multiplicative increase in the noise is only
$O(\sqrt{k})$, independent of the ambient dimension $n$.
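In a $k$-sparse-secret LWE instance, the secret is supported on only $k$ of the $n$ coordinates; the reduction shows that standard LWE maps to this setting with only an $O(\sqrt{k})$ blow-up in the noise. The sketch below only illustrates what such an instance looks like and does not implement the reduction; the parameter values and ternary nonzero entries are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def k_sparse_lwe_samples(n=256, k=16, m=512, q=3329, sigma=3.2):
    """LWE samples (A, b = A s + e mod q) whose secret s is nonzero on
    exactly k of its n coordinates. The paper's reduction maps classical
    LWE to this distribution with the noise growing only by O(sqrt(k)),
    independent of the ambient dimension n."""
    support = rng.choice(n, size=k, replace=False)
    s = np.zeros(n, dtype=int)
    s[support] = rng.choice([-1, 1], size=k)  # illustrative small nonzero entries
    A = rng.integers(0, q, size=(m, n))
    e = np.rint(rng.normal(0, sigma, size=m)).astype(int)
    return A, (A @ s + e) % q
```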
Related papers
- Convergence Rate Analysis of LION [54.28350823319057]
LION converges to Karush-Kuhn-Tucker (KKT) points at a rate of $\mathcal{O}(\sqrt{d}\,K^{-1/4})$ over $K$ iterations.
We show that LION can achieve lower loss and higher performance compared to standard SGD.
arXiv Detail & Related papers (2024-11-12T11:30:53Z) - Learning general Gaussian mixtures with efficient score matching [16.06356123715737]
We study the problem of learning mixtures of $k$ Gaussians in $d$ dimensions.
We make no separation assumptions on the underlying mixture components.
We give an algorithm that draws $d^{\mathrm{poly}(k/\varepsilon)}$ samples from the target mixture, runs in time polynomial in the number of samples, and constructs a sampler for the mixture.
arXiv Detail & Related papers (2024-04-29T17:30:36Z) - Tempered Calculus for ML: Application to Hyperbolic Model Embedding [70.61101116794549]
Most mathematical distortions used in ML are fundamentally integral in nature.
In this paper, we unveil a grounded theory and tools which can help improve these distortions to better cope with ML requirements.
We show how to apply it to a problem that has recently gained traction in ML: hyperbolic embeddings with a "cheap" and accurate encoding along the hyperbolic vs Euclidean scale.
arXiv Detail & Related papers (2024-02-06T17:21:06Z) - LWE with Quantum Amplitudes: Algorithm, Hardness, and Oblivious Sampling [6.219791262322396]
We show new algorithms, hardness results and applications for $\mathsf{S|LWE\rangle}$ and $\mathsf{C|LWE\rangle}$ with real Gaussian amplitudes, Gaussian amplitudes with linear or quadratic phase terms, and other related amplitudes.
arXiv Detail & Related papers (2023-10-01T11:53:24Z) - Mixtures of Gaussians are Privately Learnable with a Polynomial Number of Samples [9.649879910148854]
We study the problem of estimating mixtures of Gaussians under the constraint of differential privacy (DP).
Our main result is that $\mathrm{poly}(k, d, 1/\alpha, 1/\varepsilon, \log(1/\delta))$ samples are sufficient to estimate a mixture of $k$ Gaussians in $\mathbb{R}^d$ up to total variation distance $\alpha$.
This is the first finite sample complexity upper bound for the problem that does not make any structural assumptions on the GMMs.
arXiv Detail & Related papers (2023-09-07T17:02:32Z) - Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces
and ReLU Regression under Gaussian Marginals [43.0867217287089]
We study the task of agnostically learning halfspaces under the Gaussian distribution.
We prove a near-optimal computational hardness result for this task.
arXiv Detail & Related papers (2023-02-13T16:46:23Z) - Robustly Learning Mixtures of $k$ Arbitrary Gaussians [47.40835932474677]
We give a polynomial-time algorithm for the problem of robustly estimating a mixture of $k$ arbitrary Gaussians in $\mathbb{R}^d$, for any fixed $k$, in the presence of a constant fraction of arbitrary corruptions.
Our main tools are an efficient partial clustering algorithm that relies on the sum-of-squares method, and a novel tensor decomposition algorithm that allows errors in both Frobenius norm and low-rank terms.
arXiv Detail & Related papers (2020-12-03T17:54:03Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression, where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is L4-L2 hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is sub-Gaussian.
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Continuous LWE [32.345218864470446]
We introduce a continuous analogue of the Learning with Errors (LWE) problem, which we name CLWE.
We give a quantum reduction from worst-case lattice problems to CLWE, showing that CLWE enjoys similar hardness guarantees to those of LWE.
arXiv Detail & Related papers (2020-05-19T17:16:12Z) - Robustly Learning any Clusterable Mixture of Gaussians [55.41573600814391]
We study the efficient learnability of high-dimensional Gaussian mixtures in the adversarial-robust setting.
We provide an algorithm that learns the components of an $\epsilon$-corrupted $k$-mixture to within information-theoretically near-optimal error of $\tilde{O}(\epsilon)$.
Our main technical contribution is a new robust identifiability proof of clusters from a Gaussian mixture, which can be captured by the constant-degree Sum of Squares proof system.
arXiv Detail & Related papers (2020-05-13T16:44:12Z) - Randomized Exploration in Generalized Linear Bandits [56.05007606177762]
We study two randomized algorithms for generalized linear bandits.
The first, GLM-TSL, samples a generalized linear model (GLM) from the Laplace approximation to the posterior distribution.
The second, GLM-FPL, fits a GLM to a randomly perturbed history of past rewards.
arXiv Detail & Related papers (2019-06-21T04:57:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.