Polynomial-time sparse measure recovery
- URL: http://arxiv.org/abs/2204.07879v1
- Date: Sat, 16 Apr 2022 22:12:55 GMT
- Title: Polynomial-time sparse measure recovery
- Authors: Hadi Daneshmand and Francis Bach
- Abstract summary: We propose the first poly-time recovery method from carefully designed moments.
This method relies on the recovery of a planted two-layer neural network with two-dimensional inputs, a finite width, and zero-one activation.
- Score: 2.0951285993543642
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: How to recover a probability measure with sparse support from particular
moments? This problem has been the focus of research in theoretical computer
science and neural computing. However, no polynomial-time algorithm is known
for this recovery problem: the best existing algorithm requires
$O(2^{\text{poly}(1/\epsilon)})$ computations for $\epsilon$-accurate recovery. We propose
the first poly-time recovery method from carefully designed moments that only
requires $O(\log(1/\epsilon)/\epsilon^2)$ computations for an
$\epsilon$-accurate recovery. This method relies on the recovery of a planted
two-layer neural network with two-dimensional inputs, a finite width, and
zero-one activation. For such networks, we establish the first global
convergence of gradient descent and demonstrate its application in sparse
measure recovery.
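To make the setting concrete, here is a minimal, hedged sketch of moment-based sparse measure recovery: gradient descent fits the atom locations and weights of a sparse measure on the circle to a few trigonometric moments. The trigonometric moment family, the known width, and all hyperparameters are illustrative assumptions, not the paper's carefully designed moments or its convergence analysis.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): recover a sparse measure
# sum_j w_j * delta(theta_j) on the circle from its first K trigonometric
# moments m_k = sum_j w_j * exp(1j*k*theta_j), via gradient descent on a
# squared moment-matching loss. The moment family, the width, and all
# hyperparameters below are assumptions made for illustration.

rng = np.random.default_rng(0)
K = 8                                    # number of moments used
true_theta = np.array([0.5, 2.0, 4.0])   # planted atom locations
true_w = np.array([0.2, 0.5, 0.3])       # planted weights
ks = np.arange(1, K + 1)
target = (true_w * np.exp(1j * np.outer(ks, true_theta))).sum(axis=1)

def loss_and_grads(theta, w):
    e = np.exp(1j * np.outer(ks, theta))      # e[k, j] = exp(1j*k*theta_j)
    r = (w * e).sum(axis=1) - target          # complex moment residuals
    loss = np.sum(np.abs(r) ** 2)
    g_theta = 2 * np.real(np.conj(r) @ (1j * ks[:, None] * e)) * w
    g_w = 2 * np.real(np.conj(r) @ e)
    return loss, g_theta, g_w

theta = rng.uniform(0, 2 * np.pi, size=3)     # random initialization
w = np.full(3, 1 / 3)
for _ in range(5000):
    loss, g_theta, g_w = loss_and_grads(theta, w)
    theta -= 0.05 * g_theta
    w -= 0.05 * g_w
print(f"loss {loss:.2e}, atoms {np.sort(theta % (2 * np.pi))}, weights {w}")
```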
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem where the task is to recover an unknown signal from magnitude-only measurements corrupted by outliers.
Our proposed approach avoids the need for a computationally expensive spectral initialization, using a simple gradient descent step that is robust to outliers.
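For intuition only, here is a generic alternating-minimization sketch for real phase retrieval without outliers; the paper's robust, sample-efficient oracle is not reproduced, and the model, dimensions, and random initialization below are assumptions.

```python
import numpy as np

# Minimal alternating-minimization sketch for (real) phase retrieval:
# recover x from y_i = |<a_i, x>|. Alternate between estimating the signs
# s_i = sign(<a_i, x>) and solving the resulting least-squares problem.
# This is a generic textbook scheme, not the paper's robust oracle.

rng = np.random.default_rng(1)
n, m = 20, 200
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
y = np.abs(A @ x_true)                 # magnitude-only measurements

x = rng.normal(size=n)                 # random init; may need a good start
for _ in range(50):
    s = np.sign(A @ x)                 # phase (sign) estimation step
    x, *_ = np.linalg.lstsq(A, s * y, rcond=None)  # least-squares step

# success is only defined up to a global sign flip
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print(f"relative error: {err / np.linalg.norm(x_true):.2e}")
```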
arXiv Detail & Related papers (2024-09-07T06:37:23Z)
- Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z)
- Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes [70.52097560486683]
We prove that the hardness of approximation of ReLU networks not only mirrors the complexity of the Max-Cut problem but also, in certain special cases, exactly corresponds to it.
In particular, when $\epsilon \leq \sqrt{84/83} - 1 \approx 0.006$, we show that it is NP-hard to find an approximate global optimum of the ReLU network objective with relative error $\epsilon$ with respect to the objective value.
arXiv Detail & Related papers (2023-11-18T04:41:07Z)
- Statistical-Computational Tradeoffs in Mixed Sparse Linear Regression [20.00109111254507]
We show that the problem suffers from a $\frac{k}{\mathrm{SNR}^2}$-to-$\frac{k^2}{\mathrm{SNR}^2}$ statistical-to-computational gap.
We also analyze a simple thresholding algorithm which, outside of the narrow regime where the problem is hard, solves the associated mixed regression detection problem.
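A toy version of such a thresholding detector is sketched below; the statistic, threshold, and parameters are illustrative assumptions rather than the paper's exact test.

```python
import numpy as np

# Toy sketch of a thresholding detector for mixed sparse linear regression:
# with y = s * <x, beta> + noise and hidden random signs s, the first
# moments E[x_j * y] vanish, but the second moments E[(x_j * y)^2] are
# inflated on the support of beta. Thresholding them detects the signal.
# The statistic, threshold, and parameters are illustrative assumptions.

rng = np.random.default_rng(2)
n, d, k, snr = 20000, 200, 5, 2.0
beta = np.zeros(d)
beta[:k] = snr / np.sqrt(k)
X = rng.normal(size=(n, d))
s = rng.choice([-1.0, 1.0], size=n)          # hidden mixture labels
y = s * (X @ beta) + rng.normal(size=n)

stat = ((X * y[:, None]) ** 2).mean(axis=0)  # per-coordinate 2nd moments
# off-support mean is 1 + ||beta||^2; on-support coordinates add 2*beta_j^2
tau = 1 + beta @ beta + beta @ beta / k      # midpoint threshold (uses SNR)
print("detected signal  :", bool(stat.max() > tau))
print("estimated support:", np.flatnonzero(stat > tau))
```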
arXiv Detail & Related papers (2023-03-03T18:03:49Z)
- Detection-Recovery Gap for Planted Dense Cycles [72.4451045270967]
We consider a model where a dense cycle with expected bandwidth $n\tau$ and edge density $p$ is planted in an Erdős–Rényi graph $G(n,q)$.
We characterize the computational thresholds for the associated detection and recovery problems for the class of low-degree algorithms.
arXiv Detail & Related papers (2023-02-13T22:51:07Z)
- Refined Regret for Adversarial MDPs with Linear Function Approximation [50.00022394876222]
We consider learning in an adversarial Markov Decision Process (MDP) where the loss functions can change arbitrarily over $K$ episodes.
This paper provides two algorithms that improve the regret to $\tilde{\mathcal{O}}(K^{2/3})$ in the same setting.
arXiv Detail & Related papers (2023-01-30T14:37:21Z)
- Improved Support Recovery in Universal One-bit Compressed Sensing [37.5349071806395]
One-bit compressed sensing (1bCS) is an extremely quantized signal acquisition method.
The focus of this paper is support recovery, which often also computationally facilitates approximate signal recovery.
We show that the first type of recovery is possible with $\tilde{O}(k/\epsilon)$ measurements, while the latter type of recovery, which is more challenging, is possible with $\tilde{O}(\max\{k/\epsilon, k^{3/2}\})$ measurements.
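To illustrate the measurement model, the sketch below recovers a support from one-bit sign measurements using a generic Gaussian back-projection estimator; the paper's universal scheme (one fixed matrix working for all sparse signals) is different, and all parameters here are assumptions.

```python
import numpy as np

# Sketch of support recovery from one-bit measurements y = sign(A x).
# For Gaussian A, the back-projection A^T y concentrates around a multiple
# of x, so its k largest coordinates estimate the support. This generic
# estimator only illustrates the problem setup, not the paper's scheme.

rng = np.random.default_rng(3)
n, k, m = 1000, 5, 400
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = 1.0                      # unit spikes for a clean illustration
A = rng.normal(size=(m, n))
y = np.sign(A @ x)                    # one-bit (sign) measurements

proxy = A.T @ y / m                   # back-projection estimator
support_hat = np.argsort(np.abs(proxy))[-k:]
print("true support     :", np.sort(support))
print("estimated support:", np.sort(support_hat))
```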
arXiv Detail & Related papers (2022-10-29T17:43:14Z)
- Semi-Random Sparse Recovery in Nearly-Linear Time [37.61139884826181]
We investigate the brittleness of fast sparse recovery algorithms to generative model changes.
Our approach differs from prior fast iterative methods with provable guarantees under semi-random generative models.
We design a new iterative method tailored to the geometry of sparse recovery which is provably robust to our semi-random model.
arXiv Detail & Related papers (2022-03-08T10:56:46Z)
- Finite-Time Analysis for Double Q-learning [50.50058000948908]
We provide the first non-asymptotic, finite-time analysis for double Q-learning.
We show that both synchronous and asynchronous double Q-learning are guaranteed to converge to an $\epsilon$-accurate neighborhood of the global optimum.
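A minimal tabular double Q-learning loop, with the two-table update that this analysis concerns, is sketched below; the toy environment and hyperparameters are assumptions.

```python
import numpy as np

# Minimal tabular double Q-learning sketch on a toy chain MDP. Two tables
# Q_A and Q_B are updated in alternation: one table selects the argmax
# action, the other evaluates it, which removes the overestimation bias of
# plain Q-learning. Environment and hyperparameters are illustrative.

rng = np.random.default_rng(4)
n_states, n_actions, gamma, alpha = 5, 2, 0.9, 0.1
QA = np.zeros((n_states, n_actions))
QB = np.zeros((n_states, n_actions))

def step(s, a):
    """Toy chain: action 1 moves right (reward 1 at the end), 0 resets."""
    if a == 1:
        s2 = min(s + 1, n_states - 1)
        return s2, float(s2 == n_states - 1)
    return 0, 0.0

s = 0
for _ in range(20000):
    # epsilon-greedy behavior policy over the averaged tables
    a = rng.integers(n_actions) if rng.random() < 0.1 else int((QA + QB)[s].argmax())
    s2, r = step(s, a)
    if rng.random() < 0.5:   # update A: A selects, B evaluates
        QA[s, a] += alpha * (r + gamma * QB[s2, QA[s2].argmax()] - QA[s, a])
    else:                    # update B: B selects, A evaluates
        QB[s, a] += alpha * (r + gamma * QA[s2, QB[s2].argmax()] - QB[s, a])
    s = s2 if s2 != n_states - 1 else 0
print("greedy actions per state:", (QA + QB).argmax(axis=1))
```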
arXiv Detail & Related papers (2020-09-29T18:48:21Z)
- Training (Overparametrized) Neural Networks in Near-Linear Time [21.616949485102342]
We show how to speed up the algorithm of [CGH+19] for training (mildly overparametrized) ReLU networks.
The centerpiece of our algorithm is to reformulate the Gauss-Newton iteration as an $\ell_2$-regression problem (sketched below).
arXiv Detail & Related papers (2020-06-20T20:26:14Z)
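The $\ell_2$-regression view referenced in the previous entry can be illustrated directly: each Gauss-Newton step for a squared loss is exactly a linear least-squares solve against the model's Jacobian. The toy model, data, and step rule below are assumptions; the paper's fast solvers and sketching techniques are not reproduced.

```python
import numpy as np

# Sketch: one Gauss-Newton step for minimizing ||f(w) - y||^2 is a linear
# least-squares (l2-regression) solve against the Jacobian J of f, since
# argmin_delta ||J delta - r||_2 gives the Gauss-Newton update direction.

rng = np.random.default_rng(5)

def f(w, X):
    """Toy nonlinear model: elementwise tanh of a linear map."""
    return np.tanh(X @ w)

def jacobian(w, X):
    return (1 - np.tanh(X @ w) ** 2)[:, None] * X   # df_i / dw_j

n, d = 100, 10
X = rng.normal(size=(n, d))
y = f(0.5 * rng.normal(size=d), X)      # realizable targets
w = np.zeros(d)
for _ in range(20):
    J = jacobian(w, X)
    r = y - f(w, X)
    delta, *_ = np.linalg.lstsq(J, r, rcond=None)   # l2-regression step
    w += delta
print("residual norm:", np.linalg.norm(y - f(w, X)))
```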
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content presented (including all information) and is not responsible for any consequences of its use.