Inapproximability of Sparsest Vector in a Real Subspace
- URL: http://arxiv.org/abs/2410.02636v2
- Date: Sun, 27 Oct 2024 07:27:33 GMT
- Title: Inapproximability of Sparsest Vector in a Real Subspace
- Authors: Vijay Bhattiprolu, Euiwoong Lee
- Abstract summary: We establish strong inapproximability for finding the sparsest nonzero vector in a real subspace.
We show that it is NP-Hard to approximate the sparsest vector in a subspace within any constant factor.
- Score: 1.9943009191774073
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We establish strong inapproximability for finding the sparsest nonzero vector in a real subspace. We show that it is NP-Hard (under randomized reductions) to approximate the sparsest vector in a subspace within any constant factor (or almost-polynomial factors in quasipolynomial time). As a corollary, we recover state-of-the-art inapproximability for the shortest vector problem (SVP), a foundational problem in lattice-based cryptography. Our proof is surprisingly simple, bypassing even the PCP theorem. We are inspired by the homogenization framework from the inapproximability theory of minimum distance problems (MDC) in integer lattices and error-correcting codes. We use a combination of (a) *product testing via tensor codes* and (b) *encoding an assignment as a coset of a random code in higher-dimensional space* in order to embed non-homogeneous quadratic equations into the sparsest vector problem. Step (a) is inspired by Austrin and Khot's simplified proof of hardness of MDC over finite fields, and step (b) is inspired by Micciancio's semi-derandomization of hardness of SVP. Our reduction involves the challenge of performing (a) over the reals. We prove that tensoring of the kernel of a +1/-1 random matrix furnishes an adequate product test (while still allowing (b)). The proof exposes a connection to Littlewood-Offord theory and relies on a powerful anticoncentration result of Rudelson and Vershynin. Our main motivation in this work is the development of an inapproximability theory for problems over the reals. Analytic variants of the sparsest vector problem have connections to small-set expansion, quantum separability, and polynomial maximization over convex sets, all of which face similar barriers to inapproximability. The approach we develop could lead to progress on the hardness of some of these problems.
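A minimal, purely illustrative Python sketch of the objects above (not the paper's reduction; all function names and parameters are our own): it builds the kernel of a random +1/-1 matrix, brute-forces the sparsest nonzero vector in that subspace, and shows how tensoring multiplies support sizes, the amplification mechanism behind "product testing via tensor codes".

```python
# Illustrative sketch only: the Sparsest Vector problem asks for a nonzero
# vector of minimum support (number of nonzero coordinates) in a subspace.
import itertools
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)

def random_sign_kernel(m, n):
    """Orthonormal basis (columns) for the kernel of an m x n +1/-1 matrix."""
    A = rng.choice([-1.0, 1.0], size=(m, n))
    return null_space(A)

def sparsest_support(B, tol=1e-9):
    """Minimum support of a nonzero vector in span(B), by exhaustive search
    over zero patterns (exponential time; the paper shows that even
    approximating this quantity is NP-Hard under randomized reductions)."""
    n, d = B.shape
    for k in range(1, n + 1):
        for S in itertools.combinations(range(n), k):
            off = [i for i in range(n) if i not in S]
            # A nonzero c with B[off] @ c = 0 gives x = B @ c supported in S
            # (x != 0 because B has orthonormal, hence independent, columns).
            if np.linalg.matrix_rank(B[off, :], tol) < d:
                return k
    return None

B = random_sign_kernel(m=3, n=9)          # generically a 6-dim subspace of R^9
print("sparsest support:", sparsest_support(B))

# Tensoring multiplies sparsity: for x in V and y in W, kron(x, y) lies in the
# tensor subspace V (x) W and |supp(kron(x, y))| = |supp(x)| * |supp(y)|,
# which is the gap-amplification mechanism behind tensor codes.
x = B @ rng.standard_normal(B.shape[1])
y = B @ rng.standard_normal(B.shape[1])
print("support of kron:", np.count_nonzero(np.abs(np.kron(x, y)) > 1e-12))
```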
Related papers
- The Sample Complexity of Smooth Boosting and the Tightness of the Hardcore Theorem [53.446980306786095]
Smooth boosters generate distributions that do not place too much weight on any given example.
Originally introduced for their noise-tolerant properties, such boosters have also found applications in differential privacy and quantum learning theory.
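The smoothness constraint above is concrete enough to code: a distribution over $n$ examples is $k$-smooth if no example carries more than $k/n$ of the mass. A minimal sketch, assuming a standard truncate-and-scale step (illustrative, not this paper's construction):

```python
import numpy as np

def smooth_distribution(w, cap):
    """Truncate-and-scale: return p with p_i = min(cap, lam * w_i) summing
    to 1, with lam found by bisection. Requires w > 0 and cap * n >= 1."""
    w = np.asarray(w, dtype=float)
    assert np.all(w > 0) and cap * len(w) >= 1.0
    lo, hi = 0.0, cap / w.min()      # at hi every coordinate is capped
    for _ in range(100):             # bisection on the monotone total mass
        lam = 0.5 * (lo + hi)
        mass = np.minimum(cap, lam * w).sum()
        lo, hi = (lam, hi) if mass < 1.0 else (lo, lam)
    return np.minimum(cap, 0.5 * (lo + hi) * w)

# Boosting weights that would otherwise over-concentrate on one hard example:
margins = np.array([-5.0, 0.5, 1.0, 1.5, 2.0])
raw = np.exp(-margins)                             # AdaBoost-style weights
p = smooth_distribution(raw, cap=2.0 / len(raw))   # a 2-smooth distribution
print(p, p.sum())                  # no example exceeds 2/n of the mass
```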
arXiv Detail & Related papers (2024-09-17T23:09:25Z)
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, tensor cumulants, which provide an explicit, near-orthogonal basis for invariants of a given degree.
This basis also lets us analyze a new problem of distinguishing between different ensembles.
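For background, the planted (spiked) tensor model underlying such hardness-threshold statements is short to write down; a hedged numpy sketch using a simple unfolding statistic (standard background material, not the paper's tensor-cumulant basis):

```python
import numpy as np
rng = np.random.default_rng(7)

def spiked_tensor(n, lam):
    """Order-3 spiked tensor: T = lam * v (x) v (x) v + Gaussian noise."""
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    noise = rng.standard_normal((n, n, n)) / np.sqrt(n)
    return lam * np.einsum('i,j,k->ijk', v, v, v) + noise, v

# A crude spectral statistic: top singular value of a matrix unfolding.
# Distinguishing the planted ensemble from pure noise is believed to be
# computationally hard below a critical signal magnitude.
n = 40
for lam in (0.0, 20.0):
    T, _ = spiked_tensor(n, lam)
    s = np.linalg.svd(T.reshape(n, n * n), compute_uv=False)[0]
    print(f"lam={lam}: top singular value of unfolding = {s:.2f}")
```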
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- Improved Hardness Results for Learning Intersections of Halfspaces [2.1393480341554736]
We show strong (and surprisingly simple) lower bounds for weakly learning intersections of halfspaces in the improper setting.
We significantly narrow this gap by showing that even learning $\omega(\log \log N)$ halfspaces in dimension $N$ takes super-polynomial time.
Specifically, we show that weakly learning intersections of $k$ halfspaces in dimension $N$ requires accuracy $N^{-\Omega(k)}$ or exponentially many queries.
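For reference, the concept class in question, labeling by an intersection of $k$ halfspaces, takes only a few lines of code (a data-generating illustration only; the hardness above is about learners):

```python
import numpy as np
rng = np.random.default_rng(3)

def intersection_of_halfspaces(W, b):
    """f(x) = 1 iff x lies in ALL k halfspaces  w_i . x >= b_i."""
    return lambda X: np.all(X @ W.T >= b, axis=1).astype(int)

N, k = 10, 4                        # dimension and number of halfspaces
W = rng.standard_normal((k, N))     # random halfspaces through the origin
b = np.zeros(k)
f = intersection_of_halfspaces(W, b)
X = rng.standard_normal((1000, N))
y = f(X)                            # positives thin out as k grows, ~2^-k here
print("positive fraction:", y.mean())
```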
arXiv Detail & Related papers (2024-02-25T05:26:35Z)
- SKI to go Faster: Accelerating Toeplitz Neural Networks via Asymmetric Kernels [69.47358238222586]
Toeplitz Neural Networks (TNNs) are a recent sequence model with impressive results.
We aim to reduce the O(n) computational complexity and the O(n) calls to the relative positional encoder (RPE) multi-layer perceptron (MLP) and the decay bias.
For bidirectional models, this motivates a sparse plus low-rank Toeplitz matrix decomposition.
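The sparse-plus-low-rank idea can be sketched generically: keep a narrow band of the Toeplitz matrix exactly and compress the off-band remainder with a truncated SVD. A minimal sketch with illustrative bandwidth and rank parameters (not the paper's SKI-based algorithm):

```python
import numpy as np
from scipy.linalg import toeplitz

def sparse_plus_lowrank(T, bandwidth, rank):
    """Split T into a banded (sparse) part plus a rank-`rank` truncated-SVD
    approximation of the off-band remainder. Parameters are illustrative."""
    i, j = np.indices(T.shape)
    band = np.where(np.abs(i - j) <= bandwidth, T, 0.0)  # exact near diagonal
    U, s, Vt = np.linalg.svd(T - band)
    lowrank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return band, lowrank

# A Toeplitz matrix with smoothly decaying off-diagonal entries (decay bias).
n = 128
c = np.exp(-0.1 * np.arange(n)) * np.cos(0.3 * np.arange(n))
T = toeplitz(c)
band, lowrank = sparse_plus_lowrank(T, bandwidth=8, rank=10)
err = np.linalg.norm(T - band - lowrank) / np.linalg.norm(T)
print(f"relative Frobenius error: {err:.2e}")
```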
arXiv Detail & Related papers (2023-05-15T21:25:35Z)
- Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell_2$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z)
- Minimizing low-rank models of high-order tensors: Hardness, span, tight relaxation, and applications [26.602371644194143]
We show that this fundamental tensor minimization problem is NP-hard for any tensor rank higher than one.
For low-enough ranks, the proposed continuous reformulation is amenable to low-complexity gradient-based optimization.
We demonstrate promising results on a number of hard problems, including decoding low-density parity-check codes and general parity-check codes.
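One way to see the continuous reformulation for a rank-$R$ CPD model: relax the discrete choice of a tensor entry to probability vectors over each mode and run projected gradient descent. A sketch under assumed notation ($A$, $B$, $C$ are the CPD factors; the exact relaxation in the paper may differ):

```python
import numpy as np
rng = np.random.default_rng(2)

def project_simplex(v):
    """Euclidean projection onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u > css / (np.arange(len(v)) + 1))[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def relaxed_min(A, B, C, steps=500, lr=0.5):
    """Minimize f(x,y,z) = sum_r (A^T x)_r (B^T y)_r (C^T z)_r over a product
    of simplices; at vertices f equals a tensor entry T[i,j,k]."""
    x, y, z = (project_simplex(rng.random(M.shape[0])) for M in (A, B, C))
    for _ in range(steps):
        a, b, c = A.T @ x, B.T @ y, C.T @ z                  # per-rank terms
        gx, gy, gz = A @ (b * c), B @ (a * c), C @ (a * b)   # partial grads
        x = project_simplex(x - lr * gx)
        y = project_simplex(y - lr * gy)
        z = project_simplex(z - lr * gz)
    return float((A.T @ x * (B.T @ y) * (C.T @ z)).sum())

n, R = 15, 2                                   # small rank, per the summary
A, B, C = (rng.standard_normal((n, R)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)        # the full rank-R tensor
print("relaxed value:", relaxed_min(A, B, C), " true min entry:", T.min())
```

Since the relaxed objective is multilinear, its minimum over the simplices is attained at a vertex, so the relaxation is tight; consistent with the NP-hardness above, projected gradient descent is only a heuristic for reaching that global minimum.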
arXiv Detail & Related papers (2022-10-16T11:53:42Z)
- Hardness of Agnostically Learning Halfspaces from Worst-Case Lattice Problems [0.49495085874952904]
In particular, we show that under this assumption there is no efficient algorithm that outputs any binary hypothesis, not necessarily a halfspace, achieving a nontrivial agnostic-learning guarantee for halfspaces.
It is inspired by a sequence of recent works showing hardness of learning well-separated Gaussian mixtures based on worst-case lattice problems.
arXiv Detail & Related papers (2022-07-28T11:44:39Z)
- Optimal Gradient-based Algorithms for Non-concave Bandit Optimization [76.57464214864756]
This work considers a large family of bandit problems where the unknown underlying reward function is non-concave.
Our algorithms are based on a unified zeroth-order optimization paradigm that applies in great generality.
We show that the standard optimistic algorithms are sub-optimal by dimension factors.
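The zeroth-order primitive behind such bandit algorithms is the two-point gradient estimator; a minimal sketch of the standard technique (not the paper's full algorithm):

```python
import numpy as np
rng = np.random.default_rng(11)

def zo_gradient(f, x, mu=1e-4, num_dirs=32):
    """Two-point zeroth-order gradient estimate: average of
    (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random directions u."""
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_dirs

# Gradient-free descent on a smooth non-convex test function.
f = lambda x: np.sum(x ** 4) - np.sum(x ** 2)
x = rng.standard_normal(5)
for _ in range(200):
    x -= 0.01 * zo_gradient(f, x)
print("final point:", np.round(x, 3), " f(x) =", f(x))
```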
arXiv Detail & Related papers (2021-07-09T16:04:24Z)
- Minimal length in an orbit closure as a semiclassical limit [1.6312226592634047]
Invariant theory states that the orbit of a vector $v$ is separated from the origin if and only if some homogeneous invariant is nonzero on $v$.
This connection to optimization has led to efficient algorithms for many problems in invariant theory.
We provide a new and independent elementary proof inspired by the Fourier-analytic proof of the local central limit theorem.
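The fact being invoked is the classical null-cone criterion of geometric invariant theory; in standard notation, for a reductive group $G$ acting linearly on a vector space $V$:

```latex
% Null-cone criterion (Hilbert; Kempf--Ness over the reals/complexes):
% for a reductive group G acting linearly on V and a vector v in V,
\[
  0 \notin \overline{G \cdot v}
  \quad\Longleftrightarrow\quad
  p(v) \neq 0 \ \text{for some non-constant homogeneous invariant }
  p \in \mathbb{C}[V]^{G}.
\]
```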
arXiv Detail & Related papers (2020-04-30T15:19:07Z)
- Hardness of Random Optimization Problems for Boolean Circuits, Low-Degree Polynomials, and Langevin Dynamics [78.46689176407936]
We show that families of algorithms fail to produce nearly optimal solutions with high probability.
For the case of Boolean circuits, our results improve the state-of-the-art bounds known in circuit complexity theory.
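As background on the Langevin part of the result, discretized Langevin dynamics on a random spherical Hamiltonian is a short simulation; a generic sketch of the algorithm family being analyzed (a 2-spin model shown for brevity, while the hard instances in the paper involve higher-order p-spin interactions):

```python
import numpy as np
rng = np.random.default_rng(9)

def langevin_on_sphere(J, beta=2.0, dt=1e-3, steps=5000):
    """Euler--Maruyama Langevin dynamics for H(x) = -x.Jx / n, renormalized
    to the sphere of radius sqrt(n) after every step."""
    n = J.shape[0]
    x = rng.standard_normal(n)
    x *= np.sqrt(n) / np.linalg.norm(x)
    for _ in range(steps):
        grad_H = -(J + J.T) @ x / n          # gradient of H at x
        x = x - dt * grad_H + np.sqrt(2.0 * dt / beta) * rng.standard_normal(n)
        x *= np.sqrt(n) / np.linalg.norm(x)  # project back to the sphere
    return x

n = 200
J = rng.standard_normal((n, n)) / np.sqrt(n)     # random coupling matrix
x = langevin_on_sphere(J)
# The spectral optimum of x.Jx/n over the sphere is lambda_max of the
# symmetrized couplings, about sqrt(2) under this GOE-like scaling.
print("energy per site:", float(x @ J @ x) / n)
```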
arXiv Detail & Related papers (2020-04-25T05:45:59Z)