Optimal Low-Degree Hardness of Maximum Independent Set
- URL: http://arxiv.org/abs/2010.06563v2
- Date: Thu, 12 Nov 2020 16:44:49 GMT
- Title: Optimal Low-Degree Hardness of Maximum Independent Set
- Authors: Alexander S. Wein
- Abstract summary: We study the algorithmic task of finding a large independent set in a sparse Erd\H{o}s-R\'enyi random graph.
We show that the class of low-degree algorithms can find independent sets of half-optimal size but no larger.
- Score: 93.59919600451487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the algorithmic task of finding a large independent set in a sparse
Erd\H{o}s-R\'{e}nyi random graph with $n$ vertices and average degree $d$. The
maximum independent set is known to have size $(2 \log d / d)n$ in the double
limit $n \to \infty$ followed by $d \to \infty$, but the best known
polynomial-time algorithms can only find an independent set of half-optimal
size $(\log d / d)n$. We show that the class of low-degree polynomial
algorithms can find independent sets of half-optimal size but no larger,
improving upon a result of Gamarnik, Jagannath, and the author. This
generalizes earlier work by Rahman and Vir\'ag, which proved the analogous
result for the weaker class of local algorithms.
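The half-optimal $(\log d / d)n$ density mentioned in the abstract is already attained by a simple greedy scan. As a hedged illustration (this is a classical baseline, not the low-degree polynomial algorithms the paper analyzes; all function names here are illustrative), a minimal sketch:

```python
import random

def erdos_renyi(n, d, seed=0):
    """Sample G(n, d/n): each of the C(n,2) possible edges appears
    independently with probability d/n, giving average degree ~d."""
    rng = random.Random(seed)
    p = d / n
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def greedy_independent_set(adj):
    """Scan vertices in a fixed order, keeping any vertex with no
    previously kept neighbor; on G(n, d/n) this heuristic reaches
    roughly the half-optimal (log d / d) n density."""
    kept = set()
    for v in range(len(adj)):
        if adj[v].isdisjoint(kept):
            kept.add(v)
    return kept

adj = erdos_renyi(1000, 8, seed=1)
S = greedy_independent_set(adj)
```

The paper's point is that no low-degree polynomial algorithm beats this density threshold, even though the true maximum is twice as large.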
Related papers
- A Near-Linear Time Approximation Algorithm for Beyond-Worst-Case Graph Clustering [18.29151197560866]
We consider the semi-random graph model of [Makarychev, Makarychev and Vijayaraghavan, STOC'12].
A polynomial-time algorithm is known to approximate the Balanced Cut problem up to value $O(\alpha)$ [MMV'12] as long as the cut $(A, B)$ has size $\Omega(\alpha)$.
We study the fine-grained complexity of the problem and present the first near-linear time subroutine that achieves similar performances to that of [MMV'12].
arXiv Detail & Related papers (2024-06-07T11:40:54Z) - The Low-Degree Hardness of Finding Large Independent Sets in Sparse Random Hypergraphs [0.0]
We show that the class of low-degree algorithms can find independent sets of density $\left(\frac{\log d}{(r-1)d}\right)^{1/(r-1)}$ but no larger.
While the graph case has been extensively studied, this work is the first to consider statistical-computational gaps of optimization problems on random hypergraphs.
arXiv Detail & Related papers (2024-04-05T00:25:00Z) - Do you know what q-means? [50.045011844765185]
Clustering is one of the most important tools for analysis of large datasets.
We present an improved version of the "$q$-means" algorithm for clustering.
We also present a "dequantized" algorithm for $\varepsilon$-$k$-means which runs in $O\big(\frac{k^2}{\varepsilon^2}(\sqrt{k}d + \log(Nd))\big)$ time.
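For context, $q$-means is a quantum-flavored accelerated variant of classical $k$-means. A minimal sketch of the classical Lloyd's iteration it speeds up (a generic baseline, not the paper's algorithm; the function name is illustrative):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's iteration: assign each point to its nearest
    centroid, then move each centroid to the mean of its points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # random initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        # empty clusters keep their old centroid
        centroids = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl
                     else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids
```

Each iteration costs $O(Nkd)$ for $N$ points in $d$ dimensions, which is the dependence the quantum and dequantized variants aim to improve.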
arXiv Detail & Related papers (2023-08-18T17:52:12Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon > 0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning
Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than its closest $t$-piece degree-$d$ approximation.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - Finding the KT partition of a weighted graph in near-linear time [1.572727650614088]
Kawarabayashi and Thorup gave a near-linear time deterministic algorithm for minimum cut in a simple graph $G = (V,E)$.
We give a near-linear time randomized algorithm to find the $(1+\varepsilon)$-KT partition of a weighted graph.
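As background on randomized minimum cut, a minimal sketch of Karger's classical edge-contraction algorithm (a standard baseline, not the deterministic Kawarabayashi-Thorup method or the KT-partition algorithm of this paper; names are illustrative):

```python
import random

def karger_cut(edges, n, seed=0):
    """One run of Karger's random contraction on an n-vertex multigraph.
    Returns the size of the cut separating the final two super-vertices;
    each run finds a minimum cut with probability >= 2/(n(n-1))."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    comps = n
    while comps > 2:
        u, v = edges[rng.randrange(len(edges))]  # uniformly random edge
        ru, rv = find(u), find(v)
        if ru != rv:  # contract only edges crossing two super-vertices
            parent[ru] = rv
            comps -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

def min_cut(edges, n, trials=50):
    """Boost the per-run success probability by independent repetition."""
    return min(karger_cut(edges, n, seed=s) for s in range(trials))
```

For example, on two triangles joined by a single bridge edge, repeated runs recover the minimum cut of 1 with high probability.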
arXiv Detail & Related papers (2021-11-02T05:26:10Z) - Small Covers for Near-Zero Sets of Polynomials and Learning Latent
Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z) - Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of $M$ such that $\det(\sum_{i \in S} v_i v_i^\top)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
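For the special case of a uniform matroid (pick $|S| = d$ vectors in $\mathbb{R}^d$), a standard greedy volume heuristic gives the flavor of the objective; this is a generic baseline, not the paper's algorithm, and the function name is illustrative:

```python
import math

def greedy_max_det_basis(vectors, d):
    """Greedy volume heuristic: repeatedly pick the vector whose
    component orthogonal to the span of the chosen vectors is largest.
    For the chosen set S, the product of these squared residual norms
    equals det(sum_{i in S} v_i v_i^T) (the squared volume)."""
    chosen = []  # indices picked so far
    basis = []   # orthonormal basis of span of chosen vectors
    for _ in range(d):
        best, best_norm, best_res = None, 0.0, None
        for i, v in enumerate(vectors):
            if i in chosen:
                continue
            r = list(v)
            for b in basis:  # Gram-Schmidt: strip components along basis
                dot = sum(x * y for x, y in zip(r, b))
                r = [x - dot * y for x, y in zip(r, b)]
            nrm = math.sqrt(sum(x * x for x in r))
            if nrm > best_norm:
                best, best_norm, best_res = i, nrm, r
        if best is None:
            break  # remaining vectors lie in the current span
        chosen.append(best)
        basis.append([x / best_norm for x in best_res])
    return chosen
```

For instance, among $(1,0)$, $(0.9,0.1)$, and $(0,1)$ the greedy picks the two nearly orthogonal vectors, since the near-duplicate adds almost no volume.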
arXiv Detail & Related papers (2020-04-16T19:16:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.