Detection-Recovery Gap for Planted Dense Cycles
- URL: http://arxiv.org/abs/2302.06737v2
- Date: Wed, 21 Jun 2023 03:39:17 GMT
- Title: Detection-Recovery Gap for Planted Dense Cycles
- Authors: Cheng Mao, Alexander S. Wein, Shenduo Zhang
- Abstract summary: We consider a model where a dense cycle with expected bandwidth $n \tau$ and edge density $p$ is planted in an Erd\H{o}s-R\'enyi graph $G(n,q)$.
We characterize the computational thresholds for the associated detection and recovery problems for the class of low-degree algorithms.
- Score: 72.4451045270967
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Planted dense cycles are a type of latent structure that appears in many
applications, such as small-world networks in social sciences and sequence
assembly in computational biology. We consider a model where a dense cycle with
expected bandwidth $n \tau$ and edge density $p$ is planted in an
Erd\H{o}s-R\'enyi graph $G(n,q)$. We characterize the computational thresholds
for the associated detection and recovery problems for the class of low-degree
polynomial algorithms. In particular, a gap exists between the two thresholds
in a certain regime of parameters. For example, if $n^{-3/4} \ll \tau \ll
n^{-1/2}$ and $p = C q = \Theta(1)$ for a constant $C>1$, the detection problem
is computationally easy while the recovery problem is hard for low-degree
algorithms.
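To make the model concrete, below is a minimal Python sketch of one natural instantiation (the paper's formal definition, e.g. how the bandwidth is randomized, may differ in details): each vertex gets a latent position on a cycle, pairs within cyclic distance $n\tau/2$ are connected with probability $p$, and all other pairs with probability $q$.

```python
import numpy as np

def sample_planted_dense_cycle(n, tau, p, q, rng=None):
    """Sample an adjacency matrix with a dense cycle of expected bandwidth
    ~ n*tau planted in G(n, q).  One natural instantiation of the model in
    the abstract, not necessarily the paper's exact definition."""
    rng = np.random.default_rng(rng)
    pos = rng.permutation(n)                    # latent cycle position of each vertex
    diff = np.abs(pos[:, None] - pos[None, :])
    dist = np.minimum(diff, n - diff)           # cyclic distance between pairs
    probs = np.where(dist <= n * tau / 2, p, q) # dense band vs. background density
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    A = (upper | upper.T).astype(np.uint8)      # symmetric, no self-loops
    return A, pos

def sample_er(n, q, rng=None):
    """Null model for the detection problem: Erdos-Renyi G(n, q)."""
    rng = np.random.default_rng(rng)
    upper = np.triu(rng.random((n, n)) < q, k=1)
    return (upper | upper.T).astype(np.uint8)
```

In this notation, detection asks whether a given graph came from `sample_planted_dense_cycle` or `sample_er`, while recovery asks to estimate the hidden band (equivalently, the latent positions `pos`) from the graph alone.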
Related papers
- Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit [75.4661041626338]
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \sigma_*(\langle \boldsymbol{x}, \boldsymbol{\theta} \rangle)$ under isotropic Gaussian data.
We prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ of arbitrary link function with a sample and runtime complexity of $n \asymp T \asymp C(q) \cdot d$...
arXiv Detail & Related papers (2024-06-03T17:56:58Z)
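As a companion to the entry above, here is a minimal sketch of online SGD on a two-layer ReLU network fitting a single-index target under isotropic Gaussian data. The link function, network size, and plain one-pass SGD are illustrative assumptions; the paper's specific algorithm and rates are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, width, steps, lr = 64, 256, 50_000, 0.01

# Hidden index direction and an example polynomial link (assumptions).
theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)
sigma_star = lambda z: z ** 2 - 1                  # Hermite-2 link, mean zero

W = rng.standard_normal((width, d)) / np.sqrt(d)   # first-layer weights
b = np.zeros(width)                                # first-layer biases
a = rng.standard_normal(width) / np.sqrt(width)    # second-layer weights

for t in range(steps):
    x = rng.standard_normal(d)                     # fresh sample each step
    y = sigma_star(x @ theta)
    pre = W @ x + b
    h = np.maximum(pre, 0.0)                       # ReLU features
    err = (a @ h) - y                              # squared-loss residual
    grad_a = err * h
    grad_pre = err * a * (pre > 0)
    W -= lr * np.outer(grad_pre, x)
    b -= lr * grad_pre
    a -= lr * grad_a
    if (t + 1) % 10_000 == 0:
        # Crude progress check on a fresh validation batch.
        Xv = rng.standard_normal((512, d))
        Yv = sigma_star(Xv @ theta)
        pred = np.maximum(Xv @ W.T + b, 0.0) @ a
        print(t + 1, float(np.mean((pred - Yv) ** 2)))
```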
- Information-Theoretic Thresholds for Planted Dense Cycles [52.076657911275525]
We study a random graph model for small-world networks which are ubiquitous in social and biological sciences.
For both detection and recovery of the planted dense cycle, we characterize the information-theoretic thresholds in terms of $n$, $\tau$, and an edge-wise signal-to-noise ratio $\lambda$.
arXiv Detail & Related papers (2024-02-01T03:39:01Z)
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon > 0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- Statistical-Computational Tradeoffs in Mixed Sparse Linear Regression [20.00109111254507]
We show that the problem suffers from a $\frac{k}{\mathrm{SNR}^2}$-to-$\frac{k^2}{\mathrm{SNR}^2}$ statistical-to-computational gap.
We also analyze a simple thresholding algorithm which, outside of the narrow regime where the problem is hard, solves the associated mixed regression detection problem.
arXiv Detail & Related papers (2023-03-03T18:03:49Z)
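The entry above does not spell out its thresholding algorithm, so the following is a purely hypothetical illustration of a thresholding-style detector for sparse signals in regression data; the paper's actual statistic, calibration, and guarantees may be entirely different.

```python
import numpy as np

def correlation_threshold_detect(X, y, c=3.0):
    """Declare a signal if any coordinate's empirical correlation with y
    exceeds c * sqrt(log(d) / n), the scale of null fluctuations for
    standardized noise.  Hypothetical sketch, not the paper's test."""
    n, d = X.shape
    corr = X.T @ y / n                 # empirical correlation per coordinate
    return bool(np.any(np.abs(corr) > c * np.sqrt(np.log(d) / n)))
```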
- Inferring Hidden Structures in Random Graphs [13.031167737538881]
We study the two inference problems of detecting and recovering an isolated community of general structure planted in a random graph.
We derive lower bounds for detecting/recovering the structure $\Gamma_k$ in terms of the parameters $(n,k,q)$, as well as certain properties of $\Gamma_k$, and exhibit computationally optimal algorithms that achieve these lower bounds.
arXiv Detail & Related papers (2021-10-05T09:39:51Z)
- Complexity of zigzag sampling algorithm for strongly log-concave distributions [6.336005544376984]
We study the computational complexity of the zigzag sampling algorithm for strongly log-concave distributions.
We prove that the zigzag sampling algorithm achieves $\varepsilon$ error in chi-square divergence with a computational cost equivalent to $O(\cdots)$...
arXiv Detail & Related papers (2020-12-21T03:10:21Z)
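As context for the entry above, here is a minimal sketch of the zigzag sampler for the special case of a Gaussian target $\propto \exp(-x^\top A x / 2)$, where each coordinate's inhomogeneous switching rate is linear along the trajectory and the event times can be sampled exactly by inverting the integrated rate. This illustrates the algorithm being analyzed, not the paper's complexity bound.

```python
import numpy as np

def first_event_time(a, b, e):
    """First arrival of a Poisson process with rate max(0, a + b*t), given
    an Exp(1) draw e; returns np.inf if the total rate mass never reaches e."""
    if b > 0:
        if a < 0:
            return -a / b + np.sqrt(2 * e / b)   # rate is zero until t = -a/b
        return (-a + np.sqrt(a * a + 2 * b * e)) / b
    if b == 0:
        return e / a if a > 0 else np.inf
    # b < 0: rate vanishes at t = -a/b, so total mass is a^2 / (-2b).
    if a <= 0 or a * a + 2 * b * e < 0:
        return np.inf
    return (-a + np.sqrt(a * a + 2 * b * e)) / b

def zigzag_gaussian(A, x0, T, seed=0):
    """Zigzag process targeting exp(-x^T A x / 2), run until time T.
    Coordinate i flips its velocity at rate max(0, v_i * (A x)_i)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    d = len(x)
    v = rng.choice([-1.0, 1.0], size=d)
    t = 0.0
    skeleton = [(0.0, x.copy(), v.copy())]
    Ax = A @ x
    while True:
        Av = A @ v
        a = v * Ax                       # rate_i(s) = max(0, a_i + b_i * s)
        b = v * Av
        times = [first_event_time(a[i], b[i], rng.exponential()) for i in range(d)]
        i = int(np.argmin(times))
        s = times[i]
        if t + s >= T:
            x += v * (T - t)
            skeleton.append((T, x.copy(), v.copy()))
            return skeleton
        x += v * s
        Ax += Av * s                     # maintain A @ x incrementally
        t += s
        v[i] = -v[i]
        skeleton.append((t, x.copy(), v.copy()))
```

Expectations under the target can be estimated by averaging along the linear segments between the returned skeleton points.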
- Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z)
- Gradient-Based Empirical Risk Minimization using Local Polynomial Regression [39.29885444997579]
A major goal of this literature has been to compare different algorithms, such as gradient descent (GD) or stochastic gradient descent (SGD).
We demonstrate that when the loss function is smooth in the data, we can learn the oracle at every iteration and beat the oracle complexities of both GD and SGD.
arXiv Detail & Related papers (2020-11-04T20:10:31Z)
- Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is...
arXiv Detail & Related papers (2020-07-16T06:44:44Z)
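To fix ideas for the entry above, here is a small data-generation sketch under a simple instance of setting (ii), with a crude outlier corruption to show why ordinary least squares is not robust. The contamination model is an assumption for illustration; the paper's nearly-linear-time robust estimators are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, eta = 2000, 50, 0.05             # eta: fraction of corrupted samples

w_star = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))        # sub-Gaussian, identity second moment
Y = X @ w_star + 0.1 * rng.standard_normal(n)

# One common corruption model (an assumption; the paper's may differ):
# an adversary replaces an eta-fraction of responses with large outliers.
idx = rng.choice(n, size=int(eta * n), replace=False)
Y_corrupt = Y.copy()
Y_corrupt[idx] += 50.0

w_clean, *_ = np.linalg.lstsq(X, Y, rcond=None)
w_dirty, *_ = np.linalg.lstsq(X, Y_corrupt, rcond=None)
print("OLS error, clean data:    ", np.linalg.norm(w_clean - w_star))
print("OLS error, corrupted data:", np.linalg.norm(w_dirty - w_star))
```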