Non-adaptive Learning of Random Hypergraphs with Queries
- URL: http://arxiv.org/abs/2501.12771v1
- Date: Wed, 22 Jan 2025 10:14:27 GMT
- Title: Non-adaptive Learning of Random Hypergraphs with Queries
- Authors: Bethany Austhof, Lev Reyzin, Erasmo Tani
- Abstract summary: We study the problem of learning a hidden hypergraph $G=(V,E)$ by making a single batch of queries (non-adaptively).
We generalize the result of Li et al. to the setting of random $k$-uniform hypergraphs.
- Abstract: We study the problem of learning a hidden hypergraph $G=(V,E)$ by making a single batch of queries (non-adaptively). We consider the hyperedge detection model, in which every query must be of the form: ``Does this set $S\subseteq V$ contain at least one full hyperedge?'' In this model, it is known that no algorithm can non-adaptively learn arbitrary hypergraphs using fewer than $\Omega(\min\{m^2\log n, n^2\})$ queries, even when the hypergraph is constrained to be $2$-uniform (i.e., the hypergraph is simply a graph). Recently, Li et al. overcame this lower bound in the setting in which $G$ is a graph by assuming that the graph to be learned is sampled from an Erd\H{o}s-R\'enyi model. We generalize the result of Li et al. to the setting of random $k$-uniform hypergraphs. To achieve this result, we leverage a novel equivalence between the problem of learning a single hyperedge and the standard group testing problem. This latter result may also be of independent interest.
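To make the query model concrete, here is a minimal Python sketch (ours, not the paper's algorithm) of the hyperedge-detection oracle together with a non-adaptive scheme for the special case of a single hidden hyperedge, which is where the group-testing connection is easiest to see: every positive query certifies that all excluded vertices lie outside the edge, much like the classical COMP decoder applied to query complements. The inclusion probability and query budget are illustrative choices.

```python
import random

def make_oracle(hidden_edges):
    """Hyperedge-detection oracle: does S contain at least one full hyperedge?"""
    def query(S):
        return any(e <= S for e in hidden_edges)
    return query

def learn_single_hyperedge(n, k, query, num_queries=2000, seed=0):
    """Non-adaptive sketch for a single hidden k-hyperedge.

    Include each vertex in a query independently with probability
    p = (1/2)**(1/k), so each query is positive with probability ~1/2.
    Every positive query certifies that all excluded vertices are outside
    the hyperedge, so the edge is the intersection of the positive queries
    (a COMP-style decoder on query complements).
    """
    rng = random.Random(seed)
    p = 0.5 ** (1.0 / k)
    batch = [{v for v in range(n) if rng.random() < p}
             for _ in range(num_queries)]      # fixed in advance: non-adaptive
    candidates = set(range(n))
    for S in batch:
        if query(S):
            candidates &= S
    return candidates

n, k = 50, 3
edge = frozenset({4, 17, 23})
oracle = make_oracle([edge])
print(learn_single_hyperedge(n, k, oracle) == edge)  # True w.h.p.
```

The sketch covers only the single-edge subproblem; in the random-hypergraph setting the paper studies, the query batch must additionally separate distinct hyperedges from one another.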
Related papers
- Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit [75.4661041626338]
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \sigma_*\left(\langle \boldsymbol{x}, \boldsymbol{\theta} \rangle\right)$.
We prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ with a complexity that is not governed by information exponents.
arXiv Detail & Related papers (2024-06-03T17:56:58Z)
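For readers who want to see the setup, the following numpy sketch (ours; the link function $\sigma_*(z)=z^2-1$, the ReLU architecture, and all hyperparameters are illustrative assumptions, not the paper's construction) draws data from a single-index target and runs plain minibatch SGD on a two-layer network.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, width, lr, steps = 8, 4096, 64, 0.05, 2000

# Single-index target: f_*(x) = sigma_*(<x, theta>), here sigma_*(z) = z^2 - 1
theta = rng.standard_normal(d); theta /= np.linalg.norm(theta)
X = rng.standard_normal((n, d))
y = (X @ theta) ** 2 - 1              # illustrative choice of link sigma_*

# Two-layer network: f(x) = a^T relu(W x)
W = rng.standard_normal((width, d)) / np.sqrt(d)
a = rng.standard_normal(width) / np.sqrt(width)

for t in range(steps):
    idx = rng.integers(0, n, size=32)          # minibatch SGD
    xb, yb = X[idx], y[idx]
    h = np.maximum(xb @ W.T, 0.0)              # relu features
    err = h @ a - yb                           # squared-loss residual
    grad_a = h.T @ err / len(idx)
    grad_W = ((err[:, None] * (xb @ W.T > 0)) * a).T @ xb / len(idx)
    a -= lr * grad_a
    W -= lr * grad_W

print("train MSE:", np.mean((np.maximum(X @ W.T, 0) @ a - y) ** 2))
```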
- Hypergraph $p$-Laplacian regularization on point clouds for data interpolation [3.79830302036482]
Hypergraphs are widely used to model higher-order relations in data.
We define the $\varepsilon_n$-ball hypergraph and the $k_n$-nearest neighbor hypergraph on a point cloud.
We prove the variational consistency between the hypergraph $p$-Laplacian regularization and the $p$-Laplacian regularization in a semi-supervised setting.
arXiv Detail & Related papers (2024-05-02T09:17:32Z)
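A minimal sketch (ours) of one of the two constructions: the $k_n$-nearest-neighbor hypergraph, in which each point contributes a hyperedge consisting of itself and its $k_n$ nearest neighbors; the choice $k_n=5$ is illustrative.

```python
import numpy as np

def knn_hypergraph(points, k):
    """Each point spawns a hyperedge: itself plus its k nearest neighbors."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)            # exclude each point from its own neighbor list
    nbrs = np.argsort(dists, axis=1)[:, :k]
    return [frozenset({i, *nbrs[i]}) for i in range(len(points))]

pts = np.random.default_rng(1).standard_normal((100, 2))
edges = knn_hypergraph(pts, k=5)
print(len(edges), "hyperedges of size", len(edges[0]))
```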
- Detection of Dense Subhypergraphs by Low-Degree Polynomials [72.4451045270967]
Detection of a planted dense subgraph in a random graph is a fundamental statistical and computational problem.
We consider detecting the presence of a planted $G^{(r)}(n^{\gamma}, n^{-\alpha})$ subhypergraph in a $G^{(r)}(n, n^{-\beta})$ hypergraph.
Our results are already new in the graph case $r=2$, as we consider the subtle log-density regime where hardness based on average-case reductions is not known.
arXiv Detail & Related papers (2023-04-17T10:38:08Z)
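As a concrete illustration of the null and planted models (with illustrative constants in place of the $n^{-\beta}$, $n^{-\alpha}$ parameterization), the following sketch samples a random $r$-uniform hypergraph and optionally boosts the edge probability inside a planted vertex subset.

```python
import itertools, random

def sample_hypergraph(n, r, p, planted=None, q=None, seed=0):
    """G^(r)(n, p) with an optional planted denser part.

    Each of the C(n, r) potential hyperedges appears independently with
    probability p, except r-sets inside `planted`, which appear with q > p.
    """
    rng = random.Random(seed)
    planted = planted or set()
    edges = set()
    for e in itertools.combinations(range(n), r):
        prob = q if planted and set(e) <= planted else p
        if rng.random() < prob:
            edges.add(frozenset(e))
    return edges

null = sample_hypergraph(30, 3, p=0.05)
alt  = sample_hypergraph(30, 3, p=0.05, planted=set(range(8)), q=0.5)
print(len(null), len(alt))   # the planted model has visibly more edges
```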
- Core-periphery Models for Hypergraphs [0.0]
We introduce a random hypergraph model for core-periphery structure.
We develop a novel statistical inference algorithm that is able to scale to large hypergraphs, with runtime that is practically linear with respect to the input size.
Our inference algorithm is capable of learning embeddings that correspond to the reputation (rank) of a node within the hypergraph.
arXiv Detail & Related papers (2022-06-01T22:11:44Z)
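To fix ideas, here is a toy generative sketch of core-periphery structure in a hypergraph (our own illustration; the paper's model and inference algorithm differ): each node carries a score, and a candidate hyperedge appears with probability increasing in the score of its weakest member, so edges concentrate in the high-score core.

```python
import itertools, math, random

def core_periphery_hypergraph(n, r, theta, seed=0):
    """Toy core-periphery model (illustration only, not the paper's model):
    node v has score theta[v]; a candidate r-set e appears with probability
    sigmoid(min of its members' scores), so hyperedges concentrate among
    high-score (core) nodes."""
    rng = random.Random(seed)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    return [frozenset(e) for e in itertools.combinations(range(n), r)
            if rng.random() < sig(min(theta[v] for v in e))]

n = 20
theta = [2.0] * 5 + [-3.0] * 15       # 5 core nodes, 15 peripheral
edges = core_periphery_hypergraph(n, 3, theta)
core_edges = sum(all(v < 5 for v in e) for e in edges)
print(len(edges), "edges,", core_edges, "entirely inside the core")
```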
- Hypergraph Convolutional Networks via Equivalency between Hypergraphs and Undirected Graphs [59.71134113268709]
We present General Hypergraph Spectral Convolution (GHSC), a general learning framework that can handle edge-dependent vertex weight (EDVW) and edge-independent vertex weight (EIVW) hypergraphs.
Experiments from various domains, including social network analysis, visual object classification, and protein learning, demonstrate that the proposed framework achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-03-31T10:46:47Z)
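As background for what a spectral hypergraph convolution computes, here is the classical EIVW-style layer $X' = D_v^{-1/2} H W D_e^{-1} H^\top D_v^{-1/2} X \Theta$ in numpy (the standard HGNN form; GHSC generalizes beyond this setting, so treat this only as the baseline operation).

```python
import numpy as np

def hypergraph_conv(X, H, w, Theta):
    """One spectral hypergraph convolution layer in the common EIVW form
    X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta.

    X: (n, f) node features, H: (n, m) incidence matrix,
    w: (m,) hyperedge weights, Theta: (f, f') learnable weights."""
    De = H.sum(axis=0)                      # hyperedge degrees
    Dv = H @ w                              # weighted vertex degrees
    Dv_is = np.diag(1.0 / np.sqrt(Dv))
    A = Dv_is @ H @ np.diag(w / De) @ H.T @ Dv_is
    return np.maximum(A @ X @ Theta, 0.0)   # relu activation

rng = np.random.default_rng(0)
n, m, f = 6, 4, 3
H = (rng.random((n, m)) < 0.5).astype(float)
H[0, :] = 1.0                               # ensure every hyperedge is nonempty
H[:, 0] = 1.0                               # ensure every vertex has positive degree
X = rng.standard_normal((n, f))
out = hypergraph_conv(X, H, np.ones(m), rng.standard_normal((f, f)))
print(out.shape)  # (6, 3)
```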
- Learning Low Degree Hypergraphs [6.062751776009752]
We study the problem of learning a hypergraph via edge detecting queries.
In this paper, we aim to identify families of hypergraphs that can be learned without suffering from a query complexity that grows exponentially in the size of the edges.
arXiv Detail & Related papers (2022-02-21T04:38:24Z)
- Fast Graph Sampling for Short Video Summarization using Gershgorin Disc Alignment [52.577757919003844]
We study the problem of efficiently summarizing a short video into several paragraphs, leveraging recent progress in fast graph sampling.
Experimental results show that our algorithm achieves comparable video summarization as state-of-the-art methods, at a substantially reduced complexity.
arXiv Detail & Related papers (2021-10-21T18:43:00Z)
- Random Subgraph Detection Using Queries [29.192695995340653]
The planted densest subgraph detection problem refers to the task of testing whether in a given (random) graph there is a subgraph that is unusually dense.
In this paper, we consider a natural variant of the above problem, where one can only observe a relatively small part of the graph using adaptive edge queries.
For this model, we determine the number of queries necessary and sufficient for detecting the presence of the planted subgraph, together with a quasi-polynomial-time optimal algorithm.
arXiv Detail & Related papers (2021-10-02T07:41:17Z)
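A toy illustration (ours, not the paper's query-optimal algorithm) of the access model: the graph can only be observed through edge queries, and even a naive budget of uniformly random pair probes shifts the empirical density statistic when a dense subgraph is planted.

```python
import itertools, random

def edge_oracle(adj):
    """Edge-query access: reveal one vertex pair per query."""
    def query(u, v):
        return (min(u, v), max(u, v)) in adj
    return query

def probe_density(n, query, budget, seed=0):
    """Toy detector (not the paper's optimal algorithm): spend the query
    budget on uniformly random pairs and report the empirical edge density;
    a planted dense subgraph shifts this statistic upward."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(budget):
        u, v = rng.sample(range(n), 2)
        hits += query(u, v)
    return hits / budget

# Erdos-Renyi G(n, 1/2) with a planted clique on the first 40 vertices
n, rng = 200, random.Random(1)
adj = {(u, v) for u, v in itertools.combinations(range(n), 2)
       if rng.random() < 0.5 or (u < 40 and v < 40)}
print(probe_density(n, edge_oracle(adj), budget=20000))   # > 0.5 under the planted model
```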
- Hardness of Learning Halfspaces with Massart Noise [56.98280399449707]
We study the complexity of PAC learning halfspaces in the presence of Massart (bounded) noise.
We show that there is an exponential gap between the information-theoretically optimal error and the best error that can be achieved by a statistical query (SQ) algorithm.
arXiv Detail & Related papers (2020-12-17T16:43:11Z)
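For concreteness, the following sketch (ours) generates halfspace data under Massart noise: each label is flipped independently with a point-dependent probability $\eta(x) \le \eta_{\max}$; in the model $\eta(\cdot)$ may be adversarial, and drawing it at random here is purely illustrative.

```python
import numpy as np

def massart_halfspace_sample(n, d, eta_max, seed=0):
    """Draw labeled examples from a halfspace under Massart noise: the clean
    label sign(<w, x>) is flipped independently with a point-dependent
    probability eta(x) <= eta_max (adversarial in the model; random here
    purely for illustration)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(d); w /= np.linalg.norm(w)
    X = rng.standard_normal((n, d))
    clean = np.sign(X @ w)
    eta = rng.uniform(0.0, eta_max, size=n)    # eta(x) per example
    flip = rng.random(n) < eta
    return X, np.where(flip, -clean, clean), w

X, y, w = massart_halfspace_sample(1000, 5, eta_max=0.4)
print("agreement with the clean halfspace:",
      np.mean(y == np.sign(X @ w)))            # roughly 1 - E[eta] = 0.8
```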
- Scalable Deep Generative Modeling for Sparse Graphs [105.60961114312686]
Existing deep neural methods require $\Omega(n^2)$ complexity by building up the adjacency matrix.
We develop a novel autoregressive model, named BiGG, that utilizes this sparsity to avoid generating the full adjacency matrix.
During training, this autoregressive model can be parallelized with $O(\log n)$ synchronization stages.
arXiv Detail & Related papers (2020-06-28T04:37:57Z)
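To see why sparsity lets a generative model sidestep the $\Omega(n^2)$ adjacency matrix, here is a minimal sketch (ours; it shows the binary-tree recursion behind the efficiency claim, not BiGG's actual neural architecture): when a node's neighbor set is encoded by recursive "any neighbor in this half?" decisions, empty intervals are dismissed in one step, so only $O(\mathrm{deg} \cdot \log n)$ tree nodes are touched.

```python
def tree_nodes_touched(neighbors, lo, hi):
    """Count binary-tree interval nodes visited when encoding a neighbor set:
    an empty interval is dismissed with a single 'no' decision, so the walk
    touches O(deg * log n) nodes rather than all n candidate entries."""
    nbrs = [v for v in neighbors if lo <= v < hi]
    if not nbrs or hi - lo == 1:
        return 1
    mid = (lo + hi) // 2
    return 1 + tree_nodes_touched(nbrs, lo, mid) + tree_nodes_touched(nbrs, mid, hi)

n = 1 << 16                       # 65536 candidate neighbors
neighbors = {7, 1000, 40000}      # a sparse neighbor set (degree 3)
print(tree_nodes_touched(neighbors, 0, n), "of", n, "entries touched")
```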