A Thorough View of Exact Inference in Graphs from the Degree-4
Sum-of-Squares Hierarchy
- URL: http://arxiv.org/abs/2102.08019v1
- Date: Tue, 16 Feb 2021 08:36:19 GMT
- Title: A Thorough View of Exact Inference in Graphs from the Degree-4
Sum-of-Squares Hierarchy
- Authors: Kevin Bello, Chuyang Ke and Jean Honorio
- Abstract summary: We tackle the problem of exactly recovering an unknown ground-truth binary labeling of the nodes from a single corrupted observation of each edge.
We apply a hierarchy of relaxations, known as the sum-of-squares hierarchy, to the problem.
We show that the solution of the dual of the relaxed problem is related to finding edge weights of the Johnson and Kneser graphs.
- Score: 37.34153902687548
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Performing inference in graphs is a common task within several machine
learning problems, e.g., image segmentation, community detection, among others.
For a given undirected connected graph, we tackle the statistical problem of
exactly recovering an unknown ground-truth binary labeling of the nodes from a
single corrupted observation of each edge. Such a problem can be formulated as a
quadratic combinatorial optimization problem over the Boolean hypercube, where
it has been shown before that one can (with high probability and in polynomial
time) exactly recover the ground-truth labeling of graphs that have an
isoperimetric number that grows with respect to the number of nodes (e.g.,
complete graphs, regular expanders). In this work, we apply a powerful
hierarchy of relaxations, known as the sum-of-squares (SoS) hierarchy, to the
combinatorial problem. Motivated by empirical evidence on the improvement in
exact recoverability, we center our attention on the degree-4 SoS relaxation
and set out to understand the origin of such improvement from a graph
theoretical perspective. We show that the solution of the dual of the relaxed
problem is related to finding edge weights of the Johnson and Kneser graphs,
where the weights fulfill the SoS constraints and intuitively allow the input
graph to increase its algebraic connectivity. Finally, as a byproduct of our
analysis, we derive a novel Cheeger-type lower bound for the algebraic
connectivity of graphs with signed edge weights.
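To make the setup concrete: in the spirit of prior work on this recovery problem, the combinatorial task can be written (up to constants) as $\hat{y} \in \arg\max_{y \in \{-1,+1\}^n} \sum_{(u,v) \in E} c_{uv}\, y_u y_v$, where $c_{uv} \in \{-1,+1\}$ is the possibly corrupted observation of whether nodes $u$ and $v$ share a label. The snippet below is not the authors' code; it is a minimal sketch, assuming cvxpy and a synthetic complete-graph instance, of the basic degree-2 relaxation (the standard SDP), which the degree-4 SoS relaxation tightens, roughly, by adding a larger moment matrix indexed by node pairs as well as singletons.

```python
import numpy as np
import cvxpy as cp

# Synthetic instance (hypothetical toy data): complete graph on n nodes,
# ground-truth labels y* in {-1,+1}^n, and each edge observation
# c_uv = y*_u y*_v flipped independently with probability p.
rng = np.random.default_rng(0)
n, p = 20, 0.1
y_true = rng.choice([-1.0, 1.0], size=n)
F = np.where(rng.random((n, n)) < p, -1.0, 1.0)  # independent sign flips
F = np.triu(F, 1)
F = F + F.T + np.eye(n)                          # symmetric flip pattern
C = np.outer(y_true, y_true) * F                 # corrupted edge observations
np.fill_diagonal(C, 0.0)

# Degree-2 SoS (standard SDP) relaxation of  max_{y in {-1,+1}^n} y^T C y:
# replace y y^T by a PSD matrix X with unit diagonal.
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.diag(X) == 1]
prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(C, X))), constraints)
prob.solve(solver=cp.SCS)

# Round via the leading eigenvector of the optimal X.
vals, vecs = np.linalg.eigh(X.value)
y_hat = np.sign(vecs[:, -1])
print("exact recovery:", np.all(y_hat == y_true) or np.all(y_hat == -y_true))
```

When the relaxation is tight, the optimal $X$ is rank-1 and the rounded labels coincide with the ground truth up to a global sign flip, which is the notion of exact recovery used above.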
Related papers
- Contrastive Graph Clustering in Curvature Spaces [74.03252813800334]
We present a novel end-to-end contrastive graph clustering model named CONGREGATE.
To support geometric clustering, we construct a theoretically grounded Heterogeneous Curvature Space.
We then train the graph clusters by an augmentation-free reweighted contrastive approach.
arXiv Detail & Related papers (2023-05-05T14:04:52Z)
- Gradient scarcity with Bilevel Optimization for Graph Learning [0.0]
Gradient scarcity occurs when learning a graph by minimizing a loss on only a subset of nodes: edges between unlabelled nodes that are far from the labelled ones receive zero gradients (a toy illustration appears after the list of related papers below).
We give a precise mathematical characterization of this phenomenon, and prove it also emerges in bilevel optimization.
To alleviate this issue, we study several solutions: latent graph learning with a Graph-to-Graph model (G2G), graph regularization to impose a prior structure on the graph, and optimizing over a larger graph than the original one with a reduced diameter.
arXiv Detail & Related papers (2023-03-24T12:37:43Z)
- Matching Correlated Inhomogeneous Random Graphs using the $k$-core Estimator [5.685589351789462]
We study the so-called $k$-core estimator, which outputs a correspondence that induces a large, common subgraph of both graphs.
We specialize our general framework to derive new results on exact and partial recovery in correlated block models, correlated Chung-Lu geometric graphs, and correlated random graphs.
arXiv Detail & Related papers (2023-02-10T18:21:35Z)
- Joint graph learning from Gaussian observations in the presence of hidden nodes [26.133725549667734]
We propose a joint graph learning method that takes into account the presence of hidden (latent) variables.
We exploit the structure resulting from the previous considerations to propose a convex optimization problem.
We compare the proposed algorithm with different baselines and evaluate its performance over synthetic and real-world graphs.
arXiv Detail & Related papers (2022-12-04T13:03:41Z)
- Learning Graphs from Smooth Signals under Moment Uncertainty [23.868075779606425]
We consider the problem of inferring the graph structure from a given set of graph signals.
Traditional graph learning models do not take this distributional uncertainty into account.
arXiv Detail & Related papers (2021-05-12T06:47:34Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to meaningful clusters with respect to the aggregated graph.
We learn the clustering jointly with the representative graph.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Hamiltonian systems, Toda lattices, Solitons, Lax Pairs on weighted Z-graded graphs [62.997667081978825]
We identify conditions which allow one to lift one dimensional solutions to solutions on graphs.
We show that even for a simple example of a topologically interesting graph, the corresponding non-trivial Lax pairs and associated unitary transformations do not lift to a Lax pair on the Z-graded graph.
arXiv Detail & Related papers (2020-08-11T17:58:13Z)
- Online Dense Subgraph Discovery via Blurred-Graph Feedback [87.9850024070244]
We introduce a novel learning problem for dense subgraph discovery.
We first propose an edge-time algorithm that obtains a nearly-optimal solution with high probability.
We then design a more scalable algorithm with a theoretical guarantee.
arXiv Detail & Related papers (2020-06-24T11:37:33Z)
- Wasserstein-based Graph Alignment [56.84964475441094]
We cast a new formulation for the one-to-many graph alignment problem, which aims at matching a node in the smaller graph with one or more nodes in the larger graph.
We show that our method leads to significant improvements with respect to the state-of-the-art algorithms for each of these tasks.
arXiv Detail & Related papers (2020-03-12T22:31:59Z)
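As referenced in the Gradient scarcity with Bilevel Optimization for Graph Learning entry above, the following is a minimal, hypothetical illustration (not the authors' code) of the gradient-scarcity phenomenon, assuming PyTorch and a toy path graph: with two rounds of propagation and a loss computed only at the single labelled node, edge weights more than two hops away receive exactly zero gradient.

```python
import torch

# Path graph 0-1-2-3-4 with learnable edge weights; only node 0 is labelled.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
n = 5
w = torch.ones(len(edges), requires_grad=True)

# Build a weighted adjacency matrix differentiably from the edge weights.
A = torch.zeros(n, n)
for k, (i, j) in enumerate(edges):
    basis = torch.zeros(n, n)
    basis[i, j] = basis[j, i] = 1.0
    A = A + w[k] * basis

# Two rounds of feature propagation (a stand-in for a 2-layer GNN).
x = torch.ones(n, 1)
h = A @ (A @ x)

# The loss uses only the labelled node (node 0), as in semi-supervised settings.
loss = (h[0] - 1.0).pow(2).sum()
loss.backward()

# Edges (0,1) and (1,2) lie within two hops of node 0 and get nonzero
# gradients; edges (2,3) and (3,4) receive exactly zero gradient.
print(w.grad)  # tensor([g0, g1, 0., 0.]) with g0, g1 != 0
```

Running the snippet prints a gradient vector whose last two entries are exactly zero, matching the phenomenon described in that entry.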
This list is automatically generated from the titles and abstracts of the papers on this site.