Product Graph Learning from Multi-domain Data with Sparsity and Rank Constraints
- URL: http://arxiv.org/abs/2012.08090v1
- Date: Tue, 15 Dec 2020 04:59:32 GMT
- Title: Product Graph Learning from Multi-domain Data with Sparsity and Rank Constraints
- Authors: Sai Kiran Kadambari, Sundeep Prabhakar Chepuri
- Abstract summary: We propose an efficient iterative solver for learning sparse product graphs from data.
We extend this solver to infer multi-component graph factors with applications to product graph clustering.
The efficacy of the developed framework is demonstrated using several numerical experiments on synthetic data and real data.
- Score: 17.15829643665034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we focus on learning product graphs from multi-domain data. We
assume that the product graph is formed by the Cartesian product of two smaller
graphs, which we refer to as graph factors. We pose the product graph learning
problem as the problem of estimating the graph factor Laplacian matrices. To
capture local interactions in data, we seek sparse graph factors and assume a
smoothness model for data. We propose an efficient iterative solver for
learning sparse product graphs from data. We then extend this solver to infer
multi-component graph factors with applications to product graph clustering by
imposing rank constraints on the graph Laplacian matrices. Although working
with smaller graph factors is computationally more attractive, not all graphs
may readily admit an exact Cartesian product factorization. To this end, we
propose efficient algorithms to approximate a graph by a nearest Cartesian
product of two smaller graphs. The efficacy of the developed framework is
demonstrated using several numerical experiments on synthetic data and real
data.
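The Cartesian product structure assumed above has a convenient algebraic form: the Laplacian of the product graph is the Kronecker sum of the factor Laplacians, the rank of a Laplacian controls the number of connected components (the basis of the clustering extension), and the smoothness model can be expressed as a Laplacian quadratic form. The sketch below illustrates these identities only, not the paper's solver; the path-graph factors and random data are hypothetical, using plain NumPy:

```python
import numpy as np

def path_laplacian(n):
    """Combinatorial Laplacian of a path graph on n nodes."""
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return np.diag(A.sum(axis=1)) - A

# Two small graph factors: paths on 3 and 4 nodes.
L1 = path_laplacian(3)
L2 = path_laplacian(4)

# Laplacian of the Cartesian product graph is the Kronecker sum
# L = L1 (+) L2 = L1 x I + I x L2, a 12 x 12 matrix here.
L = np.kron(L1, np.eye(4)) + np.kron(np.eye(3), L2)

# The multiplicity of the zero eigenvalue of L equals the number of
# connected components, so a rank constraint rank(L) = N - k
# enforces a k-component graph.
num_components = L.shape[0] - np.linalg.matrix_rank(L)

# Smoothness of data X over the graph: the quadratic form tr(X^T L X)
# is small when signals vary little across connected nodes.
rng = np.random.default_rng(0)
X = rng.standard_normal((12, 5))
smoothness = np.trace(X.T @ L @ X)
```

Working with the two small factor Laplacians (3 x 3 and 4 x 4) instead of the full 12 x 12 product Laplacian is exactly the computational saving the paper exploits.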
Related papers
- Learning Cartesian Product Graphs with Laplacian Constraints [10.15283812819547]
We study the problem of learning Cartesian product graphs under Laplacian constraints.
We establish statistical consistency for the penalized maximum likelihood estimation.
We also extend our method for efficient joint graph learning and imputation in the presence of structural missing values.
arXiv Detail & Related papers (2024-02-12T22:48:30Z)
- The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph Structure [18.00833762891405]
Graph Lottery Ticket (GLT) Hypothesis: There is an extremely sparse backbone for every graph.
We study 8 key metrics of interest that directly influence the performance of graph learning algorithms.
We propose a straightforward and efficient algorithm for finding these GLTs in arbitrary graphs.
arXiv Detail & Related papers (2023-12-08T00:24:44Z)
- A Unified Framework for Optimization-Based Graph Coarsening [5.720402020129441]
Given a large graph, graph coarsening aims to learn a smaller, tractable graph while preserving the properties of the original graph.
The proposed framework lies in the unification of graph learning and dimensionality reduction.
It is established that the learned coarsened graph is $\epsilon \in (0,1)$ similar to the original graph.
arXiv Detail & Related papers (2022-10-02T06:31:42Z)
- CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph Similarity Learning [65.1042892570989]
We propose a contrastive graph matching network (CGMN) for self-supervised graph similarity learning.
We employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning.
We transform node representations into graph-level representations via pooling operations for graph similarity computation.
arXiv Detail & Related papers (2022-05-30T13:20:26Z)
- Graph Coloring: Comparing Cluster Graphs to Factor Graphs [0.0]
We present a means of formulating and solving graph coloring problems with probabilistic graphical models.
In contrast to the prevalent literature that uses factor graphs for this purpose, we instead approach it from a cluster graph perspective.
arXiv Detail & Related papers (2021-10-05T13:57:30Z)
- Learning Graphon Autoencoders for Generative Graph Modeling [91.32624399902755]
Graphon is a nonparametric model that generates graphs with arbitrary sizes and can be induced from graphs easily.
We propose a novel framework called graphon autoencoder to build an interpretable and scalable graph generative model.
A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons.
arXiv Detail & Related papers (2021-05-29T08:11:40Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method yields a representative graph with a clustered structure, which can be used directly to solve clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low pass characteristics in balanced graph cut, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Wasserstein-based Graph Alignment [56.84964475441094]
We cast a new formulation for the one-to-many graph alignment problem, which aims at matching a node in the smaller graph with one or more nodes in the larger graph.
We show that our method leads to significant improvements with respect to the state-of-the-art algorithms for each of these tasks.
arXiv Detail & Related papers (2020-03-12T22:31:59Z)
- Learning Product Graphs Underlying Smooth Graph Signals [15.023662220197242]
This paper devises a method to learn structured graphs from data that are given in the form of product graphs.
To this end, first the graph learning problem is posed as a linear program, which (on average) outperforms the state-of-the-art graph learning algorithms.
arXiv Detail & Related papers (2020-02-26T03:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.