Learning Cartesian Product Graphs with Laplacian Constraints
- URL: http://arxiv.org/abs/2402.08105v1
- Date: Mon, 12 Feb 2024 22:48:30 GMT
- Title: Learning Cartesian Product Graphs with Laplacian Constraints
- Authors: Changhao Shi and Gal Mishne
- Abstract summary: We study the problem of learning Cartesian product graphs under Laplacian constraints.
We establish statistical consistency for the penalized maximum likelihood estimation.
We also extend our method for efficient joint graph learning and imputation in the presence of structural missing values.
- Score: 10.15283812819547
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Laplacian learning, also known as network topology inference, is a
problem of great interest to multiple communities. In Gaussian graphical models
(GM), graph learning amounts to endowing covariance selection with the
Laplacian structure. In graph signal processing (GSP), it is essential to infer
the unobserved graph from the outputs of a filtering system. In this paper, we
study the problem of learning Cartesian product graphs under Laplacian
constraints. The Cartesian graph product is a natural way to model
higher-order conditional dependencies and is also key to generalizing GSP
to multi-way tensors. We establish statistical consistency for the penalized
maximum likelihood estimation (MLE) of a Cartesian product Laplacian, and
propose an efficient algorithm to solve the problem. We also extend our method
for efficient joint graph learning and imputation in the presence of structural
missing values. Experiments on synthetic and real-world datasets demonstrate
that our method is superior to previous GSP and GM methods.
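For concreteness: the Laplacian of a Cartesian product graph is the Kronecker sum of the factor Laplacians, L = L1 ⊗ I + I ⊗ L2, a standard identity for graph products. The sketch below builds this object in NumPy for two tiny, purely illustrative factor graphs; it is not the paper's estimation algorithm, only the structured object that the penalized MLE searches over.

```python
import numpy as np

def laplacian(A):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(A.sum(axis=1)) - A

# Two small, illustrative factor graphs: a 3-node path and a 2-node path.
A1 = np.array([[0., 1., 0.],
               [1., 0., 1.],
               [0., 1., 0.]])
A2 = np.array([[0., 1.],
               [1., 0.]])
L1, L2 = laplacian(A1), laplacian(A2)

# Laplacian of the Cartesian product graph = Kronecker sum of the factors:
# L = kron(L1, I_n2) + kron(I_n1, L2).
n1, n2 = len(L1), len(L2)
L = np.kron(L1, np.eye(n2)) + np.kron(np.eye(n1), L2)

# Sanity checks: valid Laplacian (zero row sums, positive semi-definite).
assert np.allclose(L @ np.ones(n1 * n2), 0)
assert np.linalg.eigvalsh(L).min() > -1e-10
print(L.shape)  # (6, 6): the 3-path x 2-path product is a 3x2 grid graph
```

A common form of the penalized MLE in Laplacian-constrained models is minimizing tr(SL) − log det*(L) plus a sparsity penalty, where det* is the pseudo-determinant and S the sample covariance; the paper's exact objective may differ. Optimizing over the small factors L1 and L2 rather than the full product L is presumably what makes the proposed algorithm efficient for multi-way data.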
Related papers
- Differentiable Proximal Graph Matching [40.41380102260085]
We introduce an algorithm for graph matching based on the proximal operator, referred to as differentiable proximal graph matching (DPGM).
The whole algorithm can be considered as a differentiable map from the graph affinity matrix to the prediction of node correspondence.
Numerical experiments show that DPGM outperforms existing graph matching algorithms on diverse datasets.
arXiv Detail & Related papers (2024-05-26T08:17:13Z)
- Gradformer: Graph Transformer with Exponential Decay [69.50738015412189]
The self-attention mechanism in Graph Transformers (GTs) overlooks the graph's inductive biases, particularly biases related to structure.
This paper presents Gradformer, a method that integrates GTs with this intrinsic inductive bias.
Gradformer consistently outperforms the Graph Neural Network and GT baseline models in various graph classification and regression tasks.
arXiv Detail & Related papers (2024-04-24T08:37:13Z)
- GSINA: Improving Subgraph Extraction for Graph Invariant Learning via Graph Sinkhorn Attention [52.67633391931959]
Graph invariant learning (GIL) has been an effective approach to discovering the invariant relationships between graph data and its labels.
We propose a novel graph attention mechanism called Graph Sinkhorn Attention (GSINA).
GSINA is able to obtain meaningful, differentiable invariant subgraphs with controllable sparsity and softness.
arXiv Detail & Related papers (2024-02-11T12:57:16Z)
- Gradient scarcity with Bilevel Optimization for Graph Learning [0.0]
Gradient scarcity occurs when learning a graph by minimizing a loss on a subset of nodes: edges between unlabelled nodes that are far from the labelled ones receive zero gradients.
We give a precise mathematical characterization of this phenomenon, and prove it also emerges in bilevel optimization.
To alleviate this issue, we study several solutions: latent graph learning with a Graph-to-Graph model (G2G), graph regularization that imposes a prior structure on the graph, or optimizing over a larger graph than the original one with a reduced diameter (a minimal numerical illustration of the scarcity phenomenon appears after this list).
arXiv Detail & Related papers (2023-03-24T12:37:43Z)
- Latent Graph Inference using Product Manifolds [0.0]
We generalize the discrete Differentiable Graph Module (dDGM) for latent graph learning.
Our novel approach is tested on a wide range of datasets and outperforms the original dDGM model.
arXiv Detail & Related papers (2022-11-26T22:13:06Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
Experiments on various datasets show that our model effectively improves performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- Learning non-Gaussian graphical models via Hessian scores and triangular transport [6.308539010172309]
We propose an algorithm for learning the Markov structure of continuous and non-Gaussian distributions.
Our algorithm SING estimates the density using a deterministic coupling, induced by a triangular transport map, and iteratively exploits sparse structure in the map to reveal sparsity in the graph.
arXiv Detail & Related papers (2021-01-08T16:42:42Z)
- Product Graph Learning from Multi-domain Data with Sparsity and Rank Constraints [17.15829643665034]
We propose an efficient iterative solver for learning sparse product graphs from data.
We extend this solver to infer multi-component graph factors with applications to product graph clustering.
The efficacy of the developed framework is demonstrated using several numerical experiments on synthetic data and real data.
arXiv Detail & Related papers (2020-12-15T04:59:32Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use a contrastive loss as a data-fidelity term in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to a representative graph with clear cluster structure, and we also propose a clustering algorithm for solving clustering problems on such graphs.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods, unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE), are proposed.
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
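As referenced in the gradient scarcity entry above, here is a minimal numerical illustration of that phenomenon. All specifics (the 6-node path graph, toy features, K = 2 propagation steps, supervision on a single node) are illustrative assumptions, not taken from that paper; the point is only that with K rounds of propagation and a loss computed on one labelled node, edges more than K hops away receive exactly zero gradient.

```python
import numpy as np

def loss(w, K=2):
    """Two rounds of feature propagation on a 6-node path graph whose
    edge weights w are being learned; only node 0 is labelled."""
    n = 6
    A = np.zeros((n, n))
    for i, wi in enumerate(w):          # path edges (i, i+1)
        A[i, i + 1] = A[i + 1, i] = wi
    x = np.arange(n, dtype=float)       # toy node features
    for _ in range(K):                  # K-hop propagation
        x = A @ x
    return (x[0] - 1.0) ** 2            # supervised loss on node 0 only

w = np.full(5, 0.5)
eps = 1e-6
grad = np.array([(loss(w + eps * np.eye(5)[i]) - loss(w)) / eps
                 for i in range(5)])
print(grad)  # only the first two edges (within 2 hops of node 0) are nonzero
```

Running the sketch prints nonzero gradient entries only for the first two edges; the remaining edges never influence the supervised prediction and hence cannot be learned from this loss alone.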
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.