Graph Spectral Embedding using the Geodesic Betweenness Centrality
- URL: http://arxiv.org/abs/2205.03544v1
- Date: Sat, 7 May 2022 04:11:23 GMT
- Title: Graph Spectral Embedding using the Geodesic Betweenness Centrality
- Authors: Shay Deutsch and Stefano Soatto
- Abstract summary: We introduce the Graph Sylvester Embedding (GSE), an unsupervised graph representation of local similarity, connectivity, and global structure.
GSE uses the solution of the Sylvester equation to capture both network structure and neighborhood proximity in a single representation.
- Score: 76.27138343125985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce the Graph Sylvester Embedding (GSE), an unsupervised graph
representation of local similarity, connectivity, and global structure. GSE
uses the solution of the Sylvester equation to capture both network structure
and neighborhood proximity in a single representation. Unlike embeddings based
on the eigenvectors of the Laplacian, GSE incorporates two or more basis
functions, for instance using the Laplacian and the affinity matrix. Such basis
functions are constructed not from the original graph, but from one whose
weights measure the centrality of an edge (the fraction of shortest paths in
the original graph that pass through that edge). This allows
more flexibility and control to represent complex network structure and shows
significant improvements over the state of the art when used for data analysis
tasks such as predicting failed edges in material science and network alignment
in the human-SARS CoV-2 protein-protein interactome.
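The construction described in the abstract can be sketched in a few lines of Python. The snippet below is a minimal, illustrative sketch only: it assumes a Sylvester equation of the form L X + X W = I, with L the Laplacian and W the affinity matrix of the betweenness-reweighted graph, and an SVD to extract the final coordinates; the paper's actual formulation, right-hand side, and post-processing may differ, and the function name gse_sketch is hypothetical.

```python
# Illustrative sketch of a GSE-style embedding (assumptions noted in comments).
import networkx as nx
import numpy as np
from scipy.linalg import solve_sylvester

def gse_sketch(G, dim=2):
    # 1. Reweight each edge by its geodesic betweenness centrality:
    #    the fraction of shortest paths that pass through that edge.
    bc = nx.edge_betweenness_centrality(G, normalized=True)
    H = G.copy()
    for (u, v), w in bc.items():
        H[u][v]["weight"] = w

    # 2. Build two basis operators from the reweighted graph:
    #    its Laplacian and its affinity (weighted adjacency) matrix.
    L = nx.laplacian_matrix(H, weight="weight").toarray().astype(float)
    W = nx.adjacency_matrix(H, weight="weight").toarray().astype(float)

    # 3. Solve a Sylvester equation L X + X W = C for a joint representation X.
    #    Assumption: C is the identity; the paper's choice may differ.
    C = np.eye(L.shape[0])
    X = solve_sylvester(L, W, C)

    # 4. Keep the leading left singular vectors of X as embedding coordinates.
    U, _, _ = np.linalg.svd(X)
    return U[:, :dim]

# Example usage on a small benchmark graph.
emb = gse_sketch(nx.karate_club_graph(), dim=2)
print(emb.shape)  # (34, 2)
```

Combining the Laplacian and the affinity matrix of the betweenness-reweighted graph in a single linear system is what distinguishes this construction from an embedding built from Laplacian eigenvectors alone.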
Related papers
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
arXiv Detail & Related papers (2024-06-19T22:30:08Z) - Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z) - Graph Transformer GANs for Graph-Constrained House Generation [223.739067413952]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The GTGAN learns effective graph node relations in an end-to-end fashion for the challenging graph-constrained house generation task.
arXiv Detail & Related papers (2023-03-14T20:35:45Z) - Template based Graph Neural Network with Optimal Transport Distances [11.56532171513328]
Current Graph Neural Network (GNN) architectures rely on two important components: node feature embedding through message passing, and aggregation with a specialized form of pooling.
In this work, we propose a novel point of view that places distances to learnable graph templates at the core of the graph representation.
This distance embedding is constructed using an optimal transport distance: the Fused Gromov-Wasserstein (FGW) distance.
arXiv Detail & Related papers (2022-05-31T12:24:01Z) - SIGMA: A Structural Inconsistency Reducing Graph Matching Algorithm [21.1095092767297]
We propose a novel criterion to measure graph matching accuracy: structural inconsistency (SI).
Specifically, SI incorporates the heat diffusion wavelet to accommodate the multi-hop structure of the graphs.
We show that SIGMA can be derived by using a mirror descent method to solve the Gromov-Wasserstein distance with novel K-hop-structure-based matching costs.
arXiv Detail & Related papers (2022-02-06T15:18:37Z) - ACE-HGNN: Adaptive Curvature Exploration Hyperbolic Graph Neural Network [72.16255675586089]
We propose an Adaptive Curvature Exploration Hyperbolic Graph Neural Network, ACE-HGNN, to adaptively learn the optimal curvature according to the input graph and downstream tasks.
Experiments on multiple real-world graph datasets demonstrate significant and consistent improvements in model quality, together with competitive performance and good generalization ability.
arXiv Detail & Related papers (2021-10-15T07:18:57Z) - SLGCN: Structure Learning Graph Convolutional Networks for Graphs under Heterophily [5.619890178124606]
We propose structure learning graph convolutional networks (SLGCNs) to alleviate the issue from two aspects.
Specifically, we design an efficient-spectral-clustering-with-anchors (ESC-ANCH) approach to efficiently aggregate feature representations from all similar nodes.
Experimental results on a wide range of benchmark datasets illustrate that the proposed SLGCNs outperform state-of-the-art GNN counterparts.
arXiv Detail & Related papers (2021-05-28T13:00:38Z) - Sparse Partial Least Squares for Coarse Noisy Graph Alignment [10.172041234280865]
Graph signal processing (GSP) provides a powerful framework for analyzing signals arising in a variety of domains.
We propose a novel regularized partial least squares method which both incorporates the observed graph structures and imposes sparsity in order to reflect the underlying block community structure.
arXiv Detail & Related papers (2021-04-06T21:52:15Z) - Spectral Embedding of Graph Networks [76.27138343125985]
We introduce an unsupervised graph embedding that trades off local node similarity and connectivity, and global structure.
The embedding is based on a generalized graph Laplacian, whose eigenvectors compactly capture both network structure and neighborhood proximity in a single representation (a minimal eigenvector-embedding sketch follows this list).
arXiv Detail & Related papers (2020-09-30T04:59:10Z)
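For comparison with the Sylvester-based construction above, the last entry ("Spectral Embedding of Graph Networks") builds its embedding from eigenvectors of a generalized graph Laplacian. The sketch below shows only a standard normalized-Laplacian eigenvector embedding; the generalized operator used in that paper is not reproduced here, and laplacian_eigenmap is an illustrative name.

```python
# Illustrative sketch of a plain Laplacian-eigenvector embedding
# (standard spectral embedding; not the paper's generalized Laplacian).
import networkx as nx
import numpy as np

def laplacian_eigenmap(G, dim=2):
    # Symmetric normalized Laplacian of the (possibly weighted) graph.
    L = nx.normalized_laplacian_matrix(G, weight="weight").toarray()
    vals, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    # Drop the trivial first eigenvector; keep the next `dim` as coordinates.
    return vecs[:, 1 : dim + 1]

emb = laplacian_eigenmap(nx.karate_club_graph(), dim=2)
print(emb.shape)  # (34, 2)
```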
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.