Force-directed graph embedding with hops distance
- URL: http://arxiv.org/abs/2309.05865v1
- Date: Mon, 11 Sep 2023 23:08:03 GMT
- Title: Force-directed graph embedding with hops distance
- Authors: Hamidreza Lotfalizadeh, Mohammad Al Hasan
- Abstract summary: We propose a novel force-directed graph embedding method that preserves graph topology and structural features.
Our method is intuitive, parallelizable, and highly scalable.
We evaluate our method on several graph analysis tasks and show that it achieves competitive performance compared to state-of-the-art unsupervised embedding techniques.
- Score: 5.214207581551816
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph embedding has become an increasingly important technique for analyzing
graph-structured data. By representing nodes in a graph as vectors in a
low-dimensional space, graph embedding enables efficient graph processing and
analysis tasks like node classification, link prediction, and visualization. In
this paper, we propose a novel force-directed graph embedding method that
utilizes the steady acceleration kinetic formula to embed nodes in a way that
preserves graph topology and structural features. Our method simulates a set of
customized attractive and repulsive forces between all node pairs with respect
to their hop distance. These forces are then used in Newton's second law to
obtain the acceleration of each node. The method is intuitive, parallelizable,
and highly scalable. We evaluate our method on several graph analysis tasks and
show that it achieves competitive performance compared to state-of-the-art
unsupervised embedding techniques.
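The abstract describes the mechanics only at a high level, so the following minimal Python sketch is an illustration under stated assumptions rather than the authors' implementation: it computes all-pairs hop distances with networkx, applies an assumed spring-like attraction toward each pair's hop distance and an assumed inverse-square repulsion, converts the net force on each node into an acceleration via Newton's second law, and advances positions with the constant-acceleration kinematic update. The force laws, constants, and damping factor are placeholders, not the paper's customized forces.

```python
# Illustrative force-directed embedding sketch; the force laws, constants, and
# damping below are assumptions, not the paper's customized forces.
import numpy as np
import networkx as nx

def force_directed_embedding(graph, dim=2, iters=200, dt=0.1, mass=1.0, seed=0):
    nodes = list(graph.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)

    # All-pairs hop (shortest-path) distances; unreachable pairs get a large value.
    hops = np.full((n, n), float(n))
    for src, dists in nx.all_pairs_shortest_path_length(graph):
        for dst, d in dists.items():
            hops[idx[src], idx[dst]] = d

    rng = np.random.default_rng(seed)
    pos = rng.normal(scale=0.1, size=(n, dim))   # random initial embedding
    vel = np.zeros((n, dim))

    for _ in range(iters):
        diff = pos[None, :, :] - pos[:, None, :]      # displacement from node i to node j
        dist = np.linalg.norm(diff, axis=-1) + 1e-9   # pairwise embedding distance
        unit = diff / dist[..., None]

        # Assumed pairwise force magnitudes: attraction grows when a pair sits
        # farther apart than its hop distance; inverse-square repulsion pushes
        # overly close pairs apart.
        attract = dist - hops
        repulse = -1.0 / dist ** 2
        np.fill_diagonal(attract, 0.0)
        np.fill_diagonal(repulse, 0.0)

        force = ((attract + repulse)[..., None] * unit).sum(axis=1)
        acc = force / mass                     # Newton's second law: a = F / m
        pos += vel * dt + 0.5 * acc * dt**2    # constant-acceleration position update
        vel = (vel + acc * dt) * 0.9           # velocity update with mild damping

    return {v: pos[idx[v]] for v in nodes}

# Example usage: emb = force_directed_embedding(nx.karate_club_graph(), dim=2)
```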
Related papers
- Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data [91.27527985415007]
Existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph.
We advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set.
arXiv Detail & Related papers (2023-06-05T07:53:52Z) - Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z) - Demystifying Graph Convolution with a Simple Concatenation [6.542119695695405]
We quantify the information overlap between graph topology, node features, and labels.
We show that graph concatenation is a simple but more flexible alternative to graph convolution.
arXiv Detail & Related papers (2022-07-18T16:39:33Z) - Template based Graph Neural Network with Optimal Transport Distances [11.56532171513328]
Current Graph Neural Network (GNN) architectures rely on two important components: node feature embedding through message passing, and aggregation with a specialized form of pooling.
We propose in this work a novel point of view, which places distances to some learnable graph templates at the core of the graph representation.
This distance embedding is constructed using an optimal transport distance: the Fused Gromov-Wasserstein (FGW) distance.
arXiv Detail & Related papers (2022-05-31T12:24:01Z) - Edge but not Least: Cross-View Graph Pooling [76.71497833616024]
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
arXiv Detail & Related papers (2021-09-24T08:01:23Z) - Some Algorithms on Exact, Approximate and Error-Tolerant Graph Matching [3.655021726150369]
We present an extensive survey of various exact and inexact graph matching techniques.
A category of graph matching algorithms is presented, which reduces the graph size by removing the less important nodes.
We introduce a novel approach to measure graph similarity using geometric graphs.
arXiv Detail & Related papers (2020-12-30T18:51:06Z) - Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In this formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, a link prediction problem in the original graph can be solved equivalently as a node classification problem in its corresponding line graph, rather than as a graph classification task (see the sketch after this list).
arXiv Detail & Related papers (2020-10-20T05:54:31Z) - Graph Convolutional Networks using Heat Kernel for Semi-supervised Learning [47.18608594687675]
The key to graph-based semi-supervised learning is capturing the smoothness of labels or features over nodes, as induced by the graph structure.
We propose GraphHeat, leveraging heat kernel to enhance low-frequency filters and enforce smoothness in the signal variation on the graph.
GraphHeat achieves state-of-the-art results in the task of graph-based semi-supervised classification across three benchmark datasets.
arXiv Detail & Related papers (2020-07-27T11:53:52Z) - GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it using a maximum entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z) - Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
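To make the line-graph view from the "Line Graph Neural Networks for Link Prediction" entry above concrete, here is a minimal sketch using networkx (an assumption; it is not the paper's model). It shows how a candidate link becomes a node of the line graph, so any node-level scorer can then be applied; the degree-based score at the end is only a stand-in.

```python
# Minimal illustration of link prediction via the line graph; the degree-based
# score is a placeholder, not the GNN used in the paper.
import networkx as nx

g = nx.karate_club_graph()                # toy original graph

candidate = (0, 9)                        # candidate link to score
g_aug = g.copy()
g_aug.add_edge(*candidate)                # add it so it appears as a line-graph node

lg = nx.line_graph(g_aug)                 # each node of lg is an edge of g_aug

# Find the line-graph node corresponding to the candidate edge.
cand_node = next(n for n in lg.nodes() if set(n) == set(candidate))

# Link prediction in g is now node scoring/classification in lg; as a trivial
# stand-in, score the candidate by its line-graph degree (the number of
# existing edges that share an endpoint with it).
print(lg.degree[cand_node])
```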
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.