Force2Vec: Parallel force-directed graph embedding
- URL: http://arxiv.org/abs/2009.10035v1
- Date: Thu, 17 Sep 2020 03:53:25 GMT
- Title: Force2Vec: Parallel force-directed graph embedding
- Authors: Md. Khaledur Rahman, Majedul Haque Sujon, Ariful Azad
- Abstract summary: A graph embedding algorithm embeds a graph into a low-dimensional space such that the embedding preserves the inherent properties of the graph.
We develop Force2Vec that uses force-directed graph layout models in a graph embedding setting with an aim to excel in both machine learning (ML) and visualization tasks.
- Score: 3.577310844634503
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A graph embedding algorithm embeds a graph into a low-dimensional space such
that the embedding preserves the inherent properties of the graph. While graph
embedding is fundamentally related to graph visualization, prior work did not
exploit this connection explicitly. We develop Force2Vec that uses
force-directed graph layout models in a graph embedding setting with an aim to
excel in both machine learning (ML) and visualization tasks. We make Force2Vec
highly parallel by mapping its core computations to linear algebra and
utilizing multiple levels of parallelism available in modern processors. The
resultant algorithm is an order of magnitude faster than existing methods (43x
faster than DeepWalk, on average) and can generate embeddings from graphs with
billions of edges in a few hours. In comparison to existing methods, Force2Vec
is better in graph visualization and performs comparably or better in ML tasks
such as link prediction, node classification, and clustering. Source code is
available at https://github.com/HipGraph/Force2Vec.
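The abstract's central claim is that force-directed layout updates can be phrased as linear algebra, which is what makes the method parallelizable. Below is a minimal numpy sketch of that idea, not the paper's actual kernels: attraction becomes a sparse-times-dense (SpMM) product with the adjacency matrix, and a crude linear repulsion term stands in for the pairwise repulsive forces real layouts use. The function name, force model, and learning rate are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix

def force_step(A: csr_matrix, Z: np.ndarray, lr: float = 0.05) -> np.ndarray:
    """One force-directed update expressed as matrix algebra (sketch).

    Attraction: each node is pulled toward its neighbors, computed as an
    SpMM, A @ Z, minus degree-weighted positions.
    Repulsion: a crude linear push away from the centroid of all nodes.
    """
    attract = A @ Z - np.asarray(A.sum(axis=1)) * Z  # sum_{j~i} (z_j - z_i)
    repulse = Z - Z.mean(axis=0)                     # (1/n) sum_j (z_i - z_j)
    return Z + lr * (attract + repulse)

# Toy usage: a 4-node path graph embedded in 2-D.
rows, cols = [0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]
A = csr_matrix((np.ones(6), (rows, cols)), shape=(4, 4))
Z = np.random.default_rng(0).standard_normal((4, 2))
for _ in range(50):
    Z = force_step(A, Z)
```

Phrasing the update this way is what lets the implementation lean on SpMM-style kernels and the SIMD/multithreading levels of modern CPUs.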
Related papers
- Learning on Large Graphs using Intersecting Communities [13.053266613831447]
Message-passing neural networks (MPNNs) iteratively update each node's representation in an input graph by aggregating messages from the node's neighbors.
MPNNs can quickly become prohibitively expensive on large graphs unless those graphs are very sparse.
We propose approximating the input graph as an intersecting community graph (ICG) -- a combination of intersecting cliques.
arXiv Detail & Related papers (2024-05-31T09:26:26Z)
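To make the ICG summary concrete, here is a small numpy sketch under an assumption of mine (not the paper's notation): the adjacency matrix is approximated as A ≈ U diag(c) Uᵀ, where U is a 0/1 node-to-community membership matrix and communities may intersect. Aggregation then never materializes A:

```python
import numpy as np

def icg_aggregate(U: np.ndarray, c: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Approximate A @ X when A ~= U @ diag(c) @ U.T.

    U: (n, K) 0/1 membership matrix (column k marks community k's nodes).
    c: (K,) per-community weights.
    X: (n, d) node features.
    Cost is O(n*K*d) regardless of how dense the implied graph is.
    """
    pooled = U.T @ X                   # (K, d): sum node features per community
    return U @ (c[:, None] * pooled)   # rescale and scatter back to nodes
```

Fitting U and c to a given graph is the paper's actual contribution and is not shown here.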
- Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs [60.71360240206726]
Large language models (LLMs) suffer from hallucinations, especially on knowledge-intensive tasks.
Existing works propose to augment LLMs with individual text units retrieved from external knowledge corpora.
We propose a framework called Graph Chain-of-Thought (Graph-CoT) to augment LLMs with graphs by encouraging them to reason on the graph iteratively.
arXiv Detail & Related papers (2024-04-10T15:41:53Z)
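A minimal sketch of what such an iterative reason-on-graph loop can look like; the prompt format, stop protocol, and function names are illustrative assumptions, not Graph-CoT's exact agent design:

```python
from typing import Callable

def graph_cot(question: str, graph: dict, llm: Callable[[str], str],
              start: str, max_steps: int = 5) -> str:
    """Iterative reason-on-graph loop (illustrative sketch).

    graph: node id -> {"text": node description, "nbrs": [neighbor ids]}
    llm:   any text-in/text-out model; stubbed here as a plain callable.
    """
    node, trace = start, []
    for _ in range(max_steps):
        prompt = (f"Question: {question}\n"
                  f"Current node: {graph[node]['text']}\n"
                  f"Neighbors: {graph[node]['nbrs']}\n"
                  f"Visited so far: {trace}\n"
                  "Reply 'ANSWER: <text>' to finish or 'VISIT: <id>' to move.")
        reply = llm(prompt).strip()
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()  # model decided it knows enough
        nxt = reply[len("VISIT:"):].strip()
        trace.append(node)
        node = nxt if nxt in graph else node       # ignore invalid moves
    return "no answer within the step budget"
```

The point of the loop is that the graph is traversed on demand instead of being retrieved as flat text units.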
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly simple approach to textual graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained language model (LM) on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
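The two-step recipe is easy to picture in code. The sketch below covers the second step with Hugging Face transformers, mean-pooling the last hidden states into node features; the checkpoint name is a placeholder, and the supervised PEFT stage that SimTeG runs first is only noted in a comment:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint; SimTeG would first run supervised PEFT (e.g. LoRA)
# on the downstream task, then embed with the fine-tuned weights.
name = "distilbert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
lm = AutoModel.from_pretrained(name).eval()

@torch.no_grad()
def node_embeddings(texts: list[str]) -> torch.Tensor:
    """Mean-pool the LM's last hidden states into one vector per node text."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = lm(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)  # zero out padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)

X = node_embeddings(["node text A", "node text B"])  # feed X to any GNN
```

The resulting matrix X simply replaces the usual bag-of-words node features of a downstream GNN.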
- CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph Similarity Learning [65.1042892570989]
We propose a contrastive graph matching network (CGMN) for self-supervised graph similarity learning.
We employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning.
We transform node representations into graph-level representations via pooling operations for graph similarity computation.
arXiv Detail & Related papers (2022-05-30T13:20:26Z)
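The pooling-then-compare step from the summary reduces to a few lines. This generic sketch assumes the cross-view and cross-graph interaction modules have already produced node representations H1 and H2 (names are mine), and uses mean pooling with cosine similarity as one common instantiation:

```python
import numpy as np

def graph_similarity(H1: np.ndarray, H2: np.ndarray) -> float:
    """Pool node representations to graph level, then compare.

    H1, H2: (n_i, d) node-representation matrices for the two graphs,
    assumed to come out of the (not reproduced) interaction modules.
    """
    g1, g2 = H1.mean(axis=0), H2.mean(axis=0)  # (d,) graph-level vectors
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2) + 1e-12))
```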
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes GraphCoCo, an effective graph complementary contrastive learning approach.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Scaling R-GCN Training with Graph Summarization [71.06855946732296]
Training of Relational Graph Convolutional Networks (R-GCNs) does not scale well with the size of the graph.
In this work, we experiment with the use of graph summarization techniques to compress the graph.
We obtain reasonable results on the AIFB, MUTAG and AM datasets.
arXiv Detail & Related papers (2022-03-05T00:28:43Z)
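One simple summarization scheme of the kind the paper experiments with merges nodes whose one-hop relational neighborhoods are identical; the sketch below is illustrative, not the paper's exact compression algorithm:

```python
from collections import defaultdict

def summarize(triples):
    """Merge nodes with identical one-hop relational neighborhoods.

    triples: iterable of (subject, relation, object) strings.
    Returns the compressed triple set over supernode ids. Stronger
    schemes iterate this grouping (bisimulation) until it stabilizes.
    """
    sig = defaultdict(set)                      # node -> signature of its edges
    for s, r, o in triples:
        sig[s].add(("out", r, o))
        sig[o].add(("in", r, s))
    group = {}                                  # signature -> supernode id
    for node_sig in sig.values():
        group.setdefault(frozenset(node_sig), len(group))
    to_super = {n: group[frozenset(s)] for n, s in sig.items()}
    return sorted({(to_super[s], r, to_super[o]) for s, r, o in triples})
```

Training the R-GCN on the smaller summary graph is what recovers scalability.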
- Boosting Graph Embedding on a Single GPU [3.093890460224435]
We present GOSH, a GPU-based tool for embedding large-scale graphs with minimal hardware requirements.
It employs a novel graph coarsening algorithm to enhance the impact of updates and minimize the work for embedding.
It also incorporates a decomposition schema that enables arbitrarily large graphs to be embedded on a single GPU.
arXiv Detail & Related papers (2021-10-19T15:25:04Z)
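A generic multilevel step shows the flavor of coarsening-based embedding: collapse a greedy edge matching into supernodes, embed the smaller graph, then prolong the result back to the finer level. GOSH's own collapse criteria and GPU kernels differ; the names below are illustrative:

```python
import numpy as np

def coarsen(edges, n):
    """One level of greedy edge-collapsing coarsening.

    Pairs endpoints of edges whose ends are both unmatched, maps each
    (pair of) node(s) to a coarse id, and rebuilds the edge list.
    Returns (coarse_edges, n_coarse, cid) with cid[v] = v's supernode.
    """
    match = -np.ones(n, dtype=int)
    for u, v in edges:                          # greedy matching
        if u != v and match[u] < 0 and match[v] < 0:
            match[u], match[v] = v, u
    cid, k = -np.ones(n, dtype=int), 0
    for u in range(n):
        if cid[u] < 0:
            cid[u] = k
            if match[u] >= 0:
                cid[match[u]] = k
            k += 1
    coarse_edges = sorted({(min(cid[u], cid[v]), max(cid[u], cid[v]))
                           for u, v in edges if cid[u] != cid[v]})
    return coarse_edges, k, cid

def prolong(Zc, cid):
    """Seed fine-level embeddings from coarse ones (each node inherits
    its supernode's vector); a refinement pass would then follow."""
    return Zc[cid].copy()
```

Because most training happens on the coarse levels, far fewer updates are needed at full graph size.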
- FeatGraph: A Flexible and Efficient Backend for Graph Neural Network Systems [23.258185277825888]
FeatGraph provides a flexible programming interface to express diverse GNN models.
FeatGraph speeds up end-to-end GNN training and inference by up to 32x on CPU and 7x on GPU.
arXiv Detail & Related papers (2020-08-26T03:17:05Z)
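The core abstraction behind such backends is a generalized SpMM template whose per-edge message and per-node reduction are user-supplied; FeatGraph generates optimized kernels for this pattern (via TVM), while the sketch below is only a plain reference implementation with illustrative names:

```python
import numpy as np
from scipy.sparse import csr_matrix

def generalized_spmm(A: csr_matrix, X: np.ndarray, msg, reduce=np.add):
    """Reference implementation of a generalized SpMM kernel template.

    msg(x_src, w) builds a per-edge message; reduce folds messages into
    the destination row. FeatGraph's contribution is generating fast,
    fused versions of exactly this pattern, not this Python loop.
    """
    out = np.zeros((A.shape[0], X.shape[1]))
    indptr, indices, data = A.indptr, A.indices, A.data
    for dst in range(A.shape[0]):
        for e in range(indptr[dst], indptr[dst + 1]):
            src, w = indices[e], data[e]
            out[dst] = reduce(out[dst], msg(X[src], w))
    return out

# Ordinary weighted-sum aggregation is one instantiation:
# Y = generalized_spmm(A, X, msg=lambda x, w: w * x)
```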
- Graph topology inference benchmarks for machine learning [16.857405938139525]
We introduce several benchmarks specifically designed to reveal the relative merits and limitations of graph inference methods.
We also contrast some of the most prominent techniques in the literature.
arXiv Detail & Related papers (2020-07-16T09:40:32Z)
- Wasserstein Embedding for Graph Learning [33.90471037116372]
Wasserstein Embedding for Graph Learning (WEGL) is a framework for embedding entire graphs in a vector space.
We leverage new insights on defining similarity between graphs as a function of the similarity between their node embedding distributions.
We evaluate our new graph embedding approach on various benchmark graph-property prediction tasks.
arXiv Detail & Related papers (2020-06-16T18:23:00Z)
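A simplified sketch of a linear-optimal-transport embedding conveys the idea: match each graph's node-embedding cloud to a shared reference cloud and use the displacement field as a fixed-length vector. Equal-size clouds and a Hungarian matching are simplifying assumptions here; WEGL's construction is more general:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def lot_embedding(ref: np.ndarray, nodes: np.ndarray) -> np.ndarray:
    """Linear-optimal-transport embedding of one graph's node embeddings.

    ref:   (m, d) reference point cloud shared across graphs.
    nodes: (m, d) node embeddings of one graph, subsampled/padded to m.
    Matching reference points to node points and flattening the
    displacements yields a fixed-length vector, so Euclidean distance
    between two graphs' vectors approximates a Wasserstein distance.
    """
    cost = ((ref[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)  # optimal 1-to-1 transport plan
    return (nodes[cols] - ref[rows]).ravel()
```

Once every graph is a fixed-length vector, any standard classifier or regressor handles the graph-property prediction tasks.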
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods are proposed: unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE).
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
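For context, the plain graph-autoencoder template that adaptive variants like BAGE/VBAGE extend looks as follows; the adaptive-graph-learning step that also updates the adjacency matrix is their contribution and is not reproduced here:

```python
import numpy as np

def gae_forward(A: np.ndarray, X: np.ndarray, W: np.ndarray):
    """Plain graph autoencoder: one propagation layer + inner-product decoder.

    Encoder: Z = tanh(D^-1/2 (A + I) D^-1/2 X W)
    Decoder: reconstructed adjacency = sigmoid(Z Z^T); training minimizes
    reconstruction loss w.r.t. W (and, in the adaptive variants, also
    refines A itself).
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))    # symmetric normalization
    Z = np.tanh(A_norm @ X @ W)                 # node embeddings
    A_rec = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))    # edge probabilities
    return Z, A_rec
```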
This list is automatically generated from the titles and abstracts of the papers on this site.