Embedding Compression with Hashing for Efficient Representation Learning in Large-Scale Graph
- URL: http://arxiv.org/abs/2208.05648v1
- Date: Thu, 11 Aug 2022 05:43:39 GMT
- Title: Embedding Compression with Hashing for Efficient Representation Learning in Large-Scale Graph
- Authors: Chin-Chia Michael Yeh, Mengting Gu, Yan Zheng, Huiyuan Chen, Javid
Ebrahimi, Zhongfang Zhuang, Junpeng Wang, Liang Wang, and Wei Zhang
- Abstract summary: Graph neural networks (GNNs) are deep learning models designed specifically for graph data.
We develop a node embedding compression method where each node is compactly represented with a bit vector instead of a floating-point vector.
We show that the proposed node embedding compression method achieves superior performance compared to the alternatives.
- Score: 21.564894767364397
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) are deep learning models designed specifically
for graph data, and they typically rely on node features as the input to the
first layer. When applying such networks to a graph without node features, one
can either extract simple graph-based node features (e.g., node degree) or
learn the input node representations (i.e., embeddings) while training the
network. While the latter approach, which trains node embeddings, is more
likely to lead to better performance, the number of parameters associated with
the embeddings grows linearly with the number of nodes. It is therefore
impractical to train the input node embeddings together with GNNs within
graphics processing unit (GPU) memory in an end-to-end fashion when dealing
with industrial-scale graph data. Inspired by the embedding compression methods
developed for natural language processing (NLP) tasks, we develop a node
embedding compression method where each node is compactly represented with a
bit vector instead of a floating-point vector. The parameters utilized in the
compression method can be trained together with GNNs. We show that the proposed
node embedding compression method achieves superior performance compared to the
alternatives.
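The full text specifies the exact hashing scheme; as a rough, non-authoritative
illustration of the general idea (a per-node bit code plus a small decoder
shared by all nodes, trained end-to-end with the GNN), the PyTorch sketch below
uses a straight-through sign binarization. All class and parameter names are
invented for illustration.

```python
import torch
import torch.nn as nn

class BitVectorEmbedding(nn.Module):
    """Sketch: each node is stored as a learned code that is binarized to a
    {-1, +1} bit vector; a small decoder shared by all nodes maps the bits
    back to a float embedding."""

    def __init__(self, num_nodes: int, n_bits: int = 64, dim: int = 128):
        super().__init__()
        self.codes = nn.Parameter(0.1 * torch.randn(num_nodes, n_bits))
        self.decoder = nn.Linear(n_bits, dim)

    def forward(self, node_ids: torch.Tensor) -> torch.Tensor:
        c = self.codes[node_ids]
        # Straight-through estimator: the forward pass uses the binarized
        # bits, the backward pass treats sign() as the identity.
        bits = c + (torch.sign(c) - c).detach()
        return self.decoder(bits)

# Hypothetical usage: decoded embeddings feed a GNN's first layer.
emb = BitVectorEmbedding(num_nodes=1_000_000)
x = emb(torch.arange(1024))  # (1024, 128) features for a minibatch
```

After training, only the bit vectors and the small decoder need to be kept, so
per-node storage drops from dim 32-bit floats to n_bits bits.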
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that separate weight matrices are learned for the nodes in each degree group (a minimal sketch appears after this list).
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- GraphRARE: Reinforcement Learning Enhanced Graph Neural Network with Relative Entropy [21.553180564868306]
GraphRARE is a framework built upon node relative entropy and deep reinforcement learning.
An innovative node relative entropy is used to measure mutual information between node pairs.
A deep reinforcement learning-based algorithm is developed to optimize the graph topology.
arXiv Detail & Related papers (2023-12-15T11:30:18Z)
- Content Augmented Graph Neural Networks [0.824969449883056]
We propose augmenting nodes' embeddings at higher GNN layers with embeddings generated from their content.
We suggest methods for generating the content embeddings, such as using an auto-encoder or building a content graph.
arXiv Detail & Related papers (2023-11-21T17:30:57Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator (see the sketch after this list).
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
The models that perform best on such features in standard supervised learning settings with IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
arXiv Detail & Related papers (2022-06-16T22:46:33Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity at modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Inferential SIR-GN: Scalable Graph Representation Learning [0.4699313647907615]
Graph representation learning methods generate numerical vector representations for the nodes in a network.
In this work, we propose Inferential SIR-GN, a model which is pre-trained on random graphs, then computes node representations rapidly.
We demonstrate that the model is able to capture a node's structural role information, and show excellent performance at node and graph classification tasks on unseen networks.
arXiv Detail & Related papers (2021-11-08T20:56:37Z)
- Position-based Hash Embeddings For Scaling Graph Neural Networks [8.87527266373087]
Graph Neural Networks (GNNs) compute node representations by taking into account the topology of the node's ego-network and the features of the ego-network's nodes.
When the nodes do not have high-quality features, GNNs learn an embedding layer to compute node embeddings and use them as input features.
To reduce the memory associated with this embedding layer, hashing-based approaches, commonly used in applications like NLP and recommender systems, can potentially be used.
We present approaches that take advantage of the nodes' position in the graph to dramatically reduce the memory required (a hash-embedding sketch follows this list).
arXiv Detail & Related papers (2021-08-31T22:42:25Z)
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
- Hierarchical Representation Learning in Graph Neural Networks with Node Decimation Pooling [31.812988573924674]
In graph neural networks (GNNs), pooling operators compute local summaries of input graphs to capture their global properties.
We propose the Node Decimation Pooling (NDP), a pooling operator for GNNs that generates coarser graphs while preserving the overall graph topology.
NDP is more efficient than state-of-the-art graph pooling operators while reaching competitive performance on a wide variety of graph classification tasks.
arXiv Detail & Related papers (2019-10-24T21:42:12Z)
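For the degree-based stratification entry above, a minimal sketch of the core
idea, one weight matrix per degree bucket, might look as follows; the bucket
boundaries and all names are assumptions, not the paper's choices.

```python
import torch
import torch.nn as nn

class DegreeStratifiedLinear(nn.Module):
    """Sketch: bucket nodes by degree and learn a separate weight matrix
    for each bucket."""

    def __init__(self, in_dim: int, out_dim: int, boundaries=(2, 8, 32)):
        super().__init__()
        self.register_buffer("boundaries", torch.tensor(boundaries))
        self.linears = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(len(boundaries) + 1)
        )

    def forward(self, x: torch.Tensor, degrees: torch.Tensor) -> torch.Tensor:
        groups = torch.bucketize(degrees, self.boundaries)
        out = x.new_zeros(x.size(0), self.linears[0].out_features)
        for g, lin in enumerate(self.linears):
            mask = groups == g  # nodes whose degree falls in bucket g
            if mask.any():
                out[mask] = lin(x[mask])
        return out
```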
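For the NodeFormer entry, the differentiable edge sampling at its core can be
illustrated with PyTorch's stock Gumbel-Softmax; the actual method kernelizes
this step so that the dense O(n^2) score matrix built here is never
materialized.

```python
import torch
import torch.nn.functional as F

def sample_edges(scores: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Sketch: draw, for each node, a one-hot neighbor choice from its row
    of pairwise logits; hard=True keeps the forward pass discrete while
    gradients flow via the straight-through trick."""
    return F.gumbel_softmax(scores, tau=tau, hard=True, dim=-1)

adj_sample = sample_edges(torch.randn(16, 16))  # (16, 16) sampled adjacency
```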
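For the position-based hash embeddings entry, a generic hashing-based embedding
layer (ignoring the paper's position-aware refinements) could be sketched as
below; the table size and multiplicative hash are arbitrary choices.

```python
import torch
import torch.nn as nn

class HashedEmbedding(nn.Module):
    """Sketch: several cheap hashes map each node id into a small shared
    table and the retrieved rows are summed, so the parameter count is
    decoupled from the number of nodes."""

    def __init__(self, table_size: int = 100_000, dim: int = 64,
                 n_hashes: int = 2):
        super().__init__()
        self.table_size = table_size
        self.tables = nn.ModuleList(
            nn.Embedding(table_size, dim) for _ in range(n_hashes)
        )
        # Fixed random odd multipliers act as cheap multiplicative hashes.
        g = torch.Generator().manual_seed(0)
        mults = torch.randint(1, 2**31 - 1, (n_hashes,), generator=g) | 1
        self.register_buffer("mults", mults)

    def forward(self, node_ids: torch.Tensor) -> torch.Tensor:
        out = 0
        for table, m in zip(self.tables, self.mults):
            out = out + table((node_ids * m) % self.table_size)
        return out
```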