Position-based Hash Embeddings For Scaling Graph Neural Networks
- URL: http://arxiv.org/abs/2109.00101v1
- Date: Tue, 31 Aug 2021 22:42:25 GMT
- Title: Position-based Hash Embeddings For Scaling Graph Neural Networks
- Authors: Maria Kalantzi, George Karypis
- Abstract summary: Graph Neural Networks (GNNs) compute node representations by taking into account the topology of the node's ego-network and the features of the ego-network's nodes.
When the nodes do not have high-quality features, GNNs learn an embedding layer to compute node embeddings and use them as input features.
To reduce the memory associated with this embedding layer, hashing-based approaches, commonly used in applications like NLP and recommender systems, can potentially be used.
We present approaches that take advantage of the nodes' position in the graph to dramatically reduce the memory required.
- Score: 8.87527266373087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) bring the power of deep representation learning
to graph and relational data and achieve state-of-the-art performance in many
applications. GNNs compute node representations by taking into account the
topology of the node's ego-network and the features of the ego-network's nodes.
When the nodes do not have high-quality features, GNNs learn an embedding layer
to compute node embeddings and use them as input features. However, the size of
the embedding layer grows linearly with the number of nodes, so it does not scale
to graphs with hundreds of millions of nodes. To reduce the memory associated with this
embedding layer, hashing-based approaches, commonly used in applications like
NLP and recommender systems, can potentially be used. However, a direct
application of these ideas fails to exploit the fact that in many real-world
graphs, nodes that are topologically close will tend to be related to each
other (homophily) and as such their representations will be similar.
In this work, we present approaches that take advantage of the nodes'
position in the graph to dramatically reduce the memory required, with minimal,
if any, degradation in the quality of the resulting GNN model. Our approaches
decompose a node's embedding into two components: a position-specific component
and a node-specific component. The position-specific component models homophily
and the node-specific component models the node-to-node variation. Extensive
experiments using different datasets and GNN models show that in nearly all
cases, our methods are able to reduce the memory requirements by 86% to 97%
while achieving better classification accuracy than other competing approaches,
including the full embeddings.
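
For concreteness, here is a minimal PyTorch sketch of the hashing-trick baseline the abstract alludes to: node IDs are hashed into a fixed number of buckets, so the table size is independent of the number of nodes. The class name, hash constant, and parameters are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn as nn

class HashedEmbedding(nn.Module):
    """Hashing-trick embedding: memory is O(num_buckets), not O(num_nodes).
    Note the limitation the paper targets: buckets are assigned by hashing
    alone, so topologically close (homophilous) nodes share nothing."""

    def __init__(self, num_buckets: int, dim: int, seed: int = 17):
        super().__init__()
        self.num_buckets = num_buckets
        self.seed = seed
        self.table = nn.Embedding(num_buckets, dim)

    def forward(self, node_ids: torch.Tensor) -> torch.Tensor:
        # Cheap multiplicative hash of the integer node IDs (illustrative choice).
        buckets = (node_ids * 2654435761 + self.seed) % self.num_buckets
        return self.table(buckets)
```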
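And a sketch of the decomposition the abstract describes: a node's embedding is the sum of a position-specific component, shared by all nodes in the same region of the graph, and a hashed node-specific component. How positions are defined is an assumption here (a precomputed partition ID, e.g. from METIS); the paper's actual schemes may differ.

```python
import torch
import torch.nn as nn

class PositionNodeEmbedding(nn.Module):
    """Decomposed embedding (sketch): the position part models homophily,
    the node part models node-to-node variation. Memory is
    O(num_parts + num_buckets) rather than O(num_nodes)."""

    def __init__(self, num_parts: int, num_buckets: int, dim: int, seed: int = 17):
        super().__init__()
        self.position = nn.Embedding(num_parts, dim)  # one row per partition
        self.node = nn.Embedding(num_buckets, dim)    # small shared hash table
        self.num_buckets = num_buckets
        self.seed = seed

    def forward(self, node_ids: torch.Tensor, part_ids: torch.Tensor) -> torch.Tensor:
        # part_ids[i] is the precomputed partition of node_ids[i].
        buckets = (node_ids * 2654435761 + self.seed) % self.num_buckets
        return self.position(part_ids) + self.node(buckets)

# Usage: two nodes from the same partition share the position component,
# so their input features to the GNN start out similar (a homophily prior).
emb = PositionNodeEmbedding(num_parts=1024, num_buckets=100_000, dim=128)
x = emb(torch.tensor([5, 9_999_999]), torch.tensor([3, 3]))  # shape (2, 128)
```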
Related papers
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned separately for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- Content Augmented Graph Neural Networks [0.824969449883056]
We propose augmenting nodes' embeddings by embeddings generated from their content, at higher GNN layers.
We suggest methods such as using an auto-encoder or building a content graph, to generate content embeddings.
arXiv Detail & Related papers (2023-11-21T17:30:57Z)
- Enhanced Graph Neural Networks with Ego-Centric Spectral Subgraph Embeddings Augmentation [11.841882902141696]
We present a novel approach denoted as Ego-centric Spectral subGraph Embedding Augmentation (ESGEA).
ESGEA aims to enhance and design node features, particularly in scenarios where information is lacking.
We evaluate the proposed method in a social network graph classification task where node attributes are unavailable.
arXiv Detail & Related papers (2023-10-10T14:57:29Z)
- Collaborative Graph Neural Networks for Attributed Network Embedding [63.39495932900291]
Graph neural networks (GNNs) have shown prominent performance on attributed network embedding.
We propose COllaborative graph Neural Networks--CONN, a tailored GNN architecture for network embedding.
arXiv Detail & Related papers (2023-07-22T04:52:27Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- LSGNN: Towards General Graph Neural Network in Node Classification by Local Similarity [59.41119013018377]
We propose to use the local similarity (LocalSim) to learn node-level weighted fusion, which can also serve as a plug-and-play module.
For better fusion, we propose a novel and efficient Initial Residual Difference Connection (IRDC) to extract more informative multi-hop information.
Our proposed method, namely Local Similarity Graph Neural Network (LSGNN), can offer comparable or superior state-of-the-art performance on both homophilic and heterophilic graphs.
arXiv Detail & Related papers (2023-05-07T09:06:11Z)
- Embedding Compression with Hashing for Efficient Representation Learning in Large-Scale Graph [21.564894767364397]
Graph neural networks (GNNs) are deep learning models designed specifically for graph data.
We develop a node embedding compression method where each node is compactly represented with a bit vector instead of a floating-point vector.
We show that the proposed node embedding compression method achieves superior performance compared to the alternatives (a generic sketch of the bit-vector idea follows this entry).
arXiv Detail & Related papers (2022-08-11T05:43:39Z)
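
As a generic illustration of bit-vector node embeddings (a common construction, not necessarily the exact method of the paper above), one can binarize a real-valued proxy table with sign() and train it through a straight-through estimator:

```python
import torch
import torch.nn as nn

class BinaryEmbedding(nn.Module):
    """Generic sketch: each node is represented by a {-1, +1} bit vector.
    sign() is used in the forward pass; the straight-through estimator
    passes gradients to the real-valued proxy weights during training."""

    def __init__(self, num_nodes: int, dim: int):
        super().__init__()
        self.latent = nn.Embedding(num_nodes, dim)  # real-valued proxy weights

    def forward(self, node_ids: torch.Tensor) -> torch.Tensor:
        z = self.latent(node_ids)
        b = torch.sign(z)              # binarized; 0 only where z is exactly 0
        return z + (b - z).detach()    # forward: value of b; backward: grad of z
```

The memory savings come at inference time, when the binarized vectors can be packed to one bit per dimension instead of a 32-bit float.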
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
The models that perform best on such features in standard supervised learning settings with IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
arXiv Detail & Related papers (2022-06-16T22:46:33Z)
- NDGGNET-A Node Independent Gate based Graph Neural Networks [6.155450481110693]
For nodes with sparse connectivity, it is difficult to obtain enough information through a single GNN layer.
In this work, we define a novel framework that allows a standard GNN model to accommodate more layers.
Experimental results show that our proposed model can effectively increase the model depth and perform well on several datasets.
arXiv Detail & Related papers (2022-05-11T08:51:04Z)
- Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether to treat the edges as-is or simply make them undirected greatly affects the performance of GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
arXiv Detail & Related papers (2021-11-19T08:54:21Z)
- Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN).
arXiv Detail & Related papers (2020-06-12T07:18:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.