Hyperdimensional Computing for Node Classification and Link Prediction
- URL: http://arxiv.org/abs/2402.17073v2
- Date: Sat, 20 Jul 2024 03:46:13 GMT
- Title: Hyperdimensional Computing for Node Classification and Link Prediction
- Authors: Abhishek Dalvi, Vasant Honavar
- Abstract summary: We introduce a novel method for transductive learning on graphs using hyperdimensional representations.
The proposed approach encodes data samples using random projections into a very high-dimensional space.
It obviates the need for expensive iterative training of the sort required by deep learning methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel method for transductive learning on graphs using hyperdimensional representations. The proposed approach encodes data samples using random projections into a very high-dimensional space (hyperdimensional or HD space for short). It obviates the need for expensive iterative training of the sort required by deep learning methods. Specifically, we propose a Hyperdimensional Graph Learning (HDGL) algorithm. HDGL leverages the injectivity property of node representations of a family of Graph Neural Networks (GNNs) to map node features to the HD space and then uses HD operators such as bundling and binding to aggregate information from the local neighborhood of each node. The resulting latent node representations support both node classification and link prediction tasks, unlike typical deep learning methods, which often require separate models for these tasks. We report results of experiments using widely used benchmark datasets, which demonstrate that, on the node classification task, HDGL is competitive with SOTA GNN methods with respect to accuracy, at substantially reduced computational cost. Furthermore, HDGL is well-suited for class-incremental learning, where the model has to learn to effectively discriminate between a growing number of classes. Our experiments also show that the HD representation constructed by HDGL supports link prediction at accuracies comparable to those of DeepWalk and related methods, although it falls short of SOTA Graph Neural Network (GNN) methods that rely on computationally expensive iterative training. We conclude that HDGL offers a computationally efficient alternative to graph neural networks for node classification, especially in settings that call for class-incremental learning or in applications that demand high-accuracy models at significantly lower computational cost and learning time than possible with the SOTA GNNs.
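As a rough illustration of the encoding step described in the abstract (a minimal sketch, not the authors' implementation: the dimensionality, the bipolar hypervectors, the sign-based bundling, and the single role vector below are all assumptions), the following Python snippet random-projects node features into HD space and aggregates each node's 1-hop neighborhood with bundling and binding:

```python
import numpy as np

# Hedged sketch of HD-style node encoding. D, the bipolar {-1, +1}
# hypervectors, and the sign-thresholded bundling are assumptions,
# not details taken from the paper.
rng = np.random.default_rng(0)
D = 10_000  # dimensionality of the HD space

def encode_nodes(X):
    """Random-project d-dimensional features into HD space, then binarize."""
    P = rng.standard_normal((X.shape[1], D))  # fixed random projection
    return np.sign(X @ P)                     # bipolar node hypervectors

def aggregate(H, adj, role):
    """Bundle each node with its neighbors; bind neighbors to a role
    vector so self and neighborhood information remain separable."""
    out = np.empty_like(H)
    for v, nbrs in enumerate(adj):
        bundle = H[v].copy()
        for u in nbrs:
            bundle = bundle + H[u] * role  # binding = elementwise multiply
        out[v] = np.sign(bundle)           # bundling = add, then threshold
    return out

X = rng.standard_normal((4, 16))        # toy features: 4 nodes, 16 dims
adj = [[1], [0, 2], [1, 3], [2]]        # toy chain graph
role = np.sign(rng.standard_normal(D))  # random role hypervector
H = aggregate(encode_nodes(X), adj, role)
```

Node classification then reduces to nearest-centroid search in HD space: bundle the hypervectors of labeled nodes per class and assign each test node to the most cosine-similar class prototype, with no iterative gradient training.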
Related papers
- Sparse Decomposition of Graph Neural Networks [20.768412002413843]
We propose an approach to reduce the number of nodes that are included during aggregation.
We achieve this through a sparse decomposition, learning to approximate node representations using a weighted sum of linearly transformed features.
We demonstrate via extensive experiments that our method outperforms other baselines designed for inference speedup.
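A toy sketch of that idea (the paper's actual decomposition and how the sparse weights are learned are not reproduced here; the randomly sampled sparse weight matrix and all shapes are illustrative assumptions):

```python
import numpy as np

# Hypothetical shapes; in the paper the sparse mixing weights S and the
# transform W are learned, not sampled at random as done here.
rng = np.random.default_rng(1)
n, d, k = 100, 32, 16
X = rng.standard_normal((n, d))                       # raw node features
W = rng.standard_normal((d, k))                       # linear transform
S = rng.random((n, n)) * (rng.random((n, n)) < 0.05)  # sparse mixing weights

H_approx = S @ (X @ W)  # each row: a weighted sum of linearly transformed
                        # features from only a few other nodes
```

Because each row of S has few nonzeros, computing a node's representation touches only a handful of feature rows, which is where the inference speedup comes from.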
arXiv Detail & Related papers (2024-10-25T17:52:16Z)
- CiliaGraph: Enabling Expression-enhanced Hyper-Dimensional Computation in Ultra-Lightweight and One-Shot Graph Classification on Edge [1.8726646412385333]
CiliaGraph is an expressive yet ultra-lightweight HDC model for graph classification.
CiliaGraph reduces memory usage and accelerates training by an average of 292x.
arXiv Detail & Related papers (2024-05-29T12:22:59Z)
- Graph Convolutional Network For Semi-supervised Node Classification With Subgraph Sketching [0.27624021966289597]
We propose the Graph-Learning Dual Graph Convolutional Neural Network (GLDGCN).
We apply GLDGCN to the semi-supervised node classification task.
Compared with the baseline methods, we achieve higher classification accuracy on three citation networks.
arXiv Detail & Related papers (2024-04-19T09:08:12Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method on various tasks, including node classification on graphs.
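A loose illustration of differentiable all-pair message passing via Gumbel-Softmax (NodeFormer's kernelization, which reduces the quadratic all-pair cost, is not reproduced here; the temperature and shapes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def gumbel_softmax(logits, tau=0.5):
    """Differentiable relaxation of sampling one neighbor per row."""
    g = -np.log(-np.log(rng.random(logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))   # stable softmax
    return y / y.sum(axis=-1, keepdims=True)

n, d = 8, 4
H = rng.standard_normal((n, d))  # node signals
logits = H @ H.T                 # all-pair affinities
A_soft = gumbel_softmax(logits)  # soft, differentiable edge weights
H_next = A_soft @ H              # message passing over the sampled structure
```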
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- GraphHD: Efficient graph classification using hyperdimensional computing [58.720142291102135]
We present a baseline approach for graph classification with HDC.
We evaluate GraphHD on real-world graph classification problems.
Our results show that the proposed model achieves accuracy comparable to state-of-the-art Graph Neural Networks (GNNs).
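In compressed form, the HDC recipe for whole-graph encoding looks roughly like the sketch below (GraphHD's actual construction assigns node hypervectors via a PageRank-based ordering; the purely random node hypervectors here are a simplifying assumption):

```python
import numpy as np

rng = np.random.default_rng(3)
D = 10_000

def encode_graph(edges, n_nodes):
    """Bind the hypervectors of each edge's endpoints, then bundle the edges."""
    node_hv = np.sign(rng.standard_normal((n_nodes, D)))
    bundle = np.zeros(D)
    for u, v in edges:
        bundle += node_hv[u] * node_hv[v]  # binding encodes one edge
    return np.sign(bundle)                 # bundling yields the graph vector

g = encode_graph([(0, 1), (1, 2), (2, 0)], n_nodes=3)
# Classify by comparing g against per-class bundled prototypes with
# cosine similarity; no gradient-based training is involved.
```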
arXiv Detail & Related papers (2022-05-16T17:32:58Z)
- BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing [0.0]
Graph neural networks (GNNs) have extended the success of deep neural networks (DNNs) to non-Euclidean graph data.
Existing systems are inefficient at training large graphs with billions of nodes and edges on GPUs.
This paper proposes BGL, a distributed GNN training system designed to address the bottlenecks with a few key ideas.
arXiv Detail & Related papers (2021-12-16T00:37:37Z)
- ACE-HGNN: Adaptive Curvature Exploration Hyperbolic Graph Neural Network [72.16255675586089]
We propose an Adaptive Curvature Exploration Hyperbolic Graph Neural Network named ACE-HGNN to adaptively learn the optimal curvature according to the input graph and downstream tasks.
Experiments on multiple real-world graph datasets demonstrate significant and consistent improvements in model quality, together with competitive performance and good generalization ability.
arXiv Detail & Related papers (2021-10-15T07:18:57Z)
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
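A rough sketch of that pipeline (attention-score the neighbors, sort them, then run a 1D convolution over the resulting sequence); all shapes, the top-k truncation, and the single filter are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
d, k = 8, 4                              # feature dim, neighbors kept
target = rng.standard_normal(d)          # target node's features
neighbors = rng.standard_normal((6, d))  # its neighbors' features

scores = neighbors @ target              # attention scores w.r.t. the target
order = np.argsort(-scores)[:k]          # order neighbors, keep the top k
seq = neighbors[order]                   # (k, d) ordered "sequence" of nodes

kernel = rng.standard_normal((3, d))     # width-3 1D convolution filter
out = np.array([(seq[i:i + 3] * kernel).sum() for i in range(k - 2)])
# Each output position applies explicit, position-specific weights to the
# ordered neighbors, which is the trainable aggregation Node2Seq targets.
```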
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
- CAGNN: Cluster-Aware Graph Neural Networks for Unsupervised Graph Representation Learning [19.432449825536423]
Unsupervised graph representation learning aims to learn low-dimensional node embeddings without supervision.
We present a novel cluster-aware graph neural network (CAGNN) model for unsupervised graph representation learning using self-supervised techniques.
arXiv Detail & Related papers (2020-09-03T13:57:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.