NODDLE: Node2vec based deep learning model for link prediction
- URL: http://arxiv.org/abs/2305.16421v1
- Date: Thu, 25 May 2023 18:43:52 GMT
- Title: NODDLE: Node2vec based deep learning model for link prediction
- Authors: Kazi Zainab Khanam, Aditya Singhal, and Vijay Mago
- Abstract summary: We propose NODDLE (integration of NOde2vec anD Deep Learning mEthod), a deep learning model which incorporates the features extracted by node2vec and feeds them into a hidden neural network.
Experimental results show that this method yields better results than the traditional methods on various social network datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computing the probability that an edge exists in a graph is known as link prediction. While traditional methods calculate the similarity between two given nodes in a static network, recent research has focused on evaluating networks that evolve dynamically. Although deep learning techniques and network representation learning algorithms, such as node2vec, show remarkable improvements in prediction accuracy, the Stochastic Gradient Descent (SGD) method of node2vec tends to settle in a mediocre local optimum due to a shortage of prior network information, and thus fails to capture the global structure of the network. To tackle this problem, we propose NODDLE (integration of NOde2vec anD Deep Learning mEthod), a deep learning model that takes the features extracted by node2vec and feeds them into a four-layer hidden neural network. NODDLE takes advantage of adaptive learning-rate optimizers such as Adam, Adamax, Adadelta, and Adagrad to improve the performance of link prediction. Experimental results show that this method outperforms traditional methods on various social network datasets.
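As a rough illustration of the pipeline the abstract describes, below is a minimal sketch in Keras: node2vec embeddings of the two endpoint nodes are combined into an edge feature (the Hadamard product, one of the binary operators proposed in the node2vec paper) and fed into a network with four hidden layers and a sigmoid link-probability output. The layer widths, ReLU activations, the Hadamard operator, and the choice of Adam are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the NODDLE idea: node2vec node embeddings -> edge
# features -> four-layer hidden neural network with a sigmoid output.
# Hyperparameters below are assumptions, not the paper's configuration.
import numpy as np
from tensorflow import keras

def build_noddle(feature_dim: int, hidden: int = 128) -> keras.Model:
    model = keras.Sequential([
        keras.Input(shape=(feature_dim,)),
        keras.layers.Dense(hidden, activation="relu"),  # hidden layer 1
        keras.layers.Dense(hidden, activation="relu"),  # hidden layer 2
        keras.layers.Dense(hidden, activation="relu"),  # hidden layer 3
        keras.layers.Dense(hidden, activation="relu"),  # hidden layer 4
        keras.layers.Dense(1, activation="sigmoid"),    # P(edge exists)
    ])
    # The paper compares adaptive learning-rate optimizers (Adam, Adamax,
    # Adadelta, Adagrad); Adam stands in for that family here.
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[keras.metrics.AUC(name="auc")])
    return model

# Stand-in for real node2vec output (n_nodes x d); in practice these would
# come from running node2vec on the graph, and the model would be trained
# with model.fit on positive edges and sampled non-edges (omitted here).
emb = np.random.rand(1000, 64).astype("float32")
u, v = 3, 42
x = (emb[u] * emb[v]).reshape(1, -1)   # Hadamard edge feature for (u, v)

model = build_noddle(feature_dim=64)
p_link = float(model.predict(x, verbose=0)[0, 0])
print(f"predicted link probability for ({u}, {v}): {p_link:.3f}")
```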
Related papers
- Sparse Decomposition of Graph Neural Networks [20.768412002413843]
We propose an approach to reduce the number of nodes that are included during aggregation.
We achieve this through a sparse decomposition, learning to approximate node representations using a weighted sum of linearly transformed features.
We demonstrate via extensive experiments that our method outperforms other baselines designed for inference speedup.
arXiv Detail & Related papers (2024-10-25T17:52:16Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Invertible Neural Networks for Graph Prediction [22.140275054568985]
In this work, we address conditional generation using deep invertible neural networks.
We adopt an end-to-end training approach since our objective is to address prediction and generation in the forward and backward processes at once.
arXiv Detail & Related papers (2022-06-02T17:28:33Z)
- Neural Structured Prediction for Inductive Node Classification [29.908759584092167]
This paper studies node classification in the inductive setting, aiming to learn a model on labeled training graphs and generalize it to infer node labels on unlabeled test graphs.
We present a new approach called the Structured Proxy Network (SPN), which combines the advantages of both worlds.
arXiv Detail & Related papers (2022-04-15T15:50:27Z)
- Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth-2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z)
- Semi-supervised Impedance Inversion by Bayesian Neural Network Based on 2-d CNN Pre-training [0.966840768820136]
We improve semi-supervised learning in two aspects.
First, replacing the 1-d convolutional neural network layers in the deep learning structure with 2-d CNN layers and 2-d max-pooling layers improves prediction accuracy.
Second, prediction uncertainty can also be estimated by embedding the network into a Bayesian inference framework.
arXiv Detail & Related papers (2021-11-20T14:12:05Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose LocalDrop, a new approach to regularizing neural networks based on the local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, named Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)