MODEL: Motif-based Deep Feature Learning for Link Prediction
- URL: http://arxiv.org/abs/2008.03637v1
- Date: Sun, 9 Aug 2020 03:39:28 GMT
- Title: MODEL: Motif-based Deep Feature Learning for Link Prediction
- Authors: Lei Wang, Jing Ren, Bo Xu, Jianxin Li, Wei Luo, Feng Xia
- Score: 23.12527010960999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Link prediction plays an important role in network analysis and applications.
Recently, approaches for link prediction have evolved from traditional
similarity-based algorithms into embedding-based algorithms. However, most
existing approaches fail to exploit the fact that real-world networks are
different from random networks. In particular, real-world networks are known to
contain motifs, natural network building blocks reflecting the underlying
network-generating processes. In this paper, we propose a novel embedding
algorithm that incorporates network motifs to capture higher-order structures
in the network. To evaluate its effectiveness for link prediction, experiments
were conducted on three types of networks: social networks, biological
networks, and academic networks. The results demonstrate that our algorithm
outperforms traditional similarity-based algorithms by 20% and
state-of-the-art embedding-based algorithms by 19%.
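The paper does not include code, but a rough, hypothetical sketch can illustrate the signal that motifs provide for link prediction: the snippet below counts the 3-node (triangle) and 4-node (path) motifs that a candidate edge (u, v) would complete, using networkx and a standard example graph. MODEL itself learns embeddings that incorporate motif structure rather than scoring links from raw counts, so treat this purely as an illustration of the underlying idea.

```python
import networkx as nx

# Hypothetical illustration (not the authors' code): count the higher-order
# motifs a candidate edge (u, v) would complete if it were added.
def motif_link_features(G: nx.Graph, u, v):
    Nu, Nv = set(G[u]), set(G[v])
    Nu.discard(v); Nv.discard(u)
    # 3-node motifs: triangles closed by (u, v), i.e. common neighbors
    triangles = len(Nu & Nv)
    # 4-node motifs: paths u - a - b - v with a, b distinct from u and v
    paths4 = sum(1 for a in Nu for b in G[a]
                 if b not in (u, v) and v in G[b])
    return {"triangles": triangles, "paths4": paths4}

G = nx.karate_club_graph()
print(motif_link_features(G, 0, 33))
```

Pairs that would complete many motifs are exactly the pairs a motif-aware embedding is encouraged to place close together, which is one way to read the abstract's claim that motifs capture higher-order structure.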
Related papers
- A pseudo-likelihood approach to community detection in weighted networks [4.111899441919165]
We propose a pseudo-likelihood community estimation algorithm for networks with normally distributed edge weights.
We prove that the estimates obtained by the proposed method are consistent under the assumption of homogeneous networks.
We illustrate the method on simulated networks and on an fMRI dataset, where edge weights represent connectivity between brain regions.
arXiv Detail & Related papers (2023-03-10T13:36:10Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures the information flowing across its layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains unclear.
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
- Link Prediction with Contextualized Self-Supervision [63.25455976593081]
Link prediction aims to infer the existence of a link between two nodes in a network.
Traditional link prediction algorithms are hindered by three major challenges: link sparsity, node attribute noise, and network dynamics.
We propose a Contextualized Self-Supervised Learning framework that fully exploits structural context prediction for link prediction.
arXiv Detail & Related papers (2022-01-25T03:12:32Z)
- Understanding the network formation pattern for better link prediction [4.8334761517444855]
We propose a novel method named Link prediction using Multiple Order Local Information (MOLI).
MOLI exploits local information from neighbors at different distances, with parameters that can be set a priori based on prior knowledge.
We show that MOLI outperforms 11 other widely used link prediction algorithms on 11 different types of simulated and real-world networks.
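MOLI's exact estimator and parameterization are given in the paper; as a hypothetical sketch of the multiple-order idea, the snippet below scores a node pair by a weighted sum of walk counts at increasing lengths, with hand-set weights w2 and w3 standing in for MOLI's prior-driven parameters.

```python
import networkx as nx
import numpy as np

# Hypothetical sketch: combine local information from neighbors at different
# distances via powers of the adjacency matrix. A^k[i, j] counts length-k
# walks between i and j; the weights here are placeholders, not MOLI's.
def multi_order_score(G: nx.Graph, u, v, w2=1.0, w3=0.1):
    A = nx.to_numpy_array(G)
    idx = {n: i for i, n in enumerate(G.nodes())}
    A2 = A @ A
    A3 = A2 @ A
    i, j = idx[u], idx[v]
    return w2 * A2[i, j] + w3 * A3[i, j]

G = nx.karate_club_graph()
print(multi_order_score(G, 0, 33))
```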
arXiv Detail & Related papers (2021-10-17T15:30:04Z)
- Unsupervised Domain-adaptive Hash for Networks [81.49184987430333]
Domain-adaptive hash learning has enjoyed considerable success in the computer vision community.
We develop an unsupervised domain-adaptive hash learning method for networks, dubbed UDAH.
arXiv Detail & Related papers (2021-08-20T12:09:38Z)
- What can linearized neural networks actually say about generalization? [67.83999394554621]
For certain infinitely wide neural networks, the neural tangent kernel (NTK) theory fully characterizes generalization.
We show that the linear approximations can indeed rank the learning complexity of certain tasks for neural networks.
Our work provides concrete examples of novel deep learning phenomena which can inspire future theoretical research.
arXiv Detail & Related papers (2021-06-12T13:05:11Z)
- Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z)
- DynACPD Embedding Algorithm for Prediction Tasks in Dynamic Networks [6.5361928329696335]
We present novel embedding methods for a dynamic network, based on higher-order tensor decompositions of its tensorial representations.
We demonstrate the power and efficiency of our approach by comparing our algorithms' performance on the link prediction task against an array of current baseline methods.
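As a hypothetical sketch of the tensor-decomposition idea (not DynACPD itself), one can stack adjacency snapshots into a third-order tensor and take a CP/PARAFAC decomposition with tensorly; the node-mode factor matrix then serves as an embedding table for link scoring.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical sketch: an (n x n x T) tensor of T adjacency snapshots,
# decomposed with CP/PARAFAC; DynACPD's actual pipeline is more involved.
rng = np.random.default_rng(0)
n, T, rank = 20, 5, 4
snaps = []
for _ in range(T):
    upper = np.triu((rng.random((n, n)) < 0.1).astype(float), 1)
    snaps.append(upper + upper.T)             # symmetric, no self-loops

tensor = tl.tensor(np.stack(snaps, axis=-1))  # shape (n, n, T)
weights, factors = parafac(tensor, rank=rank)
node_embed = factors[0]                       # (n, rank) node-mode factors
scores = node_embed @ node_embed.T            # inner-product link scores
print(scores.shape)
```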
arXiv Detail & Related papers (2021-03-12T04:36:42Z)
- Learning low-rank latent mesoscale structures in networks [1.1470070927586016]
We present a new approach for describing low-rank mesoscale structures in networks.
We use several synthetic network models and empirical friendship, collaboration, and protein-protein interaction (PPI) networks.
We show how to denoise a corrupted network by using only the latent motifs that one learns directly from the corrupted network.
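A hypothetical sketch of the latent-motif idea: sample small neighborhood patches from the network, flatten their adjacency matrices, and factorize the patch matrix with nonnegative matrix factorization so that each component is a reusable k-by-k building block. The paper's sampling of patches is more careful; the simple random-walk sampler and sklearn NMF below only convey the flavor.

```python
import numpy as np
import networkx as nx
from sklearn.decomposition import NMF

# Hypothetical sketch: learn "latent motifs" as NMF components of sampled
# k-node neighborhood patches (the paper's sampling scheme differs).
def sample_patches(G, k=5, n_patches=300, seed=0):
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    patches = []
    for _ in range(n_patches):
        cur = rng.choice(nodes)
        seen = [cur]
        for _ in range(50 * k):               # walk until k distinct nodes
            if len(seen) == k:
                break
            cur = rng.choice(list(G[cur]))
            if cur not in seen:
                seen.append(cur)
        if len(seen) == k:
            patches.append(nx.to_numpy_array(G, nodelist=seen).flatten())
    return np.array(patches)

G = nx.karate_club_graph()
X = sample_patches(G)                         # (n_patches, k*k), 0/1 entries
nmf = NMF(n_components=8, init="nndsvda", max_iter=500).fit(X)
latent_motifs = nmf.components_.reshape(8, 5, 5)
```

Denoising, as described in the abstract, then amounts to reconstructing each patch from the learned motifs and keeping only edges with high reconstructed weight.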
arXiv Detail & Related papers (2021-02-13T18:54:49Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, reflecting the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
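A hypothetical PyTorch sketch of the differentiable-connectivity idea: treat the layers as nodes of a complete feed-forward graph, attach a learnable gate to every edge, and let features aggregate over all predecessors weighted by the gate. The paper's search space and training recipe differ in detail; this shows only how connectivity can be learned by ordinary gradient descent.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: learnable edge gates over a complete feed-forward
# graph; sigmoid(gate) reflects the magnitude of each connection.
class LearnableConnectivity(nn.Module):
    def __init__(self, n_layers=4, dim=32):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))
        self.gates = nn.Parameter(torch.zeros(n_layers, n_layers))  # edge logits

    def forward(self, x):
        outs = [x]                                    # node 0 is the input
        for i, layer in enumerate(self.layers):
            w = torch.sigmoid(self.gates[i, : i + 1])
            agg = sum(w[j] * outs[j] for j in range(i + 1))  # weighted predecessors
            outs.append(torch.relu(layer(agg)))
        return outs[-1]

net = LearnableConnectivity()
y = net(torch.randn(8, 32))   # the gates receive gradients like any weight
```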
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Modeling Dynamic Heterogeneous Network for Link Prediction using Hierarchical Attention with Temporal RNN [16.362525151483084]
We propose a novel dynamic heterogeneous network embedding method, termed DyHATR.
It uses hierarchical attention to learn heterogeneous information and incorporates recurrent neural networks with temporal attention to capture evolutionary patterns.
We benchmark our method on four real-world datasets for the task of link prediction.
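DyHATR's hierarchical attention is specific to heterogeneous snapshots, but the temporal half of the design can be sketched generically: given per-snapshot node embeddings (assumed precomputed here), a GRU encodes their evolution and temporal attention pools the hidden states into one embedding per node. This is a hypothetical simplification, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of recurrent encoding with temporal attention over
# per-snapshot node embeddings; links are scored by an inner product.
class TemporalEncoder(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.attn = nn.Linear(dim, 1)

    def forward(self, snaps):                   # snaps: (n_nodes, T, dim)
        h, _ = self.rnn(snaps)                  # hidden state per snapshot
        a = torch.softmax(self.attn(h), dim=1)  # temporal attention weights
        return (a * h).sum(dim=1)               # (n_nodes, dim)

enc = TemporalEncoder()
z = enc(torch.randn(100, 5, 32))                # 100 nodes, 5 snapshots
link_score = torch.sigmoid(z[0] @ z[1])         # score for candidate edge (0, 1)
```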
arXiv Detail & Related papers (2020-04-01T17:16:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.