Modeling Dynamic Heterogeneous Network for Link Prediction using
Hierarchical Attention with Temporal RNN
- URL: http://arxiv.org/abs/2004.01024v1
- Date: Wed, 1 Apr 2020 17:16:47 GMT
- Title: Modeling Dynamic Heterogeneous Network for Link Prediction using
Hierarchical Attention with Temporal RNN
- Authors: Hansheng Xue, Luwei Yang, Wen Jiang, Yi Wei, Yi Hu, and Yu Lin
- Abstract summary: We propose a novel dynamic heterogeneous network embedding method, termed DyHATR.
It uses hierarchical attention to learn heterogeneous information and incorporates recurrent neural networks with temporal attention to capture evolutionary patterns.
We benchmark our method on four real-world datasets for the task of link prediction.
- Score: 16.362525151483084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Network embedding aims to learn low-dimensional representations of nodes
while capturing the structural information of networks. It has achieved great
success on many network analysis tasks such as link prediction and node
classification. Most existing network embedding algorithms focus on learning
static homogeneous networks effectively. However, real-world networks are more
complex: they may consist of several types of nodes and edges (called
heterogeneous information) and may vary over time in terms of dynamic nodes and
edges (called evolutionary patterns). Limited work has been done on embedding
dynamic heterogeneous networks, as it is challenging to learn both evolutionary
and heterogeneous information simultaneously. In this paper, we propose a novel
dynamic heterogeneous network embedding method, termed DyHATR, which uses
hierarchical attention to learn heterogeneous information and incorporates
recurrent neural networks with temporal attention to capture evolutionary
patterns. We benchmark our method on four real-world datasets for the task of
link prediction. Experimental results show that DyHATR significantly
outperforms several state-of-the-art baselines.
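The abstract describes a two-stage pipeline: hierarchical attention (node-level attention within each edge type, then edge-type-level attention) applied to every network snapshot, followed by a recurrent model with temporal attention across snapshots. The sketch below illustrates that pipeline; the GAT-style scoring, the GRU, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a DyHATR-style model (follows the abstract's description,
# not the authors' released code; components are illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeTypeGAT(nn.Module):
    """Node-level attention over one edge type (dense adjacency for brevity)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):                       # x: [N, in], adj: [N, N]
        h = self.W(x)                                # [N, out]
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pair).squeeze(-1))   # [N, N] raw scores
        e = e.masked_fill(adj == 0, float("-inf"))
        att = torch.softmax(e, dim=-1).nan_to_num()  # isolated nodes get zeros
        return F.elu(att @ h)

class HierarchicalAttention(nn.Module):
    """Edge-type-level (semantic) attention over the per-type embeddings."""
    def __init__(self, in_dim, out_dim, num_edge_types):
        super().__init__()
        self.gats = nn.ModuleList(EdgeTypeGAT(in_dim, out_dim)
                                  for _ in range(num_edge_types))
        self.q = nn.Linear(out_dim, 1, bias=False)

    def forward(self, x, adjs):                      # adjs: one [N, N] per type
        per_type = torch.stack([g(x, a) for g, a in zip(self.gats, adjs)])
        scores = self.q(torch.tanh(per_type)).mean(dim=1)   # [R, 1]
        beta = torch.softmax(scores, dim=0).unsqueeze(-1)   # [R, 1, 1]
        return (beta * per_type).sum(dim=0)                 # [N, out]

class DyHATRSketch(nn.Module):
    def __init__(self, in_dim, hid_dim, num_edge_types):
        super().__init__()
        self.hat = HierarchicalAttention(in_dim, hid_dim, num_edge_types)
        self.rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.temp_q = nn.Linear(hid_dim, 1, bias=False)

    def forward(self, feats, snapshots):
        # snapshots: per time step, a list of per-edge-type [N, N] adjacencies
        z = torch.stack([self.hat(feats, adjs) for adjs in snapshots], dim=1)
        h, _ = self.rnn(z)                           # [N, S, D] per-node sequence
        w = torch.softmax(self.temp_q(h), dim=1)     # temporal attention [N, S, 1]
        return (w * h).sum(dim=1)                    # final embeddings [N, D]

# Toy usage: 3 snapshots, 2 edge types, 5 nodes with 8-d features. For link
# prediction, a pair of final embeddings would then be combined (e.g., by a
# dot or Hadamard product) and scored by a classifier.
N, S, R = 5, 3, 2
feats = torch.randn(N, 8)
snaps = [[(torch.rand(N, N) > 0.5).float() for _ in range(R)] for _ in range(S)]
emb = DyHATRSketch(8, 16, R)(feats, snaps)           # [5, 16]
```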
Related papers
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
Characterizing Learning Dynamics of Deep Neural Networks via Complex Networks [1.0869257688521987]
Complex Network Theory (CNT) represents Deep Neural Networks (DNNs) as directed weighted graphs to study them as dynamical systems.
We introduce metrics for nodes/neurons and layers, namely Nodes Strength and Layers Fluctuation.
Our framework distills trends in the learning dynamics and separates low from high accurate networks.
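In standard Complex Network Theory usage, a node's strength is the weighted analogue of its degree. The sketch below computes per-neuron strengths for a feed-forward network under that reading; the paper's precise definitions of Nodes Strength and Layers Fluctuation may differ.

```python
# Hedged sketch: CNT-style "node strength" for the neurons of a fully
# connected network, taken as the sum of absolute incoming and outgoing
# weights (the standard weighted-degree notion; an assumption here).
import numpy as np

def node_strengths(weight_matrices):
    """weight_matrices[l] has shape (n_l, n_{l+1}): layer l -> layer l+1."""
    sizes = [weight_matrices[0].shape[0]] + [W.shape[1] for W in weight_matrices]
    strengths = [np.zeros(n) for n in sizes]
    for l, W in enumerate(weight_matrices):
        strengths[l]     += np.abs(W).sum(axis=1)   # outgoing strength
        strengths[l + 1] += np.abs(W).sum(axis=0)   # incoming strength
    return strengths

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]  # a toy 4-8-2 network
for l, s in enumerate(node_strengths(Ws)):
    print(f"layer {l}: {np.round(s, 2)}")
```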
arXiv Detail & Related papers (2021-10-06T10:03:32Z)
Network Embedding via Deep Prediction Model [25.727377978617465]
This paper proposes a network embedding framework to capture the transfer behaviors on structured networks via deep prediction models.
A network structure embedding layer is added into conventional deep prediction models, including Long Short-Term Memory networks and recurrent neural networks.
Experimental studies are conducted on various datasets, including social, citation, biomedical, collaboration, and language networks.
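As a rough illustration of the idea, the sketch below places a node-embedding ("network structure") layer in front of an LSTM that predicts the next node visited on a random walk; the walk data, sizes, and objective are assumptions rather than the paper's exact design.

```python
# Hedged sketch: a structure embedding layer (node-id embeddings over
# random-walk sequences) feeding an LSTM next-node predictor.
import torch
import torch.nn as nn

class WalkLSTM(nn.Module):
    def __init__(self, num_nodes, emb_dim=32, hid_dim=64):
        super().__init__()
        self.emb = nn.Embedding(num_nodes, emb_dim)   # structure embedding layer
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, num_nodes)

    def forward(self, walks):                  # walks: [B, L] node ids
        h, _ = self.lstm(self.emb(walks))
        return self.out(h)                     # next-node logits at each step

model = WalkLSTM(num_nodes=100)
walks = torch.randint(0, 100, (16, 10))        # 16 toy walks of length 10
logits = model(walks[:, :-1])                  # predict each following node
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 100), walks[:, 1:].reshape(-1))
```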
arXiv Detail & Related papers (2021-04-27T16:56:00Z)
DynACPD Embedding Algorithm for Prediction Tasks in Dynamic Networks [6.5361928329696335]
We present novel embedding methods for a dynamic network based on higher-order tensor decompositions of tensorial representations of the dynamic network.
We demonstrate the power and efficiency of our approach by comparing our algorithms' performance on the link prediction task against an array of current baseline methods.
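The summary does not give DynACPD's exact decomposition, but a common higher-order approach is a CP (PARAFAC) factorization of the (time x node x node) adjacency tensor, whose node factors then act as embeddings. A plain alternating-least-squares sketch, with rank and iteration count chosen arbitrarily:

```python
# Hedged sketch: CP decomposition of a dynamic network's adjacency tensor via
# ALS; DynACPD's actual algorithm differs in details not given in the summary.
import numpy as np

def khatri_rao(A, B):                       # column-wise Kronecker product
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def cp_als(X, rank, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    T, N, _ = X.shape
    A = rng.standard_normal((T, rank))      # time factors
    B = rng.standard_normal((N, rank))      # source-node factors
    C = rng.standard_normal((N, rank))      # target-node factors
    X1 = X.reshape(T, N * N)                # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(N, T * N)
    X3 = X.transpose(2, 0, 1).reshape(N, T * N)
    for _ in range(iters):
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

X = (np.random.default_rng(1).random((5, 20, 20)) > 0.8).astype(float)
A, B, C = cp_als(X, rank=4)
# Score a candidate link (i=3, j=7) with the latest time factor:
score = (A[-1] * B[3] * C[7]).sum()
```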
arXiv Detail & Related papers (2021-03-12T04:36:42Z)
Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
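A minimal version of such a contrastive instance pair matches a target node against a summary of a local subgraph: its own neighbourhood as the positive instance and another node's neighbourhood as the negative one. In the sketch below, the mean-pool "encoder" and bilinear scorer are simplified stand-ins for the paper's GNN-based model:

```python
# Hedged sketch of contrastive instance pairs on an attributed network.
import torch
import torch.nn as nn

def neighbourhood_summary(x, adj, i):
    """Mean of node i's neighbours' attributes (a stand-in for a GNN encoder)."""
    mask = adj[i] > 0
    return x[mask].mean(dim=0) if mask.any() else torch.zeros(x.size(1))

class PairScorer(nn.Module):
    """Bilinear discriminator over (node, subgraph-summary) pairs."""
    def __init__(self, dim):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, node_x, ctx_x):          # both [B, dim]
        return self.bilinear(node_x, ctx_x).squeeze(-1)

N, D = 30, 8
x = torch.randn(N, D)                          # node attributes
adj = (torch.rand(N, N) > 0.8).float()         # toy network
scorer = PairScorer(D)
i, j = 0, 17                                   # target node, random other node
node = x[i].unsqueeze(0)                       # [1, D]
pos = scorer(node, neighbourhood_summary(x, adj, i).unsqueeze(0))  # own context
neg = scorer(node, neighbourhood_summary(x, adj, j).unsqueeze(0))  # foreign
loss = nn.functional.binary_cross_entropy_with_logits(
    torch.cat([pos, neg]), torch.tensor([1.0, 0.0]))
# After training, nodes whose positive-pair scores stay low disagree with
# their own neighbourhood and are flagged as anomaly candidates.
```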
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of routing every input through the same fixed path, DG-Net aggregates features dynamically at each node, which gives the network more representational ability.
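One way to realize such instance-aware aggregation is to predict gates over predecessor features from the inputs themselves, so each sample effectively takes its own weighted path through the graph. The gate design below is an illustrative assumption, not the paper's exact formulation:

```python
# Hedged sketch of instance-aware aggregation at one DG-Net node.
import torch
import torch.nn as nn

class DynamicAggregate(nn.Module):
    def __init__(self, num_preds, channels):
        super().__init__()
        self.gate = nn.Linear(num_preds * channels, num_preds)

    def forward(self, pred_feats):             # list of [B, C] predecessor outputs
        stacked = torch.stack(pred_feats, dim=1)           # [B, P, C]
        g = torch.sigmoid(self.gate(stacked.flatten(1)))   # [B, P], per sample
        return (g.unsqueeze(-1) * stacked).sum(dim=1)      # [B, C]

agg = DynamicAggregate(num_preds=3, channels=16)
feats = [torch.randn(8, 16) for _ in range(3)]             # batch of 8
out = agg(feats)                                           # [8, 16]
```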
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
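The mechanism can be pictured as a trainable scalar on every edge of a complete DAG, mapped through a sigmoid to a connection magnitude so that topology is learned by ordinary gradient descent. A hedged sketch, with a linear block standing in for the paper's building units:

```python
# Hedged sketch of learnable connectivity over a complete DAG of blocks.
import torch
import torch.nn as nn

class LearnableTopology(nn.Module):
    def __init__(self, num_nodes, dim):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_nodes))

    def forward(self, x):                      # x: [B, dim], fed to node 0
        outs = [torch.relu(self.blocks[0](x))]
        for j in range(1, len(self.blocks)):
            gate = torch.sigmoid(self.alpha[:j, j])        # magnitudes of edges i->j
            inp = sum(g * h for g, h in zip(gate, outs))   # gated sum of earlier nodes
            outs.append(torch.relu(self.blocks[j](inp)))
        return outs[-1]

net = LearnableTopology(num_nodes=4, dim=32)
y = net(torch.randn(8, 32))                    # [8, 32]; alpha trains with the rest
```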
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup allowing for a neural network to learn both its size and topology during the course of a gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
Link Prediction for Temporally Consistent Networks [6.981204218036187]
Link prediction estimates the next relationship in dynamic networks.
The use of adjacency matrices to represent dynamically evolving networks limits the ability to learn analytically from heterogeneous, sparse, or forming networks.
We propose a new method of canonically representing heterogeneous time-evolving activities as a temporally parameterized network model.
arXiv Detail & Related papers (2020-06-06T07:28:03Z)
Foundations and modelling of dynamic networks using Dynamic Graph Neural Networks: A survey [11.18312489268624]
We establish a foundation of dynamic networks with consistent, detailed terminology and notation.
We present a comprehensive survey of dynamic graph neural network models using the proposed terminology.
arXiv Detail & Related papers (2020-05-13T23:56:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.