TSAM: Temporal Link Prediction in Directed Networks based on
Self-Attention Mechanism
- URL: http://arxiv.org/abs/2008.10021v1
- Date: Sun, 23 Aug 2020 11:56:40 GMT
- Title: TSAM: Temporal Link Prediction in Directed Networks based on
Self-Attention Mechanism
- Authors: Jinsong Li, Jianhua Peng, Shuxin Liu, Lintianran Weng, Cong Li
- Abstract summary: We propose TSAM, a deep learning model for temporal link prediction based on graph convolutional networks (GCN) and a self-attention mechanism.
We run comparative experiments on four real-world networks to validate the effectiveness of TSAM.
- Score: 2.5144068869465994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of graph convolutional networks (GCN) makes it
possible to learn structural features from evolving complex networks. Although
a wide range of real-world networks are directed, few existing works have
investigated the properties of directed temporal networks. In this paper, we
address the problem of temporal link prediction in directed networks and
propose a deep learning model based on GCN and a self-attention mechanism,
namely TSAM. The proposed model adopts an autoencoder architecture, which
utilizes graph attentional layers to capture the structural features of
neighboring nodes, as well as a set of graph convolutional layers to capture
motif features. A graph recurrent unit layer with self-attention is utilized
to learn temporal variations in the snapshot sequence. We run comparative
experiments on four real-world networks to validate the effectiveness of TSAM.
Experimental results show that TSAM outperforms most benchmarks under two
evaluation metrics.
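The abstract describes the pipeline concretely enough to sketch. Below is a minimal, hypothetical PyTorch rendering, assuming dense adjacency matrices, a single attention head, a row-normalized-propagation stand-in for the motif-GCN branch, additive attention pooling standing in for the self-attention over snapshots, and an asymmetric decoder for directed links; all class names, layer sizes, and these design choices are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of the TSAM pipeline described in the abstract. The
# dense-adjacency GAT, the propagation stand-in for the motif-GCN branch,
# the layer sizes, and the asymmetric decoder are all assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head GAT-style layer over a dense adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Parameter(torch.randn(2 * out_dim))

    def forward(self, x, adj):                      # x: [N, in_dim], adj: [N, N]
        h = self.W(x)                               # [N, out_dim]
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(pairs @ self.a)            # [N, N] attention logits
        e = e.masked_fill(adj == 0, float("-inf"))  # attend only along edges
        alpha = torch.nan_to_num(torch.softmax(e, dim=-1))  # zero out isolated rows
        return alpha @ h                            # neighborhood-weighted features

class TSAMSketch(nn.Module):
    def __init__(self, feat_dim, hid_dim=64):
        super().__init__()
        self.gat = GraphAttentionLayer(feat_dim, hid_dim)  # neighborhood features
        self.gcn = nn.Linear(hid_dim, hid_dim)             # stand-in for the motif-GCN branch
        self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.attn = nn.Linear(hid_dim, 1)                  # attention pooling over snapshots
        self.src = nn.Linear(hid_dim, hid_dim)             # asymmetric decoder so that
        self.dst = nn.Linear(hid_dim, hid_dim)             # score(u, v) != score(v, u)

    def forward(self, snapshots, feats):
        # snapshots: list of [N, N] adjacency matrices; feats: [N, feat_dim]
        states = []
        for adj in snapshots:
            h = F.relu(self.gat(feats, adj))
            deg = adj.sum(-1).clamp(min=1.0)               # row-normalized propagation
            h = F.relu(self.gcn((adj / deg.unsqueeze(-1)) @ h))
            states.append(h)
        seq = torch.stack(states, dim=1)                   # [N, T, hid_dim]
        out, _ = self.gru(seq)                             # temporal encoding per node
        w = torch.softmax(self.attn(out), dim=1)           # attention weights over time
        z = (w * out).sum(dim=1)                           # [N, hid_dim] node embeddings
        return torch.sigmoid(self.src(z) @ self.dst(z).t())  # next-snapshot edge probs
```

The separate source and target projections in the decoder are what let the prediction for (u, v) differ from (v, u), matching the directed setting; a plain inner product would force symmetric scores.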
Related papers
- Contrastive Representation Learning for Dynamic Link Prediction in Temporal Networks [1.9389881806157312]
We introduce a self-supervised method for learning representations of temporal networks.
We propose a recurrent message-passing neural network architecture for modeling the information flow over time-respecting paths of temporal networks.
The proposed method is tested on the Enron, COLAB, and Facebook datasets (a minimal sketch of a contrastive objective of this kind follows this entry).
arXiv Detail & Related papers (2024-08-22T22:50:46Z)
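One plausible reading of the contrastive objective in the entry above is InfoNCE over temporal node embeddings, treating the same node at consecutive snapshots as the positive pair and all other nodes as negatives. The pairing scheme and temperature are assumptions, not details from the paper:

```python
# Hypothetical InfoNCE-style objective for temporal node embeddings; the
# positive-pair construction (same node, consecutive snapshots) is assumed.
import torch
import torch.nn.functional as F

def temporal_infonce(z_t, z_next, temperature=0.1):
    """z_t, z_next: [N, d] embeddings of the same N nodes at snapshots t and t+1."""
    z_t = F.normalize(z_t, dim=-1)
    z_next = F.normalize(z_next, dim=-1)
    logits = z_t @ z_next.t() / temperature   # [N, N] scaled cosine similarities
    labels = torch.arange(z_t.size(0))        # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```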
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- A Network Classification Method based on Density Time Evolution Patterns Extracted from Network Automata [0.0]
We propose alternative sources of information to use as descriptors for the classification, which we denote as the density time-evolution pattern (D-TEP) and the state density time-evolution pattern (SD-TEP).
Our results show a significant improvement over previous studies on five synthetic network databases and seven real-world databases.
arXiv Detail & Related papers (2022-11-18T15:27:26Z)
- Temporal Graph Network Embedding with Causal Anonymous Walks Representations [54.05212871508062]
We propose a novel approach for dynamic network representation learning based on Temporal Graph Network.
We also provide a benchmark pipeline for the evaluation of temporal network embeddings.
We show the applicability and superior performance of our model in the real-world downstream graph machine learning task provided by one of the top European banks.
arXiv Detail & Related papers (2021-08-19T15:39:52Z)
- Network Embedding via Deep Prediction Model [25.727377978617465]
This paper proposes a network embedding framework to capture the transfer behaviors on structured networks via deep prediction models.
A network structure embedding layer is added to conventional deep prediction models, including Long Short-Term Memory networks and recurrent neural networks (a minimal sketch follows this entry).
Experimental studies are conducted on various datasets, including social, citation, biomedical, collaboration, and language networks.
arXiv Detail & Related papers (2021-04-27T16:56:00Z)
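The deep-prediction-model entry above lends itself to a small sketch: a node-embedding layer feeding an LSTM that predicts the next node in a sequence. Training on random-walk sequences is an assumption here; the summary only states that a structure embedding layer is added to LSTM/RNN prediction models:

```python
# Hypothetical sketch: node-embedding layer + LSTM predicting the next node
# in a walk; the random-walk training signal is an assumption.
import torch
import torch.nn as nn

class WalkPredictor(nn.Module):
    def __init__(self, num_nodes, emb_dim=128, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_nodes, emb_dim)  # network structure embedding layer
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, num_nodes)       # distribution over the next node

    def forward(self, walks):                          # walks: [B, L] node ids
        h, _ = self.lstm(self.embed(walks))            # [B, L, hid_dim]
        return self.out(h[:, -1])                      # logits for the next node

# Usage: minimize cross-entropy on (walk prefix, next node) pairs, then reuse
# model.embed.weight as the learned node embeddings.
```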
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embeddings from high-dimensional attributes and local structure (a minimal pair-scoring sketch follows this entry).
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
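A minimal rendering of the contrastive instance pair idea from the anomaly-detection entry above: score the agreement between a node's embedding and a pooled readout of a sampled local subgraph, and treat low agreement as an anomaly signal. The bilinear scorer and mean-pooling readout are assumptions:

```python
# Hypothetical contrastive pair scorer: node embedding vs. pooled local
# subgraph readout; the bilinear form and mean pooling are assumptions.
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, node_emb, neigh_embs):               # [d], [k, d]
        ctx = neigh_embs.mean(dim=0, keepdim=True)         # [1, d] subgraph readout
        logit = self.bilinear(node_emb.unsqueeze(0), ctx)  # [1, 1] agreement logit
        return torch.sigmoid(logit).squeeze()

# Anomaly score: 1 - average agreement over several sampled neighborhoods;
# normal nodes match their local context, anomalous ones do not.
```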
- DAIS: Automatic Channel Pruning via Differentiable Annealing Indicator Search [55.164053971213576]
Convolutional neural networks have achieved great success in computer vision tasks, despite their large computation overhead.
Structured (channel) pruning is usually applied to reduce model redundancy while preserving the network structure.
Existing structured pruning methods require hand-crafted rules, which may lead to a tremendous pruning space (a sketch of a differentiable annealing indicator follows this entry).
arXiv Detail & Related papers (2020-11-04T07:43:01Z)
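One way to make a channel indicator differentiable, in the spirit of the DAIS entry above, is a per-channel gate sigmoid(alpha / tau) whose temperature tau is annealed toward zero so the gates polarize and can be binarized for pruning. The exact relaxation and schedule in DAIS may differ:

```python
# Hypothetical differentiable annealing indicator for channel pruning;
# DAIS's exact relaxation and annealing schedule may differ.
import torch
import torch.nn as nn

class ChannelIndicator(nn.Module):
    def __init__(self, num_channels):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(num_channels))
        self.tau = 1.0                                # annealing temperature

    def forward(self, x):                             # x: [B, C, H, W]
        gate = torch.sigmoid(self.alpha / self.tau)   # [C] soft keep/prune gate
        return x * gate.view(1, -1, 1, 1)

    def anneal(self, factor=0.95, floor=0.05):
        self.tau = max(self.tau * factor, floor)      # harden gates over training
```

After training, channels whose gate falls below a threshold (e.g. 0.5) are pruned and the surviving weights are kept.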
- The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network Architectures [179.66117325866585]
We investigate a design space that is usually overlooked, i.e. adjusting the channel configurations of predefined networks.
We find that this adjustment can be achieved by shrinking widened baseline networks and leads to superior performance.
Experiments are conducted on various networks and datasets for image classification, visual tracking and image restoration.
arXiv Detail & Related papers (2020-06-29T17:59:26Z)
- Link Prediction for Temporally Consistent Networks [6.981204218036187]
Link prediction estimates the next relationship in dynamic networks.
The use of adjacency matrices to represent dynamically evolving networks limits the ability to learn analytically from heterogeneous, sparse, or forming networks.
We propose a new method of canonically representing heterogeneous time-evolving activities as a temporally parameterized network model.
arXiv Detail & Related papers (2020-06-06T07:28:03Z)
- Modeling Dynamic Heterogeneous Network for Link Prediction using Hierarchical Attention with Temporal RNN [16.362525151483084]
We propose a novel dynamic heterogeneous network embedding method, termed DyHATR.
It uses hierarchical attention to learn heterogeneous information and incorporates recurrent neural networks with temporal attention to capture evolutionary patterns (a sketch of the temporal component follows this entry).
We benchmark our method on four real-world datasets for the task of link prediction.
arXiv Detail & Related papers (2020-04-01T17:16:47Z)
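For the DyHATR entry above, the temporal component can be sketched as a GRU over per-snapshot node embeddings followed by scaled dot-product self-attention across time steps. The hierarchical node- and edge-type-level attention that would produce those snapshot embeddings is omitted, and this rendering is an assumption rather than the authors' code:

```python
# Hypothetical temporal half of a DyHATR-style model: GRU over snapshot
# embeddings plus scaled dot-product self-attention across time steps.
import math
import torch
import torch.nn as nn

class TemporalAttentionRNN(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, snap_embs):                 # [N, T, d] per-node sequences
        h, _ = self.gru(snap_embs)                # evolutionary patterns over time
        q, k, v = self.q(h), self.k(h), self.v(h)
        scores = q @ k.transpose(1, 2) / math.sqrt(h.size(-1))
        out = torch.softmax(scores, dim=-1) @ v   # [N, T, d]
        return out[:, -1]                         # final embedding per node
```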