Inductive Representation Learning in Temporal Networks via Causal
Anonymous Walks
- URL: http://arxiv.org/abs/2101.05974v2
- Date: Sat, 20 Feb 2021 05:29:12 GMT
- Title: Inductive Representation Learning in Temporal Networks via Causal
Anonymous Walks
- Authors: Yanbang Wang, Yen-Yu Chang, Yunyu Liu, Jure Leskovec, Pan Li
- Abstract summary: Temporal networks serve as abstractions of many real-world dynamic systems.
We propose Causal Anonymous Walks (CAWs) to inductively represent a temporal network.
CAWs are extracted by temporal random walks and work as automatic retrieval of temporal network motifs.
- Score: 51.79552974355547
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Temporal networks serve as abstractions of many real-world dynamic systems.
These networks typically evolve according to certain laws, such as the law of
triadic closure, which is universal in social networks. Inductive
representation learning of temporal networks should be able to capture such
laws and further be applied to systems that follow the same laws but have not
been seen during the training stage. Previous works in this area depend on
either network node identities or rich edge attributes and typically fail to
extract these laws. Here, we propose Causal Anonymous Walks (CAWs) to
inductively represent a temporal network. CAWs are extracted by temporal random
walks and work as automatic retrieval of temporal network motifs to represent
network dynamics while avoiding the time-consuming selection and counting of
those motifs. CAWs adopt a novel anonymization strategy that replaces node
identities with the hitting counts of the nodes based on a set of sampled walks
to keep the method inductive, and simultaneously establish the correlation
between motifs. We further propose a neural-network model CAW-N to encode CAWs,
and pair it with a CAW sampling strategy with constant memory and time cost to
support online training and inference. CAW-N is evaluated to predict links over
6 real temporal networks and consistently outperforms previous SOTA methods with
an average AUC gain of 15% in the inductive setting. CAW-N also outperforms previous
methods in 5 out of the 6 networks in the transductive setting.
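The anonymization strategy described in the abstract can be illustrated with a minimal Python sketch. This is not the authors' implementation: it only shows the core idea of replacing raw node identities with position-wise hitting counts computed from a set of sampled walks, so the encoding depends on walk structure rather than on who the nodes are. The function name `anonymize` and the toy walks are hypothetical.

```python
def anonymize(walks):
    """Replace node identities with position-wise hitting counts.

    `walks` is a list of equal-length node-id sequences, e.g. sampled by
    temporal random walks from a node of interest. Each node is re-encoded
    as a vector whose p-th entry counts how many walks in the set visit
    that node at position p. The encoding never exposes raw node ids,
    which is what keeps the representation inductive, while shared
    hitting-count vectors correlate the motifs the walks trace out.
    """
    length = len(walks[0])
    # hit_counts[v][p] = number of walks in which node v occupies position p
    hit_counts = {}
    for walk in walks:
        for pos, node in enumerate(walk):
            hit_counts.setdefault(node, [0] * length)[pos] += 1
    # Re-encode every walk as a sequence of hitting-count vectors
    return [[tuple(hit_counts[node]) for node in walk] for walk in walks]

# Toy example: three length-3 walks all starting from the same node 0
walks = [(0, 2, 3), (0, 2, 4), (0, 3, 2)]
encoded = anonymize(walks)
# Node 0 appears at position 0 in all three walks, so everywhere it
# occurs it is encoded as (3, 0, 0) -- regardless of its actual id.
```

In the full CAW-N model, these count vectors are embedded and fed to a walk encoder (e.g. an RNN) together with encoded time gaps; the sketch above covers only the identity-removal step.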
Related papers
- Spatial-Temporal Graph Representation Learning for Tactical Networks Future State Prediction [2.0517097336236283]
We introduce the Spatial-Temporal Graph-Decoder (STGED) framework for Tactical Communication Networks.
STGED hierarchically utilizes graph-based attention mechanism to spatially encode a series of communication network states.
We demonstrate that STGED consistently outperforms baseline models by large margins across different input time-steps.
arXiv Detail & Related papers (2024-03-20T15:27:17Z)
- NAC-TCN: Temporal Convolutional Networks with Causal Dilated Neighborhood Attention for Emotion Understanding [60.74434735079253]
We propose a method known as Neighborhood Attention with Convolutions TCN (NAC-TCN)
We accomplish this by introducing a causal version of Dilated Neighborhood Attention while incorporating it with convolutions.
Our model achieves performance comparable or superior to TCNs, TCAN, LSTMs, and GRUs, reaching state-of-the-art results while requiring fewer parameters on standard emotion recognition datasets.
arXiv Detail & Related papers (2023-12-12T18:41:30Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs)
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- DyCSC: Modeling the Evolutionary Process of Dynamic Networks Based on Cluster Structure [1.005130974691351]
We propose a novel temporal network embedding method named Dynamic Cluster Structure Constraint model (DyCSC)
DyCSC captures the evolution of temporal networks by imposing a temporal constraint on the tendency of the nodes in the network to a given number of clusters.
It consistently outperforms competing methods by significant margins in multiple temporal link prediction tasks.
arXiv Detail & Related papers (2022-10-23T10:23:08Z)
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- Spiking Generative Adversarial Networks With a Neural Network Discriminator: Local Training, Bayesian Models, and Continual Meta-Learning [31.78005607111787]
Training neural networks to reproduce spiking patterns is a central problem in neuromorphic computing.
This work proposes to train SNNs so as to match distributions over spiking signals rather than individual spiking signals.
arXiv Detail & Related papers (2021-11-02T17:20:54Z)
- TempNodeEmb: Temporal Node Embedding considering temporal edge influence matrix [0.8941624592392746]
Predicting future links among the nodes in temporal networks reveals an important aspect of the evolution of temporal networks.
Some approaches use simplified representations of temporal networks, but these take the form of high-dimensional and generally sparse matrices.
We propose a new node embedding technique which exploits the evolving nature of the networks considering a simple three-layer graph neural network at each time step.
arXiv Detail & Related papers (2020-08-16T15:39:07Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Link Prediction for Temporally Consistent Networks [6.981204218036187]
Link prediction estimates the next relationship in dynamic networks.
The use of an adjacency matrix to represent dynamically evolving networks limits the ability to analytically learn from heterogeneous, sparse, or forming networks.
We propose a new method of canonically representing heterogeneous time-evolving activities as a temporally parameterized network model.
arXiv Detail & Related papers (2020-06-06T07:28:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.