GRADE: Graph Dynamic Embedding
- URL: http://arxiv.org/abs/2007.08060v3
- Date: Mon, 10 May 2021 19:59:06 GMT
- Title: GRADE: Graph Dynamic Embedding
- Authors: Simeon Spasov, Alessandro Di Stefano, Pietro Lio, Jian Tang
- Abstract summary: GRADE is a probabilistic model that learns to generate evolving node and community representations by imposing a random walk prior over their trajectories.
Our model also learns node community membership which is updated between time steps via a transition matrix.
Experiments demonstrate GRADE outperforms baselines in dynamic link prediction, shows favourable performance on dynamic community detection, and identifies coherent and interpretable evolving communities.
- Score: 76.85156209917932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representation learning of static and more recently dynamically evolving
graphs has gained noticeable attention. Existing approaches for modelling graph
dynamics focus extensively on the evolution of individual nodes independently
of the evolution of mesoscale community structures. As a result, current
methods do not provide useful tools to study, and cannot explicitly capture,
temporal community dynamics. To address this challenge, we propose GRADE - a
probabilistic model that learns to generate evolving node and community
representations by imposing a random walk prior over their trajectories. Our
model also learns node community membership which is updated between time steps
via a transition matrix. At each time step link generation is performed by
first assigning node membership from a distribution over the communities, and
then sampling a neighbor from a distribution over the nodes for the assigned
community. We parametrize the node and community distributions with neural
networks and learn their parameters via variational inference. Experiments
demonstrate GRADE outperforms baselines in dynamic link prediction, shows
favourable performance on dynamic community detection, and identifies coherent
and interpretable evolving communities.
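A minimal sketch of the generative step described above may help make it concrete. The snippet below is an illustrative reconstruction from the abstract only, not the authors' implementation: the community distribution phi, the neighbour distribution psi, and the transition matrix are random placeholders, whereas the paper parametrizes these distributions with neural networks and learns them via variational inference.

```python
# Illustrative sketch (assumed names and shapes, not the paper's code):
# phi[v]     - distribution over communities for node v at the current step
# psi[k]     - distribution over nodes for community k
# transition - row-stochastic matrix that evolves memberships between steps
import numpy as np

rng = np.random.default_rng(0)
num_nodes, num_communities = 50, 4

phi = rng.dirichlet(np.ones(num_communities), size=num_nodes)
psi = rng.dirichlet(np.ones(num_nodes), size=num_communities)
transition = rng.dirichlet(np.ones(num_communities), size=num_communities)

def sample_edge(source: int) -> tuple[int, int]:
    """Two-stage link generation: pick a community for the source, then a neighbour."""
    k = rng.choice(num_communities, p=phi[source])
    neighbour = rng.choice(num_nodes, p=psi[k])
    return source, neighbour

def advance_memberships(phi: np.ndarray, transition: np.ndarray) -> np.ndarray:
    """Update node-community memberships between time steps via the transition matrix."""
    return phi @ transition

edges_t = [sample_edge(v) for v in range(num_nodes)]  # links at the current time step
phi = advance_memberships(phi, transition)            # memberships for the next step
```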
Related papers
- Node-Time Conditional Prompt Learning In Dynamic Graphs [14.62182210205324]
We propose DYGPROMPT, a novel pre-training and prompt learning framework for dynamic graph modeling.
We recognize that node and time features mutually characterize each other, and propose dual condition-nets to model the evolving node-time patterns in downstream tasks.
arXiv Detail & Related papers (2024-05-22T19:10:24Z)
- Learning Dynamic Graph Embeddings with Neural Controlled Differential Equations [21.936437653875245]
This paper focuses on representation learning for dynamic graphs with temporal interactions.
We propose a generic differential model for dynamic graphs that characterises the continuously dynamic evolution of node embedding trajectories.
Our framework exhibits several desirable characteristics, including the ability to express dynamics on evolving graphs without integration by segments.
arXiv Detail & Related papers (2023-02-22T12:59:38Z)
- Robust Knowledge Adaptation for Dynamic Graph Neural Networks [61.8505228728726]
We propose Ada-DyGNN: a robust knowledge Adaptation framework via reinforcement learning for Dynamic Graph Neural Networks.
Our approach constitutes the first attempt to explore robust knowledge adaptation via reinforcement learning.
Experiments on three benchmark datasets demonstrate that Ada-DyGNN achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-07-22T02:06:53Z)
- Learning the Evolutionary and Multi-scale Graph Structure for Multivariate Time Series Forecasting [50.901984244738806]
We show how to model the evolutionary and multi-scale interactions of time series.
In particular, we first provide a hierarchical graph structure, combined with dilated convolution, to capture scale-specific correlations.
A unified neural network is provided to integrate the components above to get the final prediction.
arXiv Detail & Related papers (2022-06-28T08:11:12Z)
- Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z)
- A Variational Edge Partition Model for Supervised Graph Representation Learning [51.30365677476971]
This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities.
We partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs.
A variational inference framework is proposed to jointly learn a GNN-based inference network that partitions the edges into communities, the community-specific GNNs, and a GNN-based predictor that combines them for the end classification task (a rough illustration of this combination appears after this list).
arXiv Detail & Related papers (2022-02-07T14:37:50Z)
- ConTIG: Continuous Representation Learning on Temporal Interaction Graphs [32.25218861788686]
ConTIG is a continuous representation method that captures the continuous dynamic evolution of node embedding trajectories.
Our model exploits three factors in dynamic networks: the latest interaction, neighbor features, and inherent characteristics.
Experimental results demonstrate the superiority of ConTIG on temporal link prediction, temporal node recommendation, and dynamic node classification tasks.
arXiv Detail & Related papers (2021-09-27T12:11:24Z)
- TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning [87.38675639186405]
We propose a novel graph neural network approach, called TCL, which deals with the dynamically-evolving graph in a continuous-time fashion.
To the best of our knowledge, this is the first attempt to apply contrastive learning to representation learning on dynamic graphs.
arXiv Detail & Related papers (2021-05-17T15:33:25Z)
- Learning Attribute-Structure Co-Evolutions in Dynamic Graphs [28.848851822725933]
We present a novel framework called CoEvoGNN for modeling dynamic attributed graph sequences.
It preserves the impact of earlier graphs on the current graph by generating embeddings sequentially along the sequence.
It has a temporal self-attention mechanism to model long-range dependencies in the evolution.
arXiv Detail & Related papers (2020-07-25T20:07:28Z)
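As referenced in the variational edge partition entry above, the following is a minimal sketch, under assumed names and sizes, of how community-specific weighted adjacencies can drive separate GNN layers whose outputs are combined for node classification. The edge partition here is a fixed random softmax over communities purely for illustration; in the paper it is inferred variationally.

```python
# Illustrative sketch only: module names, sizes, and the random partition are assumptions.
import torch
import torch.nn as nn

num_nodes, in_dim, hid_dim, num_classes, num_communities = 30, 16, 32, 3, 4

adj = (torch.rand(num_nodes, num_nodes) < 0.1).float()        # observed (unweighted) edges
# Per-edge weights across communities sum to 1, so the weighted adjacencies sum back to adj.
partition = torch.softmax(torch.randn(num_communities, num_nodes, num_nodes), dim=0)
community_adjs = partition * adj                               # community-specific weighted edges

class CommunityGNN(nn.Module):
    """One message-passing layer restricted to a single community's weighted edges."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.lin(a @ x))

gnns = nn.ModuleList([CommunityGNN(in_dim, hid_dim) for _ in range(num_communities)])
predictor = nn.Linear(hid_dim, num_classes)

x = torch.randn(num_nodes, in_dim)                             # node features
# Run each community-specific GNN on its own edges, then combine for prediction.
h = torch.stack([gnn(x, a) for gnn, a in zip(gnns, community_adjs)]).sum(dim=0)
logits = predictor(h)                                          # per-node class scores
```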
This list is automatically generated from the titles and abstracts of the papers in this site.