GloDyNE: Global Topology Preserving Dynamic Network Embedding
- URL: http://arxiv.org/abs/2008.01935v4
- Date: Mon, 6 Dec 2021 03:05:41 GMT
- Title: GloDyNE: Global Topology Preserving Dynamic Network Embedding
- Authors: Chengbin Hou, Han Zhang, Shan He, Ke Tang
- Abstract summary: Dynamic Network Embedding (DNE) aims to update node embeddings while preserving network topology at each time step.
We propose a novel strategy to diversely select representative nodes over a network, coordinated with a new incremental learning paradigm.
Experiments show that GloDyNE, with only a small fraction of nodes selected, can already achieve superior or comparable performance.
- Score: 31.269883917366478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning low-dimensional topological representations of a network in dynamic environments is attracting much attention due to the time-evolving nature of many real-world networks. The main and common objective of Dynamic Network Embedding (DNE) is to efficiently update node embeddings while preserving network topology at each time step. The idea behind most existing DNE methods is to capture the topological changes at or around the most affected nodes (instead of all nodes) and update node embeddings accordingly. Unfortunately, this kind of approximation, although it improves efficiency, cannot effectively preserve the global topology of a dynamic network at each time step, because it ignores the inactive sub-networks that receive accumulated topological changes propagated via high-order proximity. To tackle this challenge, we propose a novel node selection strategy to diversely select representative nodes over the network, which is coordinated with a new incremental learning paradigm for the Skip-Gram based embedding approach. Extensive experiments show that GloDyNE, with only a small fraction of nodes selected, can already achieve superior or comparable performance w.r.t. the state-of-the-art DNE methods on three typical downstream tasks. In particular, GloDyNE significantly outperforms other methods in the graph reconstruction task, which demonstrates its ability to preserve global topology. The source code is available at
https://github.com/houchengbin/GloDyNE
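To make the high-level idea above concrete, here is a minimal sketch of one plausible reading of it: partition the current snapshot, pick one representative node per partition (favouring nodes whose incident edges changed most), run truncated random walks rooted at the selected nodes, and incrementally update a Skip-Gram (SGNS) model with the new walks only. Everything below is an assumption-laden illustration, not the authors' method: the change score, the crude striding partition, the helper names (random_walks, select_representative_nodes), and the use of gensim as the SGNS backend are choices made here; the official implementation is in the repository linked above.

```python
# Minimal sketch of the idea in the abstract (NOT the authors' implementation;
# the official code is at https://github.com/houchengbin/GloDyNE):
# 1) partition the current snapshot, 2) pick one representative node per
# partition, favouring nodes whose incident edges changed most, 3) run
# truncated random walks rooted at the selected nodes, 4) incrementally
# update a Skip-Gram (SGNS) model with the new walks only.
import random

import networkx as nx
from gensim.models import Word2Vec  # SGNS backend assumed here


def random_walks(G, roots, walk_len=80, walks_per_root=10):
    """Truncated random walks started from the given root nodes."""
    walks = []
    for _ in range(walks_per_root):
        for root in roots:
            walk, cur = [str(root)], root
            for _ in range(walk_len - 1):
                nbrs = list(G.neighbors(cur))
                if not nbrs:
                    break
                cur = random.choice(nbrs)
                walk.append(str(cur))
            walks.append(walk)
    return walks


def select_representative_nodes(G_prev, G_cur, num_parts):
    """One node per partition, preferring nodes with the most edge changes."""
    changed = {
        n: len(set(G_cur.edges(n)) ^ set(G_prev.edges(n)))
        for n in G_cur.nodes() if n in G_prev
    }
    nodes = list(G_cur.nodes())
    # Crude striding "partition" for illustration; the paper partitions the
    # network into sub-networks properly to spread the selection diversely.
    parts = [nodes[i::num_parts] for i in range(num_parts)]
    return [max(p, key=lambda n: changed.get(n, 0)) for p in parts if p]


# Toy stream of snapshots standing in for a real dynamic network.
snapshots = [nx.erdos_renyi_graph(200, 0.05, seed=t) for t in range(3)]

# Initial model trained on walks from every node of the first snapshot.
walks = random_walks(snapshots[0], list(snapshots[0].nodes()))
model = Word2Vec(walks, vector_size=128, window=10, sg=1, negative=5,
                 min_count=1, workers=4, epochs=5)

for t in range(1, len(snapshots)):
    roots = select_representative_nodes(snapshots[t - 1], snapshots[t],
                                        num_parts=20)
    new_walks = random_walks(snapshots[t], roots)
    model.build_vocab(new_walks, update=True)   # register newly seen nodes
    model.train(new_walks, total_examples=len(new_walks), epochs=model.epochs)
    # Embedding of the current snapshot (nodes seen so far only).
    emb_t = {n: model.wv[str(n)] for n in snapshots[t].nodes()
             if str(n) in model.wv}
```

The point the sketch tries to capture is that, at each time step, only walks rooted at a few diverse, recently changed nodes are fed back into the model, rather than retraining over the whole network.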
Related papers
- Discovering Message Passing Hierarchies for Mesh-Based Physics Simulation [61.89682310797067]
We introduce DHMP, which learns Dynamic Hierarchies for Message Passing networks through a differentiable node selection method.
Our experiments demonstrate the effectiveness of DHMP, achieving 22.7% improvement on average compared to recent fixed-hierarchy message passing networks.
arXiv Detail & Related papers (2024-10-03T15:18:00Z) - Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned, separately, for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z) - Collaborative Graph Neural Networks for Attributed Network Embedding [63.39495932900291]
Graph neural networks (GNNs) have shown prominent performance on attributed network embedding.
We propose COllaborative graph Neural Networks--CONN, a tailored GNN architecture for network embedding.
arXiv Detail & Related papers (2023-07-22T04:52:27Z) - Accelerating Dynamic Network Embedding with Billions of Parameter Updates to Milliseconds [27.98359191399847]
We propose a novel dynamic network embedding paradigm that rotates and scales the axes of the embedding space instead of performing node-by-node updates (a toy sketch of this idea appears after this list).
Specifically, we propose the Dynamic Adjacency Matrix Factorization (DAMF) algorithm, which achieves efficient and accurate dynamic network embedding.
Experiments of node classification, link prediction, and graph reconstruction on different-sized dynamic graphs suggest that DAMF advances dynamic network embedding.
arXiv Detail & Related papers (2023-06-15T09:02:17Z) - Temporal Aggregation and Propagation Graph Neural Networks for Dynamic Representation [67.26422477327179]
Temporal graphs exhibit dynamic interactions between nodes over continuous time.
We propose a novel method of temporal graph convolution over the whole neighborhood.
Our proposed TAP-GNN outperforms existing temporal graph methods by a large margin in terms of both predictive performance and online inference latency.
arXiv Detail & Related papers (2023-04-15T08:17:18Z) - Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of routing every input through the same path, DG-Net aggregates features dynamically at each node, which gives the network greater representation ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z) - DINE: A Framework for Deep Incomplete Network Embedding [33.97952453310253]
We propose a Deep Incomplete Network Embedding method, namely DINE.
We first complete the missing part including both nodes and edges in a partially observable network by using the expectation-maximization framework.
We evaluate DINE over three networks on multi-label classification and link prediction tasks.
arXiv Detail & Related papers (2020-08-09T04:59:35Z) - Online Dynamic Network Embedding [26.203786679460528]
We propose an algorithm, Recurrent Neural Network Embedding (RNNE), to deal with dynamic networks.
RNNE takes into account both static and dynamic characteristics of the network.
We evaluate RNNE on five networks and compare with several state-of-the-art algorithms.
arXiv Detail & Related papers (2020-06-30T02:21:37Z) - Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
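As a side note on the DAMF entry above, the phrase "rotates and scales the axes of the embedding space instead of a node-by-node update" can be illustrated conceptually: a single global linear transform updates every node's embedding at once. The snippet below is only a toy of that general idea, with an arbitrary transform and arbitrary dimensions; it is not the DAMF algorithm.

```python
# Toy illustration of axis-level updates (conceptual only, not DAMF itself):
# instead of looping over nodes and rewriting embedding rows one by one,
# apply a single rotation-and-scaling transform to the whole matrix.
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(10_000, 128))        # current embeddings: n nodes x d dims

# An arbitrary d x d rotation (orthogonal factor of a QR decomposition)
# followed by a mild per-axis rescaling.
Q, _ = np.linalg.qr(rng.normal(size=(128, 128)))
scale = np.diag(rng.uniform(0.9, 1.1, size=128))

E_new = E @ Q @ scale                     # every node updated by one matmul
```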