Latent Evolution Model for Change Point Detection in Time-varying
Networks
- URL: http://arxiv.org/abs/2212.08818v1
- Date: Sat, 17 Dec 2022 07:39:35 GMT
- Title: Latent Evolution Model for Change Point Detection in Time-varying
Networks
- Authors: Yongshun Gong, Xue Dong, Jian Zhang, Meng Chen
- Abstract summary: Graph-based change point detection (CPD) plays an irreplaceable role in discovering anomalous graphs in time-varying networks.
In practice, real-world graphs such as social networks, traffic networks, and rating networks are constantly evolving over time.
We propose a novel CPD method for dynamic graphs via a latent evolution model.
- Score: 11.442584422147368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph-based change point detection (CPD) plays an irreplaceable role in
discovering anomalous graphs in time-varying networks. While several
techniques have been proposed to detect change points by identifying whether
there is a significant difference between the target network and successive
previous ones, they neglect the natural evolution of the network. In practice,
real-world graphs such as social networks, traffic networks, and rating
networks are constantly evolving over time. Considering this problem, we treat
the problem as a prediction task and propose a novel CPD method for dynamic
graphs via a latent evolution model. Our method focuses on learning the
low-dimensional representations of networks and capturing the evolving patterns
of these learned latent representations simultaneously. After having the
evolving patterns, a prediction of the target network can be achieved. Then, we
can detect the change points by comparing the prediction and the actual network
by leveraging a trade-off strategy, which balances the importance between the
prediction network and the normal graph pattern extracted from previous
networks. Intensive experiments conducted on both synthetic and real-world
datasets show the effectiveness and superiority of our model.
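The detection step described above (predict the target network, then score the actual snapshot against a trade-off between the prediction and the normal pattern from previous networks) can be sketched as follows. This is a minimal toy illustration, not the paper's latent evolution model: the prediction and normal-pattern extraction are placeholder window statistics, and `alpha` and `threshold` are assumed parameters.

```python
import numpy as np

def cpd_score(pred_adj, actual_adj, normal_adj, alpha=0.5):
    """Hypothetical change-point score: distance from the actual snapshot
    to a trade-off between the predicted network and the normal pattern
    (alpha is an assumed weighting parameter)."""
    reference = alpha * pred_adj + (1 - alpha) * normal_adj
    return np.linalg.norm(actual_adj - reference, ord="fro")

def detect_change_points(snapshots, window=3, alpha=0.5, threshold=1.0):
    """Flag snapshot t as a change point when its score exceeds a threshold.
    The 'prediction' here is a naive window average -- a stand-in for the
    learned evolution of latent representations in the paper."""
    change_points = []
    for t in range(window, len(snapshots)):
        history = snapshots[t - window:t]
        pred = np.mean(history, axis=0)      # placeholder prediction
        normal = np.median(history, axis=0)  # placeholder normal pattern
        if cpd_score(pred, snapshots[t], normal, alpha) > threshold:
            change_points.append(t)
    return change_points
```

In this sketch an anomalous snapshot scores high because it disagrees with both the short-term prediction and the longer-term normal pattern, which is the balance the trade-off strategy is meant to capture.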
Related papers
- Out-of-Distribution Generalized Dynamic Graph Neural Network with
Disentangled Intervention and Invariance Promotion [61.751257172868186]
Dynamic graph neural networks (DyGNNs) have demonstrated powerful predictive abilities by exploiting graph and temporal dynamics.
Existing DyGNNs fail to handle distribution shifts, which naturally exist in dynamic graphs.
arXiv Detail & Related papers (2023-11-24T02:42:42Z) - DURENDAL: Graph deep learning framework for temporal heterogeneous
networks [0.5156484100374057]
Temporal heterogeneous networks (THNs) are evolving networks that characterize many real-world applications.
We propose DURENDAL, a graph deep learning framework for THNs.
arXiv Detail & Related papers (2023-09-30T10:46:01Z) - Dynamic Causal Explanation Based Diffusion-Variational Graph Neural
Network for Spatio-temporal Forecasting [60.03169701753824]
We propose a novel Dynamic Diffusion-Variational Graph Neural Network (DVGNN) for spatio-temporal forecasting.
The proposed DVGNN model outperforms state-of-the-art approaches and achieves outstanding Root Mean Squared Error results.
arXiv Detail & Related papers (2023-05-16T11:38:19Z) - Influencer Detection with Dynamic Graph Neural Networks [56.1837101824783]
We investigate different dynamic Graph Neural Networks (GNNs) configurations for influencer detection.
We show that using deep multi-head attention in GNN and encoding temporal attributes significantly improves performance.
arXiv Detail & Related papers (2022-11-15T13:00:25Z) - Graph similarity learning for change-point detection in dynamic networks [15.694880385913534]
We consider dynamic networks that are temporal sequences of graph snapshots.
This task is often termed network change-point detection and has numerous applications, such as fraud detection or physical motion monitoring.
We design a method to perform online network change-point detection that can adapt to the specific network domain and localise changes with no delay.
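The similarity-based online detection described in this entry can be illustrated with a toy sketch. The paper learns a domain-adapted graph similarity; here a plain Frobenius distance between adjacency matrices stands in for it, and `threshold` is an assumed parameter.

```python
import numpy as np

def frobenius_similarity(a, b):
    """Toy graph similarity: negative Frobenius distance between adjacency
    matrices (a stand-in for the learned, domain-specific similarity)."""
    return -np.linalg.norm(a - b, ord="fro")

def online_cpd(stream, threshold=-1.0):
    """Flag time t as a change point when the similarity between snapshot
    t and snapshot t-1 drops below a threshold; snapshots are processed
    one at a time, so changes are localised with no delay."""
    flagged, prev = [], None
    for t, snap in enumerate(stream):
        if prev is not None and frobenius_similarity(prev, snap) < threshold:
            flagged.append(t)
        prev = snap
    return flagged
```

Because each snapshot is compared only to its predecessor as it arrives, the flag is raised at the changed snapshot itself, which mirrors the "no delay" localisation claim.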
arXiv Detail & Related papers (2022-03-29T12:16:38Z) - Handling Distribution Shifts on Graphs: An Invariance Perspective [78.31180235269035]
We formulate the OOD problem on graphs and develop a new invariant learning approach, Explore-to-Extrapolate Risk Minimization (EERM)
EERM resorts to multiple context explorers that are adversarially trained to maximize the variance of risks from multiple virtual environments.
We prove the validity of our method by theoretically showing its guarantee of a valid OOD solution.
arXiv Detail & Related papers (2022-02-05T02:31:01Z) - TempNodeEmb: Temporal Node Embedding considering temporal edge influence
matrix [0.8941624592392746]
Predicting future links among the nodes in temporal networks reveals an important aspect of the evolution of temporal networks.
Some approaches consider a simplified representation of temporal networks, but these representations are high-dimensional and generally sparse matrices.
We propose a new node embedding technique which exploits the evolving nature of the networks considering a simple three-layer graph neural network at each time step.
arXiv Detail & Related papers (2020-08-16T15:39:07Z) - On Robustness and Transferability of Convolutional Neural Networks [147.71743081671508]
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time.
We find that increasing both the training set and model sizes significantly improves distributional shift robustness.
arXiv Detail & Related papers (2020-07-16T18:39:04Z) - Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph
Link Prediction [69.1473775184952]
We introduce a realistic problem of few-shot out-of-graph link prediction.
We tackle this problem with a novel transductive meta-learning framework.
We validate our model on multiple benchmark datasets for knowledge graph completion and drug-drug interaction prediction.
arXiv Detail & Related papers (2020-06-11T17:42:46Z) - Link Prediction for Temporally Consistent Networks [6.981204218036187]
Link prediction estimates the next relationship in dynamic networks.
The use of adjacency matrices to represent dynamically evolving networks limits the ability to analytically learn from heterogeneous, sparse, or forming networks.
We propose a new method of canonically representing heterogeneous time-evolving activities as a temporally parameterized network model.
arXiv Detail & Related papers (2020-06-06T07:28:03Z) - Modeling Dynamic Heterogeneous Network for Link Prediction using
Hierarchical Attention with Temporal RNN [16.362525151483084]
We propose a novel dynamic heterogeneous network embedding method, termed as DyHATR.
It uses hierarchical attention to learn heterogeneous information and incorporates recurrent neural networks with temporal attention to capture evolutionary patterns.
We benchmark our method on four real-world datasets for the task of link prediction.
arXiv Detail & Related papers (2020-04-01T17:16:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.