Learning Graph Neural Networks for Multivariate Time Series Anomaly
Detection
- URL: http://arxiv.org/abs/2111.08082v1
- Date: Mon, 15 Nov 2021 21:05:58 GMT
- Authors: Saswati Ray, Sana Lakdawala, Mononito Goswami, Chufan Gao
- Abstract summary: We propose GLUE (Graph Deviation Network with Local Uncertainty Estimation).
GLUE learns complex dependencies between variables and uses them to better identify anomalous behavior.
We also show that GLUE learns meaningful sensor embeddings that cluster similar sensors together.
- Score: 8.688578727646409
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose GLUE (Graph Deviation Network with Local Uncertainty
Estimation), building on the recently proposed Graph Deviation Network (GDN).
GLUE not only automatically learns complex dependencies between variables and
uses them to better identify anomalous behavior, but also quantifies its
predictive uncertainty, allowing us to account for variation in the data as
well as to set more interpretable anomaly detection thresholds. Results on two
real-world datasets show that optimizing the negative Gaussian log
likelihood is reasonable: GLUE's forecasting results are on par with GDN's
and in fact better than the vector autoregressor baseline, which is significant
given that GDN directly optimizes the MSE loss. In summary, our experiments
demonstrate that GLUE is competitive with GDN at anomaly detection, with the
added benefit of uncertainty estimates. We also show that GLUE learns
meaningful sensor embeddings that cluster similar sensors together.
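The abstract above describes forecasting with a Gaussian likelihood and using the predictive uncertainty to set interpretable anomaly thresholds. A minimal sketch of that idea follows; the function names and the standardized-score thresholding are illustrative assumptions, not GLUE's actual implementation:

```python
import math

def gaussian_nll(y, mu, sigma):
    """Negative Gaussian log likelihood, averaged over a series.

    y, mu, sigma are equal-length sequences of observations, forecast
    means, and forecast standard deviations.
    """
    return sum(
        0.5 * math.log(2 * math.pi * s * s) + (t - m) ** 2 / (2 * s * s)
        for t, m, s in zip(y, mu, sigma)
    ) / len(y)

def anomaly_score(y, mu, sigma):
    """Standardized deviation of each observation from its forecast.

    Dividing by the predicted sigma accounts for the variation in the
    data and makes the threshold interpretable: e.g. flag points more
    than 3 predicted standard deviations from the forecast.
    """
    return [abs(t - m) / s for t, m, s in zip(y, mu, sigma)]
```

A model trained by minimizing `gaussian_nll` is rewarded for calibrated sigmas, so the same sigma can then be reused at detection time to normalize deviations.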
Related papers
- LASE: Learned Adjacency Spectral Embeddings [7.612218105739107]
We learn nodal Adjacency Spectral Embeddings (ASE) from graph inputs.
LASE is interpretable, parameter efficient, robust to inputs with unobserved edges.
LASE layers combine Graph Convolutional Network (GCN) and fully-connected Graph Attention Network (GAT) modules.
arXiv Detail & Related papers (2024-12-23T17:35:19Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of the GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model can effectively enhance generalization under various types of distribution shifts and yields up to a 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
- Accurate and Scalable Estimation of Epistemic Uncertainty for Graph Neural Networks [38.17680286557666]
We propose a novel training framework designed to improve intrinsic GNN uncertainty estimates.
Our framework adapts the principle of centering data to graph data through novel graph anchoring strategies.
Our work provides insights into uncertainty estimation for GNNs, and demonstrates the utility of G-$\Delta$UQ in obtaining reliable estimates.
arXiv Detail & Related papers (2024-01-07T00:58:33Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and could serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Graph Neural Networks (GNNs) are proposed without considering the distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for predictions, even when these are spurious correlations.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z)
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- Gaussian Gated Linear Networks [32.27304928359326]
We propose the Gaussian Gated Linear Network (G-GLN), an extension to the recently proposed GLN family of deep neural networks.
Instead of using backpropagation to learn features, GLNs have a distributed and local credit assignment mechanism based on optimizing a convex objective.
arXiv Detail & Related papers (2020-06-10T17:25:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.