Graph Anomaly Detection with Graph Neural Networks: Current Status and
Challenges
- URL: http://arxiv.org/abs/2209.14930v1
- Date: Thu, 29 Sep 2022 16:47:57 GMT
- Title: Graph Anomaly Detection with Graph Neural Networks: Current Status and
Challenges
- Authors: Hwan Kim, Byung Suk Lee, Won-Yong Shin, Sungsu Lim
- Abstract summary: Graph neural networks (GNNs) have been studied extensively and have successfully performed difficult machine learning tasks.
This survey is the first comprehensive review of graph anomaly detection methods based on GNNs.
- Score: 9.076649460696402
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graphs are used widely to model complex systems, and detecting anomalies in a
graph is an important task in the analysis of complex systems. Graph anomalies
are patterns in a graph that do not conform to normal patterns expected of the
attributes and/or structures of the graph. In recent years, graph neural
networks (GNNs) have been studied extensively and have successfully performed
difficult machine learning tasks in node classification, link prediction, and
graph classification, thanks to the highly expressive capability of message
passing in effectively learning graph representations. To solve the graph
anomaly detection problem, GNN-based methods leverage information about the
graph attributes (or features) and/or structures to learn to score anomalies
appropriately. In this survey, we review the recent advances made in detecting
graph anomalies using GNN models. Specifically, we summarize GNN-based methods
according to the graph type (i.e., static and dynamic), the anomaly type (i.e.,
node, edge, subgraph, and whole graph), and the network architecture (e.g.,
graph autoencoder, graph convolutional network). To the best of our knowledge,
this survey is the first comprehensive review of graph anomaly detection
methods based on GNNs.
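To make the graph-autoencoder family concrete, the following is a minimal sketch of reconstruction-based node anomaly scoring: a GCN-style encoder embeds nodes, decoders reconstruct the adjacency matrix and the node attributes, and each node is scored by its reconstruction error. Everything here (plain NumPy, a single random-weight layer, the 0.5 mixing weight) is a hypothetical simplification for illustration, not the method of any particular paper covered by the survey.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize an adjacency matrix with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def anomaly_scores(A, X, hidden_dim=16, seed=0):
    """Score each node by how poorly a GCN-style autoencoder reconstructs its
    attributes and its row of the adjacency matrix.
    NOTE: weights are random here; in practice they would be trained to
    minimize the same reconstruction errors used below for scoring."""
    rng = np.random.default_rng(seed)
    A_norm = normalize_adj(A)

    # One-layer GCN-style encoder: Z = ReLU(A_norm X W)
    W = rng.normal(scale=0.1, size=(X.shape[1], hidden_dim))
    Z = np.maximum(A_norm @ X @ W, 0.0)

    # Structure decoder: reconstruct adjacency as sigmoid(Z Z^T)
    A_rec = 1.0 / (1.0 + np.exp(-Z @ Z.T))
    # Attribute decoder: map embeddings back to the attribute space
    W_dec = rng.normal(scale=0.1, size=(hidden_dim, X.shape[1]))
    X_rec = A_norm @ Z @ W_dec

    # Per-node anomaly score: weighted sum of structure and attribute errors
    struct_err = np.linalg.norm(A - A_rec, axis=1)
    attr_err = np.linalg.norm(X - X_rec, axis=1)
    alpha = 0.5
    return alpha * struct_err + (1 - alpha) * attr_err

# Toy usage: 5 nodes, 4 attributes; higher score = more anomalous.
A = np.array([[0, 1, 1, 0, 0], [1, 0, 1, 0, 0], [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1], [0, 0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(1).normal(size=(5, 4))
print(anomaly_scores(A, X))
```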
Related papers
- GNNAnatomy: Systematic Generation and Evaluation of Multi-Level Explanations for Graph Neural Networks [20.05098366613674]
We introduce GNNAnatomy, a visual analytics system designed to generate and evaluate multi-level explanations for graph classification tasks.
GNNAnatomy uses graphlets, primitive graph substructures, to identify the most critical substructures in a graph class by analyzing the correlation between GNN predictions and graphlet frequencies.
We demonstrate the effectiveness of GNNAnatomy through case studies on synthetic and real-world graph datasets from sociology and biology domains.
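The correlation analysis at the heart of GNNAnatomy can be illustrated with a toy sketch: compute a graphlet statistic per graph (here, only the triangle frequency via transitivity) and correlate it with the GNN's predicted class probabilities. The graphs and probabilities below are hypothetical stand-ins; real usage would track many graphlet types.

```python
import numpy as np
import networkx as nx

# Hypothetical graph collection and stand-in GNN class probabilities
# (in GNNAnatomy these would come from a trained graph classifier).
graphs = [nx.gnp_random_graph(30, p, seed=i)
          for i, p in enumerate([0.05, 0.1, 0.2, 0.3, 0.4])]
gnn_probs = np.array([0.10, 0.25, 0.55, 0.70, 0.90])

# Transitivity = relative frequency of the triangle graphlet among connected
# node triples; one of many graphlet statistics one could track.
triangle_freq = np.array([nx.transitivity(G) for G in graphs])

# A strong correlation suggests the triangle substructure drives predictions.
corr = np.corrcoef(triangle_freq, gnn_probs)[0, 1]
print(f"correlation(triangle frequency, GNN prediction) = {corr:.2f}")
```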
arXiv Detail & Related papers (2024-06-06T23:09:54Z)
- ADA-GAD: Anomaly-Denoised Autoencoders for Graph Anomaly Detection [84.0718034981805]
We introduce a novel framework called Anomaly-Denoised Autoencoders for Graph Anomaly Detection (ADA-GAD).
In the first stage, we design a learning-free anomaly-denoised augmentation method to generate graphs with reduced anomaly levels.
In the next stage, the decoders are retrained for detection on the original graph.
arXiv Detail & Related papers (2023-12-22T09:02:01Z)
- Towards Self-Interpretable Graph-Level Anomaly Detection [73.1152604947837]
Graph-level anomaly detection (GLAD) aims to identify graphs that exhibit notable dissimilarity compared to the majority in a collection.
We propose a Self-Interpretable Graph aNomaly dETection model (SIGNET) that detects anomalous graphs and simultaneously generates informative explanations.
arXiv Detail & Related papers (2023-10-25T10:10:07Z)
- FoSR: First-order spectral rewiring for addressing oversquashing in GNNs [0.0]
Graph neural networks (GNNs) are able to leverage the structure of graph data by passing messages along the edges of the graph.
We propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph.
We find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.
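As a rough illustration of rewiring by edge addition, the sketch below greedily inserts the non-edge that most increases the spectral gap (the second-smallest Laplacian eigenvalue), found by brute-force search. FoSR itself avoids this exhaustive search with a first-order approximation, so this is only a toy version of the objective, not the paper's algorithm.

```python
import numpy as np
import networkx as nx

def spectral_gap(G):
    """Second-smallest eigenvalue of the graph Laplacian (algebraic connectivity)."""
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
    return lam[1]

def add_edges_brute_force(G, num_edges=2):
    """Greedily add the non-edge that most increases the spectral gap.
    Exhaustive search for clarity; FoSR replaces it with a cheap first-order
    estimate of the same improvement."""
    G = G.copy()
    for _ in range(num_edges):
        best_edge, best_gap = None, spectral_gap(G)
        for u, v in list(nx.non_edges(G)):
            G.add_edge(u, v)
            gap = spectral_gap(G)
            if gap > best_gap:
                best_edge, best_gap = (u, v), gap
            G.remove_edge(u, v)
        if best_edge is None:
            break
        G.add_edge(*best_edge)
    return G

# A barbell graph is a classic oversquashing example: two dense cliques
# joined by a narrow bridge. Rewiring widens the bottleneck.
G = nx.barbell_graph(5, 1)
print("spectral gap before:", round(spectral_gap(G), 4))
print("spectral gap after: ", round(spectral_gap(add_edges_brute_force(G)), 4))
```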
arXiv Detail & Related papers (2022-10-21T07:58:03Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
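The unroll-and-truncate idea can be sketched as a small PyTorch module in which each layer performs one proximal-gradient step: a learnable gradient update on a quadratic data-fit term followed by a soft-threshold that promotes sparse edge weights. The data-fit term and update below are generic stand-ins for illustration; the actual GDN update differs.

```python
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    """Proximal operator of the L1 norm; shrinks small entries to zero."""
    return torch.sign(x) * torch.relu(torch.abs(x) - theta)

class UnrolledProxGradNet(nn.Module):
    """Truncated, unrolled proximal-gradient iterations (LISTA-style).

    Each layer refines the latent graph estimate S from the observation Y:
        S <- soft_threshold(S - step_k * (W_k @ S - Y), theta_k)
    with W_k, step_k, theta_k learned per layer. This is a generic sketch of
    the unrolling pattern, not the exact GDN update rule.
    """
    def __init__(self, n_nodes, n_layers=5):
        super().__init__()
        self.W = nn.ParameterList(
            [nn.Parameter(torch.eye(n_nodes) + 0.01 * torch.randn(n_nodes, n_nodes))
             for _ in range(n_layers)])
        self.step = nn.Parameter(torch.full((n_layers,), 0.5))
        self.theta = nn.Parameter(torch.full((n_layers,), 0.05))

    def forward(self, Y):
        S = torch.zeros_like(Y)                   # initial latent-graph estimate
        for k in range(len(self.W)):
            grad = self.W[k] @ S - Y              # learned quadratic data-fit gradient
            S = soft_threshold(S - self.step[k] * grad, self.theta[k])
        return 0.5 * (S + S.transpose(-1, -2))    # symmetrize the adjacency estimate

# Usage sketch: in supervised training, the output would be compared to
# ground-truth graphs with, e.g., an MSE or cross-entropy loss.
Y = torch.rand(8, 8)
model = UnrolledProxGradNet(n_nodes=8)
print(model(Y).shape)
```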
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Deep Graph-level Anomaly Detection by Glocal Knowledge Distillation [61.39364567221311]
Graph-level anomaly detection (GAD) describes the problem of detecting graphs that are abnormal in their structure and/or the features of their nodes.
One of the challenges in GAD is to devise graph representations that enable the detection of both locally- and globally-anomalous graphs.
We introduce a novel deep anomaly detection approach for GAD that learns rich global and local normal pattern information by joint random distillation of graph and node representations.
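Random distillation here means training a learner network to match a frozen, randomly initialized target network on normal graphs; at test time, graphs whose representations remain hard to imitate receive high anomaly scores. The sketch below uses a tiny mean-aggregation encoder and a dense adjacency matrix as hypothetical stand-ins for the paper's architecture, combining a node-level and a graph-level ("glocal") error.

```python
import torch
import torch.nn as nn

class TinyGraphEncoder(nn.Module):
    """One round of mean-aggregation message passing plus mean pooling.
    A hypothetical stand-in for the GNN encoders used in the paper."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, A, X):
        deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
        H = torch.relu(self.lin((A @ X) / deg))   # node-level representations
        return H, H.mean(dim=0)                   # (node-level, graph-level)

def distillation_error(A, X, target, learner):
    """'Glocal' anomaly score: how poorly the learner reproduces the frozen
    random target at both the node level and the graph level."""
    with torch.no_grad():
        t_node, t_graph = target(A, X)
    l_node, l_graph = learner(A, X)
    return ((l_node - t_node) ** 2).mean() + ((l_graph - t_graph) ** 2).mean()

# Toy setup: one random undirected graph with 6 nodes and 4 attributes.
torch.manual_seed(0)
A = (torch.rand(6, 6) > 0.6).float()
A = ((A + A.T) > 0).float()
A.fill_diagonal_(0)
X = torch.randn(6, 4)

target = TinyGraphEncoder(4, 8)    # frozen, randomly initialized
learner = TinyGraphEncoder(4, 8)   # trained to imitate the target
opt = torch.optim.Adam(learner.parameters(), lr=1e-2)

# Training minimizes the same error on normal graphs only; anomalous test
# graphs then tend to retain a larger distillation error (higher score).
for _ in range(100):
    loss = distillation_error(A, X, target, learner)
    opt.zero_grad(); loss.backward(); opt.step()
print("anomaly score after training:", float(distillation_error(A, X, target, learner)))
```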
arXiv Detail & Related papers (2021-12-19T05:04:53Z)
- Transferability Properties of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) are provably successful at learning representations from data supported on moderate-scale graphs.
We study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.
Our results show that (i) the transference error decreases with the graph size, and (ii) graph filters have a transferability-discriminability tradeoff that in GNNs is alleviated by the scattering behavior of the nonlinearity.
arXiv Detail & Related papers (2021-12-09T00:08:09Z)
- Lifelong Graph Learning [6.282881904019272]
We bridge graph learning and lifelong learning by converting a continual graph learning problem to a regular graph learning problem.
We show that feature graph networks (FGN) achieve superior performance in two applications, i.e., lifelong human action recognition with wearable devices and feature matching.
arXiv Detail & Related papers (2020-09-01T18:21:34Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
- Graph Signal Processing -- Part III: Machine Learning on Graphs, from Graph Topology to Applications [19.29066508374268]
Part III of this monograph starts by addressing ways to learn graph topology.
A particular emphasis is on graph topology definition based on the correlation and precision matrices of the observed data.
For learning sparse graphs, the least absolute shrinkage and selection operator (LASSO) is employed.
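Concretely, sparse topology learning from a precision matrix is commonly done with the graphical lasso, whose non-zero off-diagonal precision entries define the graph edges. A minimal sketch with scikit-learn, on hypothetical synthetic signals where each column is one node:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Hypothetical data: 200 observations of a 5-node graph signal where
# node 0 drives nodes 1 and 2, and node 3 drives node 4.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 2))
X = np.column_stack([
    base[:, 0],
    base[:, 0] + 0.3 * rng.normal(size=200),
    base[:, 0] + 0.3 * rng.normal(size=200),
    base[:, 1],
    base[:, 1] + 0.3 * rng.normal(size=200),
])

# Graphical lasso: L1-penalized sparse inverse-covariance (precision) estimation.
model = GraphicalLasso(alpha=0.2).fit(X)
P = model.precision_

# Non-zero off-diagonal precision entries define the learned graph edges.
adj = (np.abs(P) > 1e-4) & ~np.eye(P.shape[0], dtype=bool)
print(adj.astype(int))
```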
arXiv Detail & Related papers (2020-01-02T13:14:27Z)