Guarding Graph Neural Networks for Unsupervised Graph Anomaly Detection
- URL: http://arxiv.org/abs/2404.16366v1
- Date: Thu, 25 Apr 2024 07:09:05 GMT
- Title: Guarding Graph Neural Networks for Unsupervised Graph Anomaly Detection
- Authors: Yuanchen Bei, Sheng Zhou, Jinke Shi, Yao Ma, Haishuai Wang, Jiajun Bu,
- Abstract summary: Unsupervised graph anomaly detection aims at identifying rare patterns that deviate from the majority in a graph without the aid of labels.
Recent advances have utilized Graph Neural Networks (GNNs) to learn effective node representations.
We propose a framework for Guarding Graph Neural Networks for Unsupervised Graph Anomaly Detection (G3AD).
- Score: 16.485082741239808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised graph anomaly detection aims at identifying rare patterns that deviate from the majority in a graph without the aid of labels, which is important for a variety of real-world applications. Recent advances have utilized Graph Neural Networks (GNNs) to learn effective node representations by aggregating information from neighborhoods. This is motivated by the hypothesis that nodes in the graph tend to exhibit consistent behaviors with their neighborhoods. However, such consistency can be disrupted by graph anomalies in multiple ways. Most existing methods directly employ GNNs to learn representations, disregarding the negative impact of graph anomalies on GNNs, resulting in sub-optimal node representations and anomaly detection performance. While a few recent approaches have redesigned GNNs for graph anomaly detection under semi-supervised label guidance, how to address the adverse effects of graph anomalies on GNNs in unsupervised scenarios and learn effective representations for anomaly detection are still under-explored. To bridge this gap, in this paper, we propose a simple yet effective framework for Guarding Graph Neural Networks for Unsupervised Graph Anomaly Detection (G3AD). Specifically, G3AD introduces two auxiliary networks along with correlation constraints to guard the GNNs from inconsistent information encoding. Furthermore, G3AD introduces an adaptive caching module to guard the GNNs from solely reconstructing the observed data that contains anomalies. Extensive experiments demonstrate that our proposed G3AD can outperform seventeen state-of-the-art methods on both synthetic and real-world datasets.
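The abstract describes G3AD only at a high level (two auxiliary networks, correlation constraints, and an adaptive caching module), and the paper's actual architecture is not reproduced here. The PyTorch sketch below is a minimal, hypothetical stand-in for the general recipe of guarding a GNN autoencoder with an attribute-only auxiliary encoder, a cosine correlation constraint, and reconstruction-error anomaly scores; all names, dimensions, and loss weights (GuardedGAE, alpha, beta) are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize a dense 0/1 adjacency matrix with self-loops."""
    a_hat = adj + torch.eye(adj.size(0), device=adj.device)
    deg_inv_sqrt = a_hat.sum(dim=1).clamp(min=1e-12).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)


class GCNLayer(nn.Module):
    """One graph-convolution layer: propagate linearly transformed features over A_hat."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_norm):
        return a_norm @ self.lin(x)


class GuardedGAE(nn.Module):
    """Hypothetical guarded autoencoder: a GNN encoder plus an auxiliary
    attribute-only encoder, tied together by a correlation constraint."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.gnn = GCNLayer(in_dim, hid_dim)        # aggregates neighborhood information
        self.aux = nn.Linear(in_dim, hid_dim)       # auxiliary encoder that ignores the graph
        self.attr_dec = nn.Linear(hid_dim, in_dim)  # decodes node attributes

    def forward(self, x, a_norm):
        z_gnn = F.relu(self.gnn(x, a_norm))
        z_aux = F.relu(self.aux(x))
        x_rec = self.attr_dec(z_gnn)                # attribute reconstruction
        a_rec = torch.sigmoid(z_gnn @ z_gnn.t())    # structure reconstruction
        return z_gnn, z_aux, x_rec, a_rec


def loss_and_scores(model, x, adj, alpha=0.5, beta=0.1):
    """Training loss plus per-node anomaly scores (reconstruction errors)."""
    a_norm = normalize_adj(adj)
    z_gnn, z_aux, x_rec, a_rec = model(x, a_norm)
    attr_err = ((x_rec - x) ** 2).mean(dim=1)                 # per-node attribute error
    stru_err = ((a_rec - adj) ** 2).mean(dim=1)               # per-node structure error
    corr_gap = 1 - F.cosine_similarity(z_gnn, z_aux, dim=1)   # keep the two views correlated
    loss = (alpha * attr_err + (1 - alpha) * stru_err + beta * corr_gap).mean()
    scores = (alpha * attr_err + (1 - alpha) * stru_err).detach()
    return loss, scores


# Toy usage: random features and a random symmetric adjacency matrix.
x = torch.randn(20, 8)
adj = (torch.rand(20, 20) > 0.8).float()
adj = ((adj + adj.t()) > 0).float()
adj.fill_diagonal_(0)
model = GuardedGAE(in_dim=8, hid_dim=16)
loss, scores = loss_and_scores(model, x, adj)
```

In this toy version, nodes with large attribute or structure reconstruction errors receive high anomaly scores; G3AD's adaptive caching module, which guards the GNN from solely reconstructing the observed (anomaly-contaminated) data, has no counterpart in this sketch.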
Related papers
- ADA-GAD: Anomaly-Denoised Autoencoders for Graph Anomaly Detection [84.0718034981805]
We introduce a novel framework called Anomaly-Denoised Autoencoders for Graph Anomaly Detection (ADA-GAD).
In the first stage, we design a learning-free anomaly-denoised augmentation method to generate graphs with reduced anomaly levels.
In the next stage, the decoders are retrained for detection on the original graph.
arXiv Detail & Related papers (2023-12-22T09:02:01Z)
- BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection [50.26074811655596]
We propose a novel unified graph anomaly detection framework based on bootstrapped self-supervised learning (named BOURNE).
By swapping the context embeddings between nodes and edges, we enable the mutual detection of node and edge anomalies.
BOURNE can eliminate the need for negative sampling, thereby enhancing its efficiency in handling large graphs.
arXiv Detail & Related papers (2023-07-28T00:44:57Z)
- GAD-NR: Graph Anomaly Detection via Neighborhood Reconstruction [36.56631787651942]
Graph Auto-Encoders (GAEs) encode graph data into node representations and identify anomalies by assessing the reconstruction quality of the graphs based on these representations.
We propose GAD-NR, a new variant of GAE that incorporates neighborhood reconstruction for graph anomaly detection.
Extensive experimentation conducted on six real-world datasets validates the effectiveness of GAD-NR, showcasing significant improvements (by up to 30% in AUC) over state-of-the-art competitors.
arXiv Detail & Related papers (2023-06-02T23:23:34Z)
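The GAD-NR entry above scores nodes by how well their representations reconstruct their neighborhoods. The snippet below is a deliberately simplified illustration of that idea, not the paper's decoder (which reconstructs richer neighborhood information): it predicts only the mean neighbor feature from each node embedding and treats the prediction error as an anomaly score. The function name and dimensions are made up for this sketch.

```python
import torch
import torch.nn as nn


def neighborhood_reconstruction_scores(z, x, adj, decoder):
    """Score each node by how poorly its embedding reconstructs a summary of its
    neighborhood (here simply the mean neighbor feature); higher error means the
    node looks more anomalous."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    neigh_mean = (adj @ x) / deg   # target: average feature of each node's neighbors
    pred = decoder(z)              # predict that summary from the node embedding
    return ((pred - neigh_mean) ** 2).mean(dim=1)


# Toy usage with a linear decoder from 16-d embeddings back to 8-d features.
decoder = nn.Linear(16, 8)
z = torch.randn(5, 16)             # embeddings from any GNN encoder
x = torch.randn(5, 8)              # raw node features
adj = (torch.rand(5, 5) > 0.5).float()
adj.fill_diagonal_(0)
scores = neighborhood_reconstruction_scores(z, x, adj, decoder)
```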
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Graph Anomaly Detection with Unsupervised GNNs [19.772490600670363]
We design GLAM, an end-to-end graph-level anomaly detection model based on graph neural networks (GNNs).
We also propose a new pooling strategy for graph-level embedding, called MMD-pooling, that is geared toward detecting distribution anomalies, which have not been considered before.
arXiv Detail & Related papers (2022-10-18T01:59:58Z)
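The GLAM entry above builds its graph-level embedding around MMD so that graphs whose node-embedding distributions deviate from the norm can be flagged. The snippet below is only a generic RBF-kernel MMD computation between two sets of embeddings, shown to make the distributional comparison concrete; it is not GLAM's MMD-pooling layer, and the bandwidth and set sizes are arbitrary.

```python
import torch


def rbf_mmd2(x, y, sigma: float = 1.0):
    """Squared maximum mean discrepancy between two sets of embeddings under an
    RBF kernel; large values indicate the two sets follow different distributions."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()


# Node embeddings of a test graph vs. a reference set (e.g., from normal training
# graphs); the shifted test set yields a much larger MMD than an unshifted one would.
reference = torch.randn(40, 16)
test = torch.randn(25, 16) + 2.0
print(rbf_mmd2(test, reference).item())
```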
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Deep Graph-level Anomaly Detection by Glocal Knowledge Distillation [61.39364567221311]
Graph-level anomaly detection (GAD) describes the problem of detecting graphs that are abnormal in their structure and/or the features of their nodes.
One of the challenges in GAD is to devise graph representations that enable the detection of both locally- and globally-anomalous graphs.
We introduce a novel deep anomaly detection approach for GAD that learns rich global and local normal pattern information by joint random distillation of graph and node representations.
arXiv Detail & Related papers (2021-12-19T05:04:53Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can significantly improve the performance of GNNs, and that the performance gain grows larger on noisier datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
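The PTDNet entry above prunes task-irrelevant edges with parameterized networks and a penalty on the number of edges kept. The sketch below shows one generic way such a learnable edge gate can look: a small MLP scores each edge from its endpoint features, and the summed gates act as a differentiable proxy for the edge count. The class name, MLP shape, and simple sigmoid relaxation are assumptions for illustration; PTDNet's actual parameterization and sparsity regularizer differ.

```python
import torch
import torch.nn as nn


class EdgeDenoiser(nn.Module):
    """Hypothetical learnable edge gate: a small MLP scores each observed edge from
    its endpoint features, and the summed gates serve as a differentiable proxy for
    the number of edges kept in the sparsified graph."""

    def __init__(self, feat_dim: int, hid_dim: int = 32):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * feat_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1))

    def forward(self, x, edge_index):
        # edge_index: (2, E) tensor holding source and target node ids.
        src, dst = edge_index
        gate_logits = self.scorer(torch.cat([x[src], x[dst]], dim=1)).squeeze(-1)
        gates = torch.sigmoid(gate_logits)   # soft keep-probability for each edge
        sparsity_penalty = gates.sum()       # penalize the (soft) number of kept edges
        return gates, sparsity_penalty


# Toy usage: a downstream GNN would weight each message by its gate, and training
# would add lambda_sparse * sparsity_penalty to the task loss so noisy edges shrink.
x = torch.randn(6, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
gates, penalty = EdgeDenoiser(feat_dim=8)(x, edge_index)
```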