Towards Self-Interpretable Graph-Level Anomaly Detection
- URL: http://arxiv.org/abs/2310.16520v1
- Date: Wed, 25 Oct 2023 10:10:07 GMT
- Title: Towards Self-Interpretable Graph-Level Anomaly Detection
- Authors: Yixin Liu, Kaize Ding, Qinghua Lu, Fuyi Li, Leo Yu Zhang, Shirui Pan
- Abstract summary: Graph-level anomaly detection (GLAD) aims to identify graphs that exhibit notable dissimilarity compared to the majority in a collection.
We propose a Self-Interpretable Graph aNomaly dETection model (SIGNET) that detects anomalous graphs and generates informative explanations simultaneously.
- Score: 73.1152604947837
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph-level anomaly detection (GLAD) aims to identify graphs that exhibit
notable dissimilarity compared to the majority in a collection. However,
current works primarily focus on evaluating graph-level abnormality while
failing to provide meaningful explanations for the predictions, which largely
limits their reliability and application scope. In this paper, we investigate a
new challenging problem, explainable GLAD, where the learning objective is to
predict the abnormality of each graph sample with corresponding explanations,
i.e., the vital subgraph that leads to the predictions. To address this
challenging problem, we propose a Self-Interpretable Graph aNomaly dETection
model (SIGNET for short) that detects anomalous graphs and generates
informative explanations simultaneously. Specifically, we first introduce the
multi-view subgraph information bottleneck (MSIB) framework, serving as the
design basis of our self-interpretable GLAD approach. In this way, SIGNET is able
to not only measure the abnormality of each graph based on cross-view mutual
information but also provide informative graph rationales by extracting
bottleneck subgraphs from the input graph and its dual hypergraph in a
self-supervised way. Extensive experiments on 16 datasets demonstrate the
anomaly detection capability and self-interpretability of SIGNET.
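To make the cross-view scoring idea in the abstract concrete, here is a minimal sketch. It is a hypothetical simplification, not SIGNET's released code: the graph encoder, the dual-hypergraph construction, and the bottleneck subgraph extractor are assumed to have already produced one embedding per graph per view, and an InfoNCE-style agreement is used as a stand-in for the cross-view mutual information, so that low agreement between the two views translates into a high anomaly score.

```python
# Schematic sketch only: "low cross-view agreement => high anomaly score".
# All inputs and names are hypothetical; the real model learns the two views
# and the bottleneck subgraphs end to end.
import torch
import torch.nn.functional as F

def cross_view_anomaly_scores(z_graph: torch.Tensor,
                              z_hyper: torch.Tensor,
                              temperature: float = 0.2) -> torch.Tensor:
    """z_graph, z_hyper: [num_graphs, dim] bottleneck-subgraph embeddings from
    the graph view and its dual-hypergraph view (assumed precomputed).
    Returns one anomaly score per graph; higher means more anomalous."""
    z1 = F.normalize(z_graph, dim=-1)
    z2 = F.normalize(z_hyper, dim=-1)
    logits = z1 @ z2.t() / temperature                      # pairwise cross-view similarities
    targets = torch.arange(z1.size(0), device=z1.device)    # each graph should match itself
    # Per-graph InfoNCE loss: small when the two views of the same graph agree,
    # i.e. when the estimated cross-view mutual information is high.
    loss_12 = F.cross_entropy(logits, targets, reduction="none")
    loss_21 = F.cross_entropy(logits.t(), targets, reduction="none")
    return 0.5 * (loss_12 + loss_21)                        # anomaly score = disagreement

if __name__ == "__main__":
    # Toy usage with random embeddings standing in for encoder outputs.
    torch.manual_seed(0)
    z_g, z_h = torch.randn(8, 32), torch.randn(8, 32)
    print(cross_view_anomaly_scores(z_g, z_h))
```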
Related papers
- UMGAD: Unsupervised Multiplex Graph Anomaly Detection [40.17829938834783]
We propose a novel Unsupervised Multiplex Graph Anomaly Detection method, named UMGAD.
We first learn multi-relational correlations among nodes in multiplex heterogeneous graphs.
Then, to weaken the influence of noise and redundant information on the extraction of anomaly-related information, we generate attribute-level and subgraph-level augmented-view graphs.
arXiv Detail & Related papers (2024-11-19T15:15:45Z)
- Multitask Active Learning for Graph Anomaly Detection [48.690169078479116]
We propose a novel MultItask acTIve Graph Anomaly deTEction framework, namely MITIGATE.
By coupling node classification tasks, MITIGATE obtains the capability to detect out-of-distribution nodes without known anomalies.
Empirical studies on four datasets demonstrate that MITIGATE significantly outperforms the state-of-the-art methods for anomaly detection.
arXiv Detail & Related papers (2024-01-24T03:43:45Z)
- Few-shot Message-Enhanced Contrastive Learning for Graph Anomaly Detection [15.757864894708364]
Graph anomaly detection plays a crucial role in identifying exceptional instances in graph data that deviate significantly from the majority.
We propose a novel few-shot Graph Anomaly Detection model called FMGAD.
We show that FMGAD can achieve better performance than other state-of-the-art methods, whether the anomalies are artificially injected or domain-organic.
arXiv Detail & Related papers (2023-11-17T07:49:20Z)
- Graph Anomaly Detection with Graph Neural Networks: Current Status and Challenges [9.076649460696402]
Graph neural networks (GNNs) have been studied extensively and have successfully performed difficult machine learning tasks.
This survey is the first comprehensive review of graph anomaly detection methods based on GNNs.
arXiv Detail & Related papers (2022-09-29T16:47:57Z)
- Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck principle (IB) to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z)
- From Unsupervised to Few-shot Graph Anomaly Detection: A Multi-scale Contrastive Learning Approach [26.973056364587766]
Anomaly detection from graph data is an important data mining task in many applications such as social networks, finance, and e-commerce.
We propose a novel graph ANomaly dEtection framework with Multi-scale cONtrastive lEarning (ANEMONE for short).
By using a graph neural network as a backbone to encode information from multiple graph scales (views), we learn better representations for the nodes in a graph.
arXiv Detail & Related papers (2022-02-11T09:45:11Z)
- Deep Graph-level Anomaly Detection by Glocal Knowledge Distillation [61.39364567221311]
Graph-level anomaly detection (GAD) describes the problem of detecting graphs that are abnormal in their structure and/or the features of their nodes.
One of the challenges in GAD is to devise graph representations that enable the detection of both locally- and globally-anomalous graphs.
We introduce a novel deep anomaly detection approach for GAD that learns rich global and local normal pattern information by joint random distillation of graph and node representations.
arXiv Detail & Related papers (2021-12-19T05:04:53Z)
- Recognizing Predictive Substructures with Subgraph Information Bottleneck [97.19131149357234]
We propose a novel subgraph information bottleneck (SIB) framework to recognize such subgraphs, named IB-subgraph.
The intractability of mutual information and the discrete nature of graph data make the objective of SIB notoriously hard to optimize.
Experiments on graph learning and large-scale point cloud tasks demonstrate the superior property of IB-subgraph.
arXiv Detail & Related papers (2021-03-20T11:19:43Z)
- Graph Information Bottleneck for Subgraph Recognition [103.37499715761784]
We propose a framework of Graph Information Bottleneck (GIB) for the subgraph recognition problem in deep graph learning.
Under this framework, one can recognize the maximally informative yet compressive subgraph, named IB-subgraph (a schematic form of this objective is sketched after this list).
We evaluate the properties of the IB-subgraph in three application scenarios: improvement of graph classification, graph interpretation and graph denoising.
arXiv Detail & Related papers (2020-10-12T09:32:20Z)
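The USIB, SIB, and GIB entries above all instantiate the same information-bottleneck idea for subgraph recognition. As a schematic reference only, written in the generic notation of that line of work rather than quoted from these abstracts, the objective can be sketched as:

```latex
% Schematic (sub)graph information bottleneck objective:
% keep a subgraph G_sub of the input graph G that is maximally informative
% about the target signal Y (a label, or the graph representation in the
% unsupervised USIB setting) while staying compressive with respect to G;
% beta > 0 trades off informativeness against compression.
\max_{G_{\mathrm{sub}} \subseteq G} \; I\left(Y;\, G_{\mathrm{sub}}\right) \;-\; \beta\, I\left(G;\, G_{\mathrm{sub}}\right)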