Towards Anomaly-Aware Pre-Training and Fine-Tuning for Graph Anomaly Detection
- URL: http://arxiv.org/abs/2504.14250v2
- Date: Fri, 16 May 2025 11:28:49 GMT
- Title: Towards Anomaly-Aware Pre-Training and Fine-Tuning for Graph Anomaly Detection
- Authors: Yunhui Liu, Jiashun Cheng, Yiqing Lin, Qizhuo Xie, Jia Li, Fugee Tsung, Hongzhi Yin, Tao Zheng, Jianhua Zhao, Tieke He
- Abstract summary: Graph anomaly detection (GAD) has garnered increasing attention in recent years, yet remains challenging due to two key factors. Anomaly-Aware Pre-Training and Fine-Tuning (APF) is a framework to mitigate the challenges in GAD. Comprehensive experiments on 10 benchmark datasets validate the superior performance of APF in comparison to state-of-the-art baselines.
- Score: 59.042018542376596
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph anomaly detection (GAD) has garnered increasing attention in recent years, yet remains challenging due to two key factors: (1) label scarcity stemming from the high cost of annotations and (2) homophily disparity at node and class levels. In this paper, we introduce Anomaly-Aware Pre-Training and Fine-Tuning (APF), a targeted and effective framework to mitigate the above challenges in GAD. In the pre-training stage, APF incorporates node-specific subgraphs selected via the Rayleigh Quotient, a label-free anomaly metric, into the learning objective to enhance anomaly awareness. It further introduces two learnable spectral polynomial filters to jointly learn dual representations that capture both general semantics and subtle anomaly cues. During fine-tuning, a gated fusion mechanism adaptively integrates pre-trained representations across nodes and dimensions, while an anomaly-aware regularization loss encourages abnormal nodes to preserve more anomaly-relevant information. Furthermore, we theoretically show that APF tends to achieve linear separability under mild conditions. Comprehensive experiments on 10 benchmark datasets validate the superior performance of APF in comparison to state-of-the-art baselines.
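As a rough, hedged illustration of the label-free Rayleigh Quotient signal mentioned in the abstract, the sketch below scores each node's ego-subgraph by R(L, x) = x^T L x / x^T x, where L is the subgraph Laplacian and x a centred attribute signal; higher scores concentrate energy in high graph frequencies, which is commonly associated with anomalies. The 1-hop ego-subgraphs and the single feature column are illustrative assumptions, not the paper's actual subgraph-selection or pre-training procedure.

```python
# Minimal sketch (not the paper's exact procedure): score node-centred
# ego-subgraphs with the Rayleigh Quotient R(L, x) = x^T L x / x^T x.
import numpy as np
import networkx as nx

def rayleigh_quotient(L: np.ndarray, x: np.ndarray) -> float:
    denom = float(x @ x)
    return float(x @ L @ x) / denom if denom > 0 else 0.0

def ego_subgraph_scores(G: nx.Graph, features: np.ndarray, radius: int = 1) -> np.ndarray:
    """Score every node by the Rayleigh Quotient of its ego-subgraph.

    Assumes node labels are 0..n-1 and uses the first feature column,
    centred within each subgraph, as the graph signal (an illustrative choice).
    """
    scores = np.zeros(G.number_of_nodes())
    for v in G.nodes():
        sub = nx.ego_graph(G, v, radius=radius)
        nodes = list(sub.nodes())
        L = nx.laplacian_matrix(sub, nodelist=nodes).toarray().astype(float)
        x = features[nodes, 0] - features[nodes, 0].mean()
        scores[v] = rayleigh_quotient(L, x)
    return scores

if __name__ == "__main__":
    G = nx.karate_club_graph()
    rng = np.random.default_rng(0)
    X = rng.normal(size=(G.number_of_nodes(), 4))
    X[5] += 5.0  # inject a crude attribute anomaly at node 5
    print(np.argsort(-ego_subgraph_scores(G, X))[:5])  # most suspicious nodes first
```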
Related papers
- Unsupervised Graph Anomaly Detection via Multi-Hypersphere Heterophilic Graph Learning [7.277472116667557]
Graph anomaly detection (GAD) plays a vital role in various data mining applications such as e-commerce fraud prevention and malicious user detection. We propose a Heterophilic Graph Detection (HGE) module to learn distinguishable abnormal representations for potential anomalies. Then, we propose a Multi-Hypersphere Learning (MHL) module to enhance the detection capability for context-dependent anomalies.
arXiv Detail & Related papers (2025-03-15T08:08:13Z) - Decoupled Graph Energy-based Model for Node Out-of-Distribution Detection on Heterophilic Graphs [61.226857589092]
OOD detection on nodes in graph learning remains underexplored. GNNSafe adapted energy-based detection to the graph domain with state-of-the-art performance. We introduce DeGEM, which decomposes the learning process into two parts: a graph encoder that leverages topology information for node representations and an energy head that operates in latent space (a generic energy-score sketch appears after this list).
arXiv Detail & Related papers (2025-02-25T07:20:00Z) - Semi-supervised Anomaly Detection with Extremely Limited Labels in Dynamic Graphs [5.415950005432774]
We propose a novel GAD framework (EL$^2$-DGAD) to tackle the anomaly detection problem in dynamic graphs with extremely limited labels. Specifically, a transformer-based graph encoder model is designed to more effectively preserve evolving graph structures beyond the local neighborhood.
arXiv Detail & Related papers (2025-01-25T02:35:48Z) - UMGAD: Unsupervised Multiplex Graph Anomaly Detection [40.17829938834783]
We propose a novel Unsupervised Multiplex Graph Anomaly Detection method, named UMGAD. We first learn multi-relational correlations among nodes in multiplex heterogeneous graphs. Then, to further extract abnormal information, we generate attribute-level and subgraph-level augmented-view graphs.
arXiv Detail & Related papers (2024-11-19T15:15:45Z) - Dual-Frequency Filtering Self-aware Graph Neural Networks for Homophilic and Heterophilic Graphs [60.82508765185161]
We propose Dual-Frequency Filtering Self-aware Graph Neural Networks (DFGNN).
DFGNN integrates low-pass and high-pass filters to extract smooth and detailed topological features.
It dynamically adjusts filtering ratios to accommodate both homophilic and heterophilic graphs (a generic sketch of this dual-filter idea appears after this list).
arXiv Detail & Related papers (2024-11-18T04:57:05Z) - Partitioning Message Passing for Graph Fraud Detection [57.928658584067556]
Label imbalance and homophily-heterophily mixture are the fundamental problems encountered when applying Graph Neural Networks (GNNs) to Graph Fraud Detection (GFD) tasks. Existing GNN-based GFD models are designed to augment graph structure to accommodate the inductive bias of GNNs towards homophily. In our work, we argue that the key to applying GNNs for GFD is not to exclude but to distinguish neighbors with different labels.
arXiv Detail & Related papers (2024-11-16T11:30:53Z) - Node-wise Filtering in Graph Neural Networks: A Mixture of Experts Approach [58.8524608686851]
Graph Neural Networks (GNNs) have proven to be highly effective for node classification tasks across diverse graph structural patterns.
Traditionally, GNNs employ a uniform global filter, typically a low-pass filter for homophilic graphs and a high-pass filter for heterophilic graphs.
We introduce a novel GNN framework Node-MoE that utilizes a mixture of experts to adaptively select the appropriate filters for different nodes.
arXiv Detail & Related papers (2024-06-05T17:12:38Z) - Alleviating Structural Distribution Shift in Graph Anomaly Detection [70.1022676681496]
Graph anomaly detection (GAD) is a challenging binary classification problem.
Graph neural networks (GNNs) benefit the classification of normal nodes by aggregating homophilous neighbors.
We propose a framework to mitigate the effect of heterophilous neighbors and make them invariant.
arXiv Detail & Related papers (2024-01-25T13:07:34Z) - ADA-GAD: Anomaly-Denoised Autoencoders for Graph Anomaly Detection [84.0718034981805]
We introduce a novel framework called Anomaly-Denoised Autoencoders for Graph Anomaly Detection (ADA-GAD).
In the first stage, we design a learning-free anomaly-denoised augmentation method to generate graphs with reduced anomaly levels.
In the next stage, the decoders are retrained for detection on the original graph.
arXiv Detail & Related papers (2023-12-22T09:02:01Z) - BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection [50.26074811655596]
We propose a novel unified graph anomaly detection framework based on bootstrapped self-supervised learning (named BOURNE).
By swapping the context embeddings between nodes and edges, we enable the mutual detection of node and edge anomalies.
BOURNE can eliminate the need for negative sampling, thereby enhancing its efficiency in handling large graphs.
arXiv Detail & Related papers (2023-07-28T00:44:57Z) - GPatcher: A Simple and Adaptive MLP Model for Alleviating Graph Heterophily [15.93465948768545]
We demystify the impact of graph heterophily on graph neural network (GNN) filters.
We propose a simple yet powerful GNN named GPatcher by leveraging the patch-Mixer architecture.
Our model demonstrates outstanding performance on node classification compared with popular homophily GNNs and state-of-the-art heterophily GNNs.
arXiv Detail & Related papers (2023-06-25T20:57:35Z) - Heterophily-Aware Graph Attention Network [42.640057865981156]
Graph Neural Networks (GNNs) have shown remarkable success in graph representation learning.
Existing heterophilic GNNs tend to ignore the modeling of heterophily of each edge, which is also a vital part in tackling the heterophily problem.
We propose a novel Heterophily-Aware Graph Attention Network (HA-GAT) by fully exploring and utilizing the local distribution as the underlying heterophily.
arXiv Detail & Related papers (2023-02-07T03:21:55Z) - Node-oriented Spectral Filtering for Graph Neural Networks [38.0315325181726]
Graph neural networks (GNNs) have shown remarkable performance on homophilic graph data.
In general, a universal spectral filter learned from a global perspective may still struggle to adapt to variations in local patterns.
We propose node-oriented spectral filtering for graph neural networks (NFGNN).
arXiv Detail & Related papers (2022-12-07T14:15:28Z) - Cross-Domain Graph Anomaly Detection via Anomaly-aware Contrastive Alignment [22.769474986808113]
Cross-domain graph anomaly detection (CD-GAD) describes the problem of detecting anomalous nodes in an unlabelled target graph.
We introduce a novel domain adaptation approach, namely Anomaly-aware Contrastive alignmenT (ACT) for GAD.
ACT achieves substantially improved detection performance over 10 state-of-the-art GAD methods.
arXiv Detail & Related papers (2022-12-02T11:21:48Z) - Unveiling Anomalous Edges and Nominal Connectivity of Attributed
Networks [53.56901624204265]
The present work deals with uncovering anomalous edges in attributed graphs using two distinct formulations with complementary strengths.
The first relies on decomposing the graph data matrix into low-rank plus sparse components to markedly improve performance.
The second broadens the scope of the first by performing robust recovery of the unperturbed graph, which enhances the anomaly identification performance.
arXiv Detail & Related papers (2021-04-17T20:00:40Z) - Beyond Low-Pass Filters: Adaptive Feature Propagation on Graphs [6.018995094882323]
Graph neural networks (GNNs) have been extensively studied for prediction tasks on graphs.
Most GNNs assume local homophily, i.e., strong similarities in local neighborhoods.
We propose a flexible GNN model, which is capable of handling any graphs without being restricted by their underlying homophily.
arXiv Detail & Related papers (2021-03-26T00:35:36Z) - Bayesian Graph Neural Networks with Adaptive Connection Sampling [62.51689735630133]
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
arXiv Detail & Related papers (2020-06-07T07:06:35Z)
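Several of the works above revolve around combining low- and high-pass graph filters (APF's dual spectral polynomial filters with gated fusion, DFGNN's dual-frequency filtering, Node-MoE's per-node filter selection). The sketch below is a generic, assumed illustration of that idea rather than any of these papers' reference implementations: a low-pass branch (neighborhood averaging) and a high-pass branch (Laplacian-like residual) are mixed by a learned node- and dimension-wise gate. The module name and the simple 1-hop filters are illustrative choices.

```python
# Hedged sketch of the dual-frequency filtering idea: combine a low-pass and a
# high-pass graph filter with a learned per-node, per-dimension gate. Not any
# paper's reference implementation; DualFrequencyFilter is an assumed name.
import torch
import torch.nn as nn

class DualFrequencyFilter(nn.Module):
    """Mix a low-pass and a high-pass branch with a learned gate."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.low = nn.Linear(in_dim, out_dim)   # weights for the smoothed signal
        self.high = nn.Linear(in_dim, out_dim)  # weights for the sharpened signal
        self.gate = nn.Sequential(nn.Linear(2 * out_dim, out_dim), nn.Sigmoid())

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # adj_norm: symmetrically normalized adjacency (dense here for brevity).
        low_sig = adj_norm @ x          # low-pass: neighborhood averaging, ~(I - L_sym) x
        high_sig = x - adj_norm @ x     # high-pass: Laplacian-like residual, ~L_sym x
        h_low, h_high = self.low(low_sig), self.high(high_sig)
        g = self.gate(torch.cat([h_low, h_high], dim=-1))  # node- and dim-wise mixing
        return g * h_low + (1.0 - g) * h_high

if __name__ == "__main__":
    n, d = 6, 8
    x = torch.randn(n, d)
    a = torch.bernoulli(torch.full((n, n), 0.3))
    a = ((a + a.t()) > 0).float().fill_diagonal_(1.0)      # undirected + self-loops
    d_inv_sqrt = torch.diag(a.sum(1).pow(-0.5))
    adj_norm = d_inv_sqrt @ a @ d_inv_sqrt
    print(DualFrequencyFilter(d, 16)(x, adj_norm).shape)   # torch.Size([6, 16])
```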
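For the energy-based node OOD detection line above (GNNSafe, DeGEM), the following hedged sketch shows only the generic energy score E(v) = -T * logsumexp(f(v)/T) computed from a classifier's logits, not DeGEM's decomposed encoder/energy-head training; lower energy indicates more in-distribution-like nodes.

```python
# Hedged sketch of the generic energy score used in energy-based OOD detection
# on nodes: E(v) = -T * logsumexp(f(v) / T) over the classifier logits f(v).
import torch

def node_energy(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Per-node energy score; confident (in-distribution-like) nodes score lower."""
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(5, 7)      # e.g., GNN classifier outputs for 5 nodes, 7 classes
    logits[0] *= 4.0                # a confidently classified node
    energy = node_energy(logits)
    print(energy)                   # node 0 typically receives the lowest energy
    print(energy > energy.median()) # crude OOD flags (higher energy = more OOD-like)
```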