Revisiting Graph Contrastive Learning for Anomaly Detection
- URL: http://arxiv.org/abs/2305.02496v1
- Date: Thu, 4 May 2023 01:57:07 GMT
- Title: Revisiting Graph Contrastive Learning for Anomaly Detection
- Authors: Zhiyuan Liu, Chunjie Cao, Fangjian Tao and Jingzhang Sun
- Abstract summary: Existing graph contrastive anomaly detection methods primarily focus on graph augmentation and multi-scale contrast modules.
We propose the Multi-GNN and Augmented Graph contrastive framework (MAG), which unifies the existing GCAD methods from a contrastive self-supervised perspective.
Our study sheds light on the drawbacks of the existing GCAD methods and demonstrates the potential of multi-GNN and graph augmentation modules.
- Score: 14.09889920588769
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combining graph neural networks (GNNs) with contrastive learning for anomaly
detection has drawn rising attention recently. Existing graph contrastive
anomaly detection (GCAD) methods have primarily focused on improving detection
capability through graph augmentation and multi-scale contrast modules.
However, the underlying mechanisms of how these modules work have not been
fully explored. We dive into the multi-scale and graph augmentation mechanisms
and observe that multi-scale contrast modules do not enhance expressiveness;
instead, the multi-GNN modules are the hidden contributors. Previous studies have
tended to attribute the benefits brought by multi-GNN to the multi-scale
modules. In this paper, we examine this misconception and propose the Multi-GNN
and Augmented Graph contrastive framework (MAG), which unifies the existing GCAD
methods from a contrastive self-supervised perspective. We extract two
variants from the MAG framework, L-MAG and M-MAG. L-MAG is the lightweight
instance of MAG, which outperforms the state of the art on Cora and Pubmed
at low computational cost. The M-MAG variant, equipped with multi-GNN
modules, further improves detection performance. Our study sheds light on the
drawbacks of the existing GCAD methods and demonstrates the potential of
multi-GNN and graph augmentation modules. Our code is available at
https://github.com/liuyishoua/MAG-Framework.
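To make the node-versus-neighborhood contrast idea behind GCAD concrete, here is a minimal, illustrative sketch — not the authors' MAG implementation — in which each node's raw features are contrasted against a mean-aggregated summary of its neighborhood, and the dissimilarity serves as the anomaly score. The graph, features, and scoring rule are all invented for illustration.

```python
# Minimal sketch of node-vs-neighborhood contrastive anomaly scoring.
# Assumption: a node whose features disagree with its neighborhood
# summary is more likely anomalous. Not the MAG framework itself.
import numpy as np

def propagate(adj, feats):
    """One mean-aggregation step: row-normalize adjacency, average neighbors."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # guard isolated nodes against division by zero
    return (adj / deg) @ feats

def anomaly_scores(adj, feats):
    """Score = 1 - cosine similarity between a node's own features and its
    neighborhood summary; higher means more anomalous."""
    ctx = propagate(adj, feats)  # neighborhood (contextual) summary per node
    num = (feats * ctx).sum(axis=1)
    denom = np.linalg.norm(feats, axis=1) * np.linalg.norm(ctx, axis=1) + 1e-12
    return 1.0 - num / denom

# Toy graph: a 4-node cluster plus one attached outlier (node 4)
# whose features deviate strongly from its surroundings.
adj = np.array([
    [0, 1, 1, 1, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
feats = np.array([[1.0, 0.1], [0.9, 0.2], [1.1, 0.0], [1.0, 0.1], [-1.0, 5.0]])
scores = anomaly_scores(adj, feats)
print(scores.argmax())  # -> 4, the injected outlier gets the highest score
```

Real GCAD methods replace the fixed mean aggregation with trained GNN encoders and optimize a contrastive loss over positive (node, own subgraph) and negative (node, random subgraph) pairs; only the scoring intuition is shown here.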
Related papers
- MAGNet: A Multi-Scale Attention-Guided Graph Fusion Network for DRC Violation Detection [0.5261718469769449]
Design rule checking (DRC) is of great significance for cost reduction and design efficiency improvement in IC designs.
We propose MAGNet, a hybrid deep learning model that integrates an improved U-Net with a graph neural network for DRC prediction.
Overall, MAGNet effectively combines spatial, semantic, and structural information, achieving improved prediction accuracy and reduced false positive rates in DRC hotspot detection.
arXiv Detail & Related papers (2025-06-08T13:13:41Z)
- On Vanishing Gradients, Over-Smoothing, and Over-Squashing in GNNs: Bridging Recurrent and Graph Learning [15.409865070022951]
Graph Neural Networks (GNNs) are models that leverage the graph structure to transmit information between nodes.
We show that a simple state-space formulation of a GNN effectively alleviates over-smoothing and over-squashing at no extra trainable parameter cost.
arXiv Detail & Related papers (2025-02-15T14:43:41Z)
- Classifier-guided Gradient Modulation for Enhanced Multimodal Learning [50.7008456698935]
Classifier-Guided Gradient Modulation (CGGM) is a novel method to balance multimodal learning with gradients.
We conduct extensive experiments on four multimodal datasets: UPMC-Food 101, CMU-MOSI, IEMOCAP and BraTS.
CGGM outperforms all the baselines and other state-of-the-art methods consistently.
arXiv Detail & Related papers (2024-11-03T02:38:43Z)
- MM-GTUNets: Unified Multi-Modal Graph Deep Learning for Brain Disorders Prediction [8.592259720470697]
We propose MM-GTUNets, an end-to-end graph transformer based multi-modal graph deep learning framework for brain disorders prediction.
We introduce Modality Reward Representation Learning (MRRL) which adaptively constructs population graphs using a reward system.
We also propose Adaptive Cross-Modal Graph Learning (ACMGL), which captures critical modality-specific and modality-shared features.
arXiv Detail & Related papers (2024-06-20T16:14:43Z)
- Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs [77.42221150848535]
We propose a novel message passing function called Multiset-to-Multiset GNN (M2M-GNN).
Our theoretical analyses and extensive experiments demonstrate that M2M-GNN effectively alleviates the aforementioned limitations of SMP, yielding superior performance in comparison.
arXiv Detail & Related papers (2024-05-31T07:39:22Z)
- MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models [61.479419734006825]
We introduce MAGDi, a new method for structured distillation of the reasoning interactions between multiple Large Language Model (LLM) agents into smaller LMs.
Experiments on seven widely used commonsense and math reasoning benchmarks show that MAGDi improves the reasoning capabilities of smaller models.
We conduct extensive analyses to show that MAGDi enhances the generalizability to out-of-domain tasks, scales positively with the size and strength of the base student model, and obtains larger improvements when applying self-consistency.
arXiv Detail & Related papers (2024-02-02T18:35:14Z)
- Momentum Gradient-based Untargeted Attack on Hypergraph Neural Networks [17.723282166737867]
Hypergraph Neural Networks (HGNNs) have been successfully applied in various hypergraph-related tasks.
Recent works have shown that deep learning models are vulnerable to adversarial attacks.
We design a new HGNNs attack model for the untargeted attack, namely MGHGA, which focuses on modifying node features.
arXiv Detail & Related papers (2023-10-24T09:10:45Z)
- MGNNI: Multiscale Graph Neural Networks with Implicit Layers [53.75421430520501]
Implicit graph neural networks (GNNs) have been proposed to capture long-range dependencies in underlying graphs.
We introduce and justify two weaknesses of implicit GNNs: the constrained expressiveness due to their limited effective range for capturing long-range dependencies, and their lack of ability to capture multiscale information on graphs at multiple resolutions.
We propose a multiscale graph neural network with implicit layers (MGNNI) which is able to model multiscale structures on graphs and has an expanded effective range for capturing long-range dependencies.
arXiv Detail & Related papers (2022-10-15T18:18:55Z)
- Gradient Gating for Deep Multi-Rate Learning on Graphs [62.25886489571097]
We present Gradient Gating (G²), a novel framework for improving the performance of Graph Neural Networks (GNNs).
Our framework is based on gating the output of GNN layers with a mechanism for multi-rate flow of message passing information across nodes of the underlying graph.
arXiv Detail & Related papers (2022-10-02T13:19:48Z)
- sMGC: A Complex-Valued Graph Convolutional Network via Magnetic Laplacian for Directed Graphs [10.993455818148341]
We propose the magnetic Laplacian, which preserves edge directionality by encoding it into a complex phase as a deformation of the Laplacian.
In addition, we design an Auto-Regressive Moving-Average filter that is capable of learning global features from graphs.
We derive complex-valued operations in graph neural network and devise a simplified Magnetic Graph Convolution network, namely sMGC.
arXiv Detail & Related papers (2021-10-14T17:36:44Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
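The over-smoothing problem that motivates several of the papers above can be demonstrated with a few lines of numpy. This is an illustrative sketch, not code from any listed paper: repeatedly averaging over neighborhoods drives all node representations toward the same vector, so deep stacks of such layers lose the ability to distinguish nodes. The toy path graph and one-hot features are invented for the demonstration.

```python
# Illustrative sketch of over-smoothing: stacked mean-aggregation layers
# collapse node features toward a common vector on a connected graph.
import numpy as np

def mean_aggregate(adj_selfloops, feats):
    """One mean-aggregation layer over a graph with self-loops."""
    deg = adj_selfloops.sum(axis=1, keepdims=True)
    return (adj_selfloops / deg) @ feats

# Path graph 0-1-2-3 with self-loops added.
adj = np.eye(4)
for i in range(3):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

feats = np.eye(4)  # one-hot features: every node starts out distinguishable
spread_before = feats.std(axis=0).sum()  # variability across nodes

h = feats
for _ in range(100):  # a "deep" stack of 100 aggregation layers
    h = mean_aggregate(adj, h)
spread_after = h.std(axis=0).sum()

# After many layers, all rows of h are nearly identical: the node
# representations have collapsed and carry almost no discriminative signal.
print(spread_after < spread_before * 0.05)  # -> True
```

Multi-scale GNNs, as in the last entry above, mitigate this by combining representations from shallow and deep aggregation depths rather than relying on the final collapsed layer alone.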
This list is automatically generated from the titles and abstracts of the papers in this site.