Global and Local Confidence Based Fraud Detection Graph Neural Network
- URL: http://arxiv.org/abs/2407.17333v1
- Date: Wed, 24 Jul 2024 14:55:37 GMT
- Title: Global and Local Confidence Based Fraud Detection Graph Neural Network
- Authors: Jiaxun Liu, Yue Tian, Guanjun Liu
- Abstract summary: The Global and Local Confidence Graph Neural Network (GLC-GNN) is an innovative approach to graph-based anomaly detection.
By introducing a prototype to encapsulate the global features of a graph, GLC-GNN effectively distinguishes between benign and fraudulent nodes.
GLC-GNN demonstrates superior performance over state-of-the-art models in accuracy and convergence speed, while maintaining a compact model size and expedited training process.
- Score: 3.730504020733928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents the Global and Local Confidence Graph Neural Network (GLC-GNN), an innovative approach to graph-based anomaly detection that addresses the challenges of heterophily and camouflage in fraudulent activities. By introducing a prototype to encapsulate the global features of a graph and calculating a Global Confidence (GC) value for each node, GLC-GNN effectively distinguishes between benign and fraudulent nodes. The model leverages GC to generate attention values for message aggregation, enhancing its ability to capture both homophily and heterophily. Through extensive experiments on four open datasets, GLC-GNN demonstrates superior performance over state-of-the-art models in accuracy and convergence speed, while maintaining a compact model size and expedited training process. The integration of global and local confidence measures in GLC-GNN offers a robust solution for detecting anomalies in graphs, with significant implications for fraud detection across diverse domains.
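Read literally, the abstract suggests a layer that scores each node against a global prototype and turns those scores into aggregation weights. The PyTorch sketch below is one hedged interpretation of that mechanism; the GCAttentionLayer class, the learnable prototype vector, and the endpoint-product edge scoring are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCAttentionLayer(nn.Module):
    """Illustrative layer: a learnable prototype summarizes the graph's
    global features; each node's Global Confidence (GC) is its similarity
    to that prototype, and edge attention for message aggregation is
    derived from the endpoints' GC values."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)
        self.prototype = nn.Parameter(torch.randn(out_dim))  # global prototype

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        # x: (N, in_dim) node features; adj: (N, N) dense 0/1 adjacency.
        # Assumes every node has at least one neighbor.
        h = self.proj(x)
        # GC in (0, 1): similarity of each node to the global prototype.
        gc = torch.sigmoid(F.cosine_similarity(h, self.prototype.unsqueeze(0), dim=-1))
        # Edge score = product of endpoint confidences, so edges touching
        # low-confidence (potentially camouflaged) nodes are down-weighted.
        scores = gc.unsqueeze(1) * gc.unsqueeze(0)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        attn = torch.softmax(scores, dim=1)  # row-normalized attention
        return F.relu(attn @ h), gc

# Toy usage on a 4-node ring graph.
x = torch.randn(4, 8)
adj = torch.tensor([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=torch.float)
h, gc = GCAttentionLayer(8, 16)(x, adj)
print(h.shape, gc.detach())
```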
Related papers
- Graph Neural Networks Powered by Encoder Embedding for Improved Node Learning [17.31465642587528]
Graph neural networks (GNNs) have emerged as a powerful framework for a wide range of node-level graph learning tasks.
In this paper, we leverage a statistically grounded method, one-hot graph encoder embedding (GEE), to generate high-quality initial node features.
We demonstrate its effectiveness through extensive simulations and real-world experiments across both unsupervised and supervised settings.
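As a rough illustration of the GEE idea mentioned above, the following sketch builds initial node features by projecting the adjacency matrix onto class-size-normalized label indicators; the function name and normalization are assumptions, not the paper's exact procedure.

```python
import numpy as np

def one_hot_encoder_embedding(adj: np.ndarray, labels: np.ndarray, num_classes: int) -> np.ndarray:
    """Sketch of one-hot graph encoder embedding (GEE): multiply the
    adjacency matrix by a class-indicator matrix normalized by class size,
    so each node's feature counts its (normalized) edges into each class.
    Treatment of unlabeled nodes follows the GEE literature and is omitted."""
    W = np.zeros((len(labels), num_classes))
    W[np.arange(len(labels)), labels] = 1.0
    W /= np.maximum(W.sum(axis=0, keepdims=True), 1.0)  # per-class scaling
    return adj @ W  # (N, num_classes) initial node features

# Toy usage: 4 nodes, 2 classes.
adj = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], float)
print(one_hot_encoder_embedding(adj, np.array([0, 0, 1, 1]), 2))
```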
arXiv Detail & Related papers (2025-07-15T21:01:54Z)
- Is Graph Convolution Always Beneficial For Every Feature? [14.15740180531667]
Topological Feature Informativeness (TFI) is a novel metric to distinguish between GNN-favored and GNN-disfavored features.
We propose a simple yet effective Graph Feature Selection (GFS) method, which processes GNN-favored and GNN-disfavored features separately.
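A minimal sketch of that separate processing, assuming the favored/disfavored split is already given (the TFI computation itself is not reproduced here):

```python
import torch
import torch.nn as nn

class GFSSketch(nn.Module):
    """Sketch of graph feature selection: feature columns flagged
    GNN-favored go through graph convolution, the remaining columns
    through a plain linear map, and the outputs are concatenated."""

    def __init__(self, favored_mask: torch.Tensor, hid: int):
        super().__init__()
        self.mask = favored_mask  # bool, (num_features,)
        self.gnn_lin = nn.Linear(int(favored_mask.sum()), hid)
        self.mlp = nn.Linear(int((~favored_mask).sum()), hid)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        h_g = adj_norm @ self.gnn_lin(x[:, self.mask])  # graph-convolved part
        h_m = self.mlp(x[:, ~self.mask])                # graph-free part
        return torch.cat([h_g, h_m], dim=1)

# Toy usage: 5 nodes, 6 features, first 3 columns GNN-favored.
mask = torch.tensor([True, True, True, False, False, False])
x, a = torch.randn(5, 6), torch.eye(5)
print(GFSSketch(mask, 8)(x, a).shape)  # (5, 16)
```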
arXiv Detail & Related papers (2024-11-12T09:28:55Z)
- Guarding Graph Neural Networks for Unsupervised Graph Anomaly Detection [16.485082741239808]
Unsupervised graph anomaly detection aims at identifying rare patterns that deviate from the majority in a graph without the aid of labels.
Recent advances have utilized Graph Neural Networks (GNNs) to learn effective node representations.
We propose a framework for Guarding Graph Neural Networks for Unsupervised Graph Anomaly Detection (G3AD).
arXiv Detail & Related papers (2024-04-25T07:09:05Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
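For the linear case, the decomposition idea can be made concrete. This hedged sketch shows exact subset attribution for a linear message-passing layer; DEGREE's handling of nonlinear layers is omitted.

```python
import numpy as np

def subset_contribution(adj_norm: np.ndarray, X: np.ndarray,
                        W: np.ndarray, subset: list) -> np.ndarray:
    """For a linear message-passing layer H = A_norm X W, the output
    decomposes exactly into per-node contributions; the share attributable
    to a node subset is the partial sum over its rows."""
    mask = np.zeros((X.shape[0], 1))
    mask[subset] = 1.0
    return adj_norm @ (X * mask) @ W  # contribution of `subset` to H

# Toy check: contributions over a partition sum to the full output.
A = np.eye(3) + np.ones((3, 3)) / 3.0
X, W = np.random.randn(3, 4), np.random.randn(4, 2)
full = A @ X @ W
parts = subset_contribution(A, X, W, [0]) + subset_contribution(A, X, W, [1, 2])
assert np.allclose(full, parts)
```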
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Superiority of GNN over NN in generalizing bandlimited functions [6.3151583550712065]
Graph Neural Networks (GNNs) have emerged as formidable resources for processing graph-based information across diverse applications.
In this study, we investigate the proficiency of GNNs at such classification tasks, which can also be cast as a function-approximation problem.
Our findings highlight a pronounced efficiency of GNNs in generalizing a bandlimited function to within an $\varepsilon$-error margin.
arXiv Detail & Related papers (2022-06-13T05:15:12Z)
- Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily [58.76759997223951]
We propose a new metric based on von Neumann entropy to re-examine the heterophily problem of GNNs.
We also propose a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets.
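The underlying quantity is standard. Below is a small sketch computing the von Neumann entropy of a graph from its Laplacian spectrum; how the paper builds its heterophily metric on top of this is not reproduced here.

```python
import numpy as np

def von_neumann_graph_entropy(adj: np.ndarray) -> float:
    """Von Neumann entropy of a graph: S = -sum_i lam_i * log(lam_i),
    where lam_i are the eigenvalues of the density matrix
    rho = L / trace(L) and L is the combinatorial Laplacian."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                # combinatorial Laplacian
    rho = lap / np.trace(lap)      # density matrix, trace 1
    lam = np.linalg.eigvalsh(rho)  # real eigenvalues (rho is symmetric)
    lam = lam[lam > 1e-12]         # convention: 0 * log 0 = 0
    return float(-(lam * np.log(lam)).sum())

# Toy usage: entropy of a 4-cycle.
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
print(von_neumann_graph_entropy(adj))
```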
arXiv Detail & Related papers (2022-03-19T14:26:43Z)
- Feature Correlation Aggregation: on the Path to Better Graph Neural Networks [37.79964911718766]
Prior to the introduction of Graph Neural Networks (GNNs), modeling and analyzing irregular data, particularly graphs, was thought to be the Achilles' heel of deep learning.
This paper introduces a central node permutation-variant function through a frustratingly simple and innocent-looking modification to the core operation of a GNN.
A tangible boost in performance of the model is observed where the model surpasses previous state-of-the-art results by a significant margin while employing fewer parameters.
arXiv Detail & Related papers (2021-09-20T05:04:26Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
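A hedged sketch of the joint pruning step, with simple magnitude thresholding and illustrative sparsity fractions standing in for UGS's trained masks:

```python
import torch

def ugs_prune(adj_mask: torch.Tensor, weight_mask: torch.Tensor,
              p_graph: float = 0.05, p_weight: float = 0.2):
    """One UGS-style pruning step (sketch): zero out the lowest-magnitude
    fraction of the trainable adjacency mask and of the weight mask.
    In the paper, these masks are trained jointly with the GNN before
    being thresholded; only the thresholding is shown here."""
    def prune(mask: torch.Tensor, frac: float) -> torch.Tensor:
        scores = mask.abs().flatten()
        k = int(frac * scores.numel())
        if k == 0:
            return mask
        thresh = scores.kthvalue(k).values
        return torch.where(mask.abs() <= thresh, torch.zeros_like(mask), mask)
    return prune(adj_mask, p_graph), prune(weight_mask, p_weight)

# Toy usage: random masks for a 5-node graph and a 16x16 weight matrix.
adj_m, w_m = torch.rand(5, 5), torch.rand(16, 16)
adj_m, w_m = ugs_prune(adj_m, w_m)
```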
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Identity-aware Graph Neural Networks [63.6952975763946]
We develop Identity-aware Graph Neural Networks (ID-GNNs), a class of message passing GNNs with greater expressive power than the 1-WL test.
ID-GNN extends existing GNN architectures by inductively considering nodes' identities during message passing.
We show that transforming existing GNNs to ID-GNNs yields on average 40% accuracy improvement on challenging node, edge, and graph property prediction tasks.
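One way to realize identity-aware message passing is to transform messages originating from the root ("identity") node with a separate weight matrix. The dense formulation below is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def id_gnn_layer(adj: np.ndarray, h: np.ndarray, W0: np.ndarray,
                 W1: np.ndarray, root: int) -> np.ndarray:
    """One round of identity-aware message passing on an ego network
    (sketch): messages sent by the root node are transformed with W1,
    messages from all other nodes with W0. Shapes are illustrative."""
    from_root = np.outer(adj[:, root], h[root])  # messages sent by the root
    from_rest = adj @ h - from_root              # messages from other nodes
    return np.maximum(from_rest @ W0 + from_root @ W1, 0.0)  # ReLU update

# Toy usage: 4-node ego network rooted at node 0.
adj = np.array([[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]], float)
h = np.random.randn(4, 3)
print(id_gnn_layer(adj, h, np.random.randn(3, 3), np.random.randn(3, 3), 0).shape)
```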
arXiv Detail & Related papers (2021-01-25T18:59:01Z)
- Learning Graph Neural Networks with Approximate Gradient Descent [24.49427608361397]
Two types of graph neural networks (GNNs) are investigated, depending on whether labels are attached to nodes or graphs.
A comprehensive framework for designing and analyzing convergence of GNN training algorithms is developed.
The proposed algorithm guarantees a linear convergence rate to the underlying true parameters of GNNs.
arXiv Detail & Related papers (2020-12-07T02:54:48Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order Weisfeiler-Lehman tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of structural features for graph representation learning.
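The simplest instance of DE is a one-hot shortest-path-distance feature relative to a target node set; the sketch below assumes that instance (the paper's general definition also covers random-walk-based measures).

```python
import numpy as np
from collections import deque

def distance_encoding(adj: np.ndarray, targets: list, max_d: int = 3) -> np.ndarray:
    """Sketch of a distance-encoding-style feature: for each node, one-hot
    encode its shortest-path distance to the target node set, capped at
    max_d (which also absorbs unreachable nodes)."""
    n = adj.shape[0]
    dist = np.full(n, max_d)
    q = deque(targets)
    for t in targets:
        dist[t] = 0
    while q:  # multi-source BFS from the target set
        u = q.popleft()
        for v in np.nonzero(adj[u])[0]:
            if dist[v] > dist[u] + 1 and dist[u] + 1 < max_d:
                dist[v] = dist[u] + 1
                q.append(v)
    feat = np.zeros((n, max_d + 1))
    feat[np.arange(n), dist] = 1.0  # one-hot distance feature
    return feat

# Toy usage: path graph 0-1-2-3-4, distances measured to node 0.
adj = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
print(distance_encoding(adj, [0]))
```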
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Infinitely Wide Graph Convolutional Networks: Semi-supervised Learning via Gaussian Processes [144.6048446370369]
Graph convolutional neural networks (GCNs) have recently demonstrated promising results on graph-based semi-supervised classification.
We propose a GP regression model via GCNs (GPGC) for graph-based semi-supervised learning.
We conduct extensive experiments to evaluate GPGC and demonstrate that it outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2020-02-26T10:02:32Z)