BRIGHT -- Graph Neural Networks in Real-Time Fraud Detection
- URL: http://arxiv.org/abs/2205.13084v1
- Date: Wed, 25 May 2022 23:51:27 GMT
- Title: BRIGHT -- Graph Neural Networks in Real-Time Fraud Detection
- Authors: Mingxuan Lu, Zhichao Han, Susie Xi Rao, Zitao Zhang, Yang Zhao, Yinan
Shan, Ramesh Raghunathan, Ce Zhang, Jiawei Jiang
- Abstract summary: We propose a Batch and Real-time Inception GrapH Topology (BRIGHT) framework to conduct end-to-end GNN learning.
The BRIGHT framework consists of a graph transformation module and a corresponding GNN architecture.
Our experiments show that BRIGHT outperforms the baseline models by more than 2% on average w.r.t. precision.
- Score: 23.226891472871248
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting fraudulent transactions is an essential component to control risk
in e-commerce marketplaces. Apart from rule-based and machine learning filters
that are already deployed in production, we want to enable efficient real-time
inference with graph neural networks (GNNs), which is useful to catch multi-hop
risk propagation in a transaction graph. However, two challenges arise in the
implementation of GNNs in production. First, future information in a dynamic
graph should not be considered in message passing to predict the past. Second,
the latency of graph query and GNN model inference is usually up to hundreds of
milliseconds, which is costly for some critical online services. To tackle
these challenges, we propose a Batch and Real-time Inception GrapH Topology
(BRIGHT) framework to conduct end-to-end GNN learning that allows efficient
online real-time inference. The BRIGHT framework consists of a graph transformation
module (Two-Stage Directed Graph) and a corresponding GNN architecture (Lambda
Neural Network). The Two-Stage Directed Graph guarantees that the information
passed through neighbors is only from the historical payment transactions. It
consists of two subgraphs representing historical relationships and real-time
links, respectively. The Lambda Neural Network decouples inference into two
stages: batch inference of entity embeddings and real-time inference of
transaction prediction. Our experiments show that BRIGHT outperforms the
baseline models by more than 2% on average w.r.t. precision. Furthermore, BRIGHT is
computationally efficient for real-time fraud detection. Regarding end-to-end
performance (including neighbor query and inference), BRIGHT can reduce the P99
latency by more than 75%. For the inference stage, our speedup is on average
7.8× compared to a traditional GNN.
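The listing carries no reference code, so the following is a minimal sketch of the decoupling the abstract describes, with all class and variable names our own assumptions: a GNN computes entity embeddings offline over the historical subgraph, and the real-time stage only runs a lightweight head over an embedding lookup plus the incoming transaction's features.
```python
import torch
import torch.nn as nn

class RealTimeHead(nn.Module):
    """Hypothetical real-time stage of a Lambda-style GNN: it scores a
    transaction from precomputed entity embeddings plus the transaction's
    own features, so no multi-hop graph traversal runs at request time."""

    def __init__(self, entity_dim: int, txn_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * entity_dim + txn_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, buyer_emb, seller_emb, txn_feats):
        # Real-time links (transaction -> entities) reduce to a simple
        # concatenation; the heavy message passing over historical edges
        # already happened offline when the embeddings were built.
        x = torch.cat([buyer_emb, seller_emb, txn_feats], dim=-1)
        return torch.sigmoid(self.mlp(x))

# Batch stage (offline): a GNN over the historical subgraph produces entity
# embeddings, stored in a lookup table / feature store. Faked here:
entity_emb = {"buyer_42": torch.randn(1, 32), "seller_7": torch.randn(1, 32)}

head = RealTimeHead(entity_dim=32, txn_dim=16)
txn_feats = torch.randn(1, 16)  # features of the incoming transaction
score = head(entity_emb["buyer_42"], entity_emb["seller_7"], txn_feats)
print(f"fraud score: {score.item():.3f}")
```
Keeping multi-hop aggregation out of the request path is what the abstract credits for the reported latency reduction.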
Related papers
- Certified Graph Unlearning [39.29148804411811]
Graph-structured data is ubiquitous in practice and often processed using graph neural networks (GNNs).
We introduce the first known framework for certified graph unlearning of GNNs.
Three different types of unlearning requests need to be considered: node feature, edge, and node unlearning.
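As a hypothetical illustration (our own data model, not the paper's API), the three request types map to distinct operations on graph topology and features:
```python
from dataclasses import dataclass
from typing import Union

@dataclass
class NodeFeatureUnlearning:
    node_id: int            # keep the node, erase (part of) its features

@dataclass
class EdgeUnlearning:
    src: int
    dst: int                # remove a single edge

@dataclass
class NodeUnlearning:
    node_id: int            # remove the node and all incident edges

UnlearningRequest = Union[NodeFeatureUnlearning, EdgeUnlearning, NodeUnlearning]

def apply_request(graph: dict, req: UnlearningRequest) -> None:
    """Mutate a simple adjacency-dict graph {node: set(neighbors)}."""
    if isinstance(req, NodeUnlearning):
        for nbr in graph.pop(req.node_id, set()):
            graph[nbr].discard(req.node_id)
    elif isinstance(req, EdgeUnlearning):
        graph[req.src].discard(req.dst)
        graph[req.dst].discard(req.src)
    # NodeFeatureUnlearning touches the feature matrix, not the topology;
    # the certified-removal guarantees themselves are out of scope here.
```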
arXiv Detail & Related papers (2022-06-18T07:41:10Z)
- Distributed Graph Neural Network Training with Periodic Historical Embedding Synchronization [9.503080586294406]
Graph Neural Networks (GNNs) are prevalent in various applications such as social networks, recommender systems, and knowledge graphs.
Traditional sampling-based methods accelerate GNN training by dropping edges and nodes, which impairs graph integrity and model performance.
This paper proposes DIstributed Graph Embedding SynchronizaTion (DIGEST), a novel distributed GNN training framework.
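A minimal cache-and-refresh sketch of the general idea behind periodic historical-embedding synchronization; class and method names are our own assumptions, not DIGEST's actual interface:
```python
import torch

class StaleEmbeddingCache:
    """Serve remote nodes' embeddings from a local, possibly stale copy
    and refresh it only every `sync_every` steps, instead of paying a
    communication round for every layer of every training step."""

    def __init__(self, remote_table: dict, sync_every: int):
        self.remote_table = remote_table   # stands in for other workers
        self.cache = dict(remote_table)    # local snapshot
        self.sync_every = sync_every
        self.step = 0

    def lookup(self, node_id: int) -> torch.Tensor:
        return self.cache[node_id]         # no network call on the hot path

    def tick(self) -> None:
        self.step += 1
        if self.step % self.sync_every == 0:
            self.cache = dict(self.remote_table)   # periodic bulk refresh

table = {i: torch.randn(8) for i in range(4)}  # embeddings held elsewhere
cache = StaleEmbeddingCache(table, sync_every=50)
emb = cache.lookup(2)  # used in place of a fresh remote embedding
cache.tick()
```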
arXiv Detail & Related papers (2022-05-31T18:44:53Z) - Evidential Temporal-aware Graph-based Social Event Detection via
Dempster-Shafer Theory [76.4580340399321]
We propose ETGNN, a novel Evidential Temporal-aware Graph Neural Network.
We construct view-specific graphs whose nodes are the texts and whose edges are determined by several types of shared elements, respectively.
Considering the view-specific uncertainty, the representations of all views are converted into mass functions through evidential deep learning (EDL) neural networks.
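The conversion into mass functions is standard enough to sketch generically (an EDL sketch in the style of Sensoy et al., not ETGNN's exact code): non-negative evidence parameterizes a Dirichlet, yielding per-class belief masses plus an explicit uncertainty mass.
```python
import torch
import torch.nn.functional as F

def to_mass_function(logits: torch.Tensor):
    """Generic evidential-deep-learning conversion: belief masses
    b_k = e_k / S and uncertainty u = K / S, where e_k >= 0 is the
    evidence and S = sum_k (e_k + 1) is the Dirichlet strength.
    The K belief masses and u always sum to one."""
    evidence = F.softplus(logits)                           # e_k >= 0
    strength = (evidence + 1.0).sum(dim=-1, keepdim=True)   # S
    belief = evidence / strength                            # per-class mass
    uncertainty = logits.shape[-1] / strength               # leftover mass u
    return belief, uncertainty

logits = torch.tensor([[2.0, 0.1, -1.0]])  # one text node, three classes
belief, u = to_mass_function(logits)
assert torch.allclose(belief.sum(dim=-1, keepdim=True) + u, torch.ones(1, 1))
```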
arXiv Detail & Related papers (2022-05-24T16:22:40Z)
- AEGNN: Asynchronous Event-based Graph Neural Networks [54.528926463775946]
Asynchronous Event-based Graph Neural Networks (AEGNNs) generalize standard GNNs to process events as "evolving" spatio-temporal graphs.
AEGNNs are easily trained on synchronous inputs and can be converted to efficient, "asynchronous" networks at test time.
arXiv Detail & Related papers (2022-03-31T16:21:12Z)
- Training Free Graph Neural Networks for Graph Matching [103.45755859119035]
TFGM is a framework for boosting the performance of Graph Neural Network (GNN) based graph matching without training.
Applying TFGM on various GNNs shows promising improvements over baselines.
arXiv Detail & Related papers (2022-01-14T09:04:46Z)
- Task and Model Agnostic Adversarial Attack on Graph Neural Networks [8.075575946756759]
Graph neural networks (GNNs) have witnessed significant adoption in the industry owing to impressive performance on various predictive tasks.
In this work, we investigate the adversarial robustness of GNNs, identify vulnerabilities, and link them to graph properties, which may lead to the development of more secure and robust GNNs.
arXiv Detail & Related papers (2021-12-25T18:39:21Z)
- Deep Fraud Detection on Non-attributed Graph [61.636677596161235]
Graph Neural Networks (GNNs) have shown solid performance on fraud detection.
However, labeled data is scarce in large-scale industrial problems, especially for fraud detection.
We propose a novel graph pre-training strategy to leverage more unlabeled data.
arXiv Detail & Related papers (2021-10-04T03:42:09Z)
- Neural Network Branch-and-Bound for Neural Network Verification [26.609606492971967]
We propose a novel machine learning framework that can be used for designing an effective branching strategy.
We learn two graph neural networks (GNNs) that both directly treat the network we want to verify as a graph input.
We show that our GNN models generalize well to harder properties on larger unseen networks.
arXiv Detail & Related papers (2021-07-27T14:42:57Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
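A sketch of the core binarization primitive such designs typically rely on (a generic illustration under our own assumptions, not this paper's specific strategies): sign quantization with a straight-through gradient estimator.
```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient estimator,
    the standard trick for training networks with +-1 weights/activations."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)  # quantize to {-1, 0, +1}; 0 only at exactly 0

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pass gradients through where |x| <= 1, block them elsewhere.
        return grad_out * (x.abs() <= 1).float()

# Illustrative use inside a message-passing step: binarize both node
# features and weights so aggregation becomes cheap +-1 arithmetic.
x = torch.randn(5, 8, requires_grad=True)   # node features
w = torch.randn(8, 4, requires_grad=True)   # layer weights
out = BinarizeSTE.apply(x) @ BinarizeSTE.apply(w)
out.sum().backward()  # gradients flow via the straight-through estimator
```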
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- Scaling Graph Neural Networks with Approximate PageRank [64.92311737049054]
We present the PPRGo model, which utilizes an efficient approximation of information diffusion in GNNs.
In addition to being faster, PPRGo is inherently scalable, and can be trivially parallelized for large datasets like those found in industry settings.
We show that training PPRGo and predicting labels for all nodes in this graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph.
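As a sketch of the primitive behind such models, here is the classic forward-push approximation of personalized PageRank over a toy adjacency dict, followed by a PPRGo-style diffusion of per-node predictions (our simplification; PPRGo's actual implementation and thresholds differ).
```python
import numpy as np

def ppr_push(adj: dict, source: int, alpha: float = 0.15, eps: float = 1e-4):
    """Approximate personalized PageRank via forward push (Andersen et
    al.): returns a sparse PPR vector touching only the nodes that matter."""
    p, r = {}, {source: 1.0}   # estimates and residual mass
    queue = [source]
    while queue:
        u = queue.pop()
        ru = r.get(u, 0.0)
        deg = len(adj[u]) or 1
        if ru < eps * deg:     # residual too small to push
            continue
        r[u] = 0.0
        p[u] = p.get(u, 0.0) + alpha * ru
        share = (1 - alpha) * ru / deg
        for v in adj[u]:
            r[v] = r.get(v, 0.0) + share
            if r[v] >= eps * (len(adj[v]) or 1):
                queue.append(v)
    return p

# PPRGo-style prediction: cheap per-node model outputs, diffused with the
# precomputed sparse PPR weights instead of layer-by-layer message passing.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
logits = np.random.randn(4, 3)                 # per-node MLP outputs
weights = ppr_push(adj, source=0)
pred = sum(w * logits[v] for v, w in weights.items())
```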
arXiv Detail & Related papers (2020-07-03T09:30:07Z)