GRANDE: a neural model over directed multigraphs with application to
anti-money laundering
- URL: http://arxiv.org/abs/2302.02101v1
- Date: Sat, 4 Feb 2023 05:54:25 GMT
- Title: GRANDE: a neural model over directed multigraphs with application to
anti-money laundering
- Authors: Ruofan Wu, Boqun Ma, Hong Jin, Wenlong Zhao, Weiqiang Wang, Tianyi
Zhang
- Abstract summary: We develop a novel GNN protocol that overcomes challenges via efficiently incorporating directional information.
We propose an enhancement that targets edge-related tasks using a novel message passing scheme over an extension of the edge-to-node dual graph.
A concrete GNN architecture called GRANDE is derived using the proposed protocol.
- Score: 20.113306761523713
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The application of graph representation learning techniques to the area of
financial risk management (FRM) has attracted significant attention recently.
However, directly modeling transaction networks using graph neural models
remains challenging: Firstly, transaction networks are directed multigraphs by
nature, which cannot be properly handled by most current off-the-shelf graph
neural networks (GNNs). Secondly, a crucial problem in FRM
scenarios like anti-money laundering (AML) is to identify risky transactions
and is most naturally cast into an edge classification problem with rich
edge-level features, which are not fully exploited by the prevailing GNN design
that follows node-centric message passing protocols. In this paper, we present
a systematic investigation of design aspects of neural models over directed
multigraphs and develop a novel GNN protocol that overcomes the above
challenges by efficiently incorporating directional information, and propose
an enhancement that targets edge-related tasks using a novel message passing
scheme over an extension of the edge-to-node dual graph. A concrete GNN
architecture called GRANDE is derived using the proposed protocol, with several
further improvements and generalizations to temporal dynamic graphs. We apply
the GRANDE model to both a real-world anti-money laundering task and public
datasets. Experimental evaluations show the superiority of the proposed GRANDE
architecture over recent state-of-the-art models on dynamic graph modeling and
directed graph modeling.
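The edge-to-node dual construction mentioned in the abstract can be made concrete with a small sketch. This is a hedged illustration, not the authors' implementation: every edge of a directed multigraph becomes a node of the dual graph, and a dual arc e1 -> e2 is added whenever e1's target node is e2's source node, so that message passing over the dual graph propagates information between transactions directly.

```python
# Hypothetical sketch of the edge-to-node dual of a directed multigraph,
# the structure that edge-level message passing schemes like GRANDE's
# operate on. Not the paper's code; names are illustrative.
from collections import defaultdict

# Toy transaction multigraph: (source, target, edge_id); parallel edges allowed.
edges = [
    ("A", "B", 0),  # transaction 0: A pays B
    ("A", "B", 1),  # transaction 1: a second, parallel A -> B payment
    ("B", "C", 2),  # transaction 2: B pays C
]

def edge_to_node_dual(edges):
    """Return the arcs of the directed dual graph: edge -> edges it feeds into."""
    out_by_source = defaultdict(list)
    for src, dst, eid in edges:
        out_by_source[src].append(eid)
    dual_arcs = []
    for src, dst, eid in edges:
        for nxt in out_by_source[dst]:  # edges departing from this edge's target
            dual_arcs.append((eid, nxt))
    return dual_arcs

print(edge_to_node_dual(edges))  # [(0, 2), (1, 2)]
```

Note how both parallel A -> B transactions become distinct dual nodes, which is exactly why the multigraph structure matters for edge classification tasks like AML.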
Related papers
- Attentional Graph Neural Networks for Robust Massive Network
Localization [20.416879207269446]
Graph neural networks (GNNs) have emerged as a prominent tool for classification tasks in machine learning.
This paper integrates GNNs with attention mechanism to tackle a challenging nonlinear regression problem: network localization.
We first introduce a novel network localization method based on graph convolutional network (GCN), which exhibits exceptional precision even under severe non-line-of-sight (NLOS) conditions.
arXiv Detail & Related papers (2023-11-28T15:05:13Z) - Re-Think and Re-Design Graph Neural Networks in Spaces of Continuous
Graph Diffusion Functionals [7.6435511285856865]
Graph neural networks (GNNs) are widely used in domains like social networks and biological systems.
The locality assumption of GNNs hampers their ability to capture long-range dependencies and global patterns in graphs.
We propose a new inductive bias based on variational analysis, drawing inspiration from the Brachistochrone problem.
arXiv Detail & Related papers (2023-07-01T04:44:43Z) - Graph-based Multi-ODE Neural Networks for Spatio-Temporal Traffic
Forecasting [8.832864937330722]
Long-range traffic forecasting remains a challenging task due to the intricate spatio-temporal correlations observed in traffic networks.
In this paper, we propose an architecture called Graph-based Multi-ODE Neural Networks (GRAM-ODE) which is designed with multiple connective ODE-GNN modules to learn better representations.
Our extensive set of experiments conducted on six real-world datasets demonstrate the superior performance of GRAM-ODE compared with state-of-the-art baselines.
arXiv Detail & Related papers (2023-05-30T02:10:42Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Dynamic Causal Explanation Based Diffusion-Variational Graph Neural
Network for Spatio-temporal Forecasting [60.03169701753824]
We propose a novel Dynamic Diffusion-Variational Graph Neural Network (DVGNN) for spatio-temporal forecasting.
The proposed DVGNN model outperforms state-of-the-art approaches and achieves outstanding Root Mean Squared Error results.
arXiv Detail & Related papers (2023-05-16T11:38:19Z) - MGNNI: Multiscale Graph Neural Networks with Implicit Layers [53.75421430520501]
Implicit graph neural networks (GNNs) have been proposed to capture long-range dependencies in underlying graphs.
We introduce and justify two weaknesses of implicit GNNs: the constrained expressiveness due to their limited effective range for capturing long-range dependencies, and their lack of ability to capture multiscale information on graphs at multiple resolutions.
We propose a multiscale graph neural network with implicit layers (MGNNI) which is able to model multiscale structures on graphs and has an expanded effective range for capturing long-range dependencies.
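The defining trait of implicit GNN layers described above can be sketched briefly. This is an illustrative toy, not the MGNNI reference code: the representation Z is defined implicitly as the fixed point of an equation Z = phi(gamma * A @ Z @ W + b(X)) and computed by iteration, rather than by stacking a fixed number of explicit layers, which is what gives implicit models their extended effective range.

```python
# Hypothetical fixed-point iteration behind implicit GNN layers.
# All names (gamma, W, bX) and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 4                                                # 5 nodes, 4-dim features
A = rng.random((n, n)); A /= A.sum(axis=1, keepdims=True)  # row-normalised adjacency
W = rng.standard_normal((d, d)) * 0.1                      # small weights -> contraction
bX = rng.standard_normal((n, d))                           # input injection b(X)

def implicit_layer(A, W, bX, gamma=0.8, tol=1e-8, max_iter=200):
    """Iterate Z <- tanh(gamma * A Z W + bX) until it stops changing."""
    Z = np.zeros_like(bX)
    for _ in range(max_iter):
        Z_next = np.tanh(gamma * A @ Z @ W + bX)
        if np.max(np.abs(Z_next - Z)) < tol:
            return Z_next
        Z = Z_next
    return Z

Z = implicit_layer(A, W, bX)
# At convergence, Z satisfies its own defining equation:
residual = np.max(np.abs(Z - np.tanh(0.8 * A @ Z @ W + bX)))
print(residual < 1e-6)  # True
```

The small weight scale keeps the update a contraction, so the iteration provably converges; the "limited effective range" weakness the paper identifies concerns how far information propagates through this fixed point.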
arXiv Detail & Related papers (2022-10-15T18:18:55Z) - MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - Multi-Graph Tensor Networks [23.030263841031633]
We introduce a novel Multi-Graph Tensor Network (MGTN) framework, which exploits the ability of graphs to handle irregular data sources and the compression properties of tensor networks in a deep learning setting.
By virtue of the MGTN, a FOREX currency graph is leveraged to impose an economically meaningful structure on this demanding task, resulting in a highly superior performance against three competing models and at a drastically lower complexity.
arXiv Detail & Related papers (2020-10-25T20:14:57Z) - Hierarchical Message-Passing Graph Neural Networks [12.207978823927386]
We propose a novel Hierarchical Message-passing Graph Neural Networks framework.
The key idea is to generate a hierarchical structure that re-organises all nodes in a flat graph into multi-level super graphs.
We present the first model to implement this framework, termed the Hierarchical Community-aware Graph Neural Network (HC-GNN).
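The flat-graph-to-super-graph step can be sketched in a few lines. This is a hypothetical illustration of the coarsening idea, not HC-GNN itself: nodes are grouped into communities, each community collapses into one super node, and cross-community edges become (weighted) super-graph edges over which coarser-level messages can pass.

```python
# Hypothetical super-graph coarsening: community labels are assumed given
# (e.g., from a community-detection step); names are illustrative.
from collections import defaultdict

flat_edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (2, 3)]
community = {0: "c1", 1: "c1", 2: "c1", 3: "c2", 4: "c2", 5: "c2"}

def build_super_graph(edges, community):
    """Collapse each community into one super node; weight = cross-community edge count."""
    super_edges = defaultdict(int)
    for u, v in edges:
        cu, cv = community[u], community[v]
        if cu != cv:
            key = tuple(sorted((cu, cv)))
            super_edges[key] += 1
    return dict(super_edges)

print(build_super_graph(flat_edges, community))  # {('c1', 'c2'): 1}
```

Applying the same collapse recursively yields the multi-level super graphs the framework describes.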
arXiv Detail & Related papers (2020-09-08T13:11:07Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.