Inductive inference of gradient-boosted decision trees on graphs for insurance fraud detection
- URL: http://arxiv.org/abs/2510.05676v1
- Date: Tue, 07 Oct 2025 08:35:12 GMT
- Title: Inductive inference of gradient-boosted decision trees on graphs for insurance fraud detection
- Authors: Félix Vandervorst, Bruno Deprez, Wouter Verbeke, Tim Verdonck
- Abstract summary: We present a novel inductive graph gradient boosting machine (G-GBM) for supervised learning on heterogeneous and dynamic graphs. We show that our estimator competes with popular graph neural network approaches in an experiment using a variety of simulated random graphs.
- Score: 5.0564566972893505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph-based methods are becoming increasingly popular in machine learning due to their ability to model complex data and relations. Insurance fraud is a prime use case, since false claims are often the result of organised criminals that stage accidents or the same persons filing erroneous claims on multiple policies. One challenge is that graph-based approaches struggle to find meaningful representations of the data because of the high class imbalance present in fraud data. Another is that insurance networks are heterogeneous and dynamic, given the changing relations among people, companies and policies. That is why gradient boosted tree approaches on tabular data still dominate the field. Therefore, we present a novel inductive graph gradient boosting machine (G-GBM) for supervised learning on heterogeneous and dynamic graphs. We show that our estimator competes with popular graph neural network approaches in an experiment using a variety of simulated random graphs. We demonstrate the power of G-GBM for insurance fraud detection using an open-source and a real-world, proprietary dataset. Given that the backbone model is a gradient boosting forest, we apply established explainability methods to gain better insights into the predictions made by G-GBM.
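The abstract describes the general recipe: keep a gradient-boosted forest as the backbone, but enrich each node's tabular features with information aggregated from its graph neighborhood. No code is published on this page, so the following is only a minimal sketch of that idea using mean-pooled neighbor features and scikit-learn's `GradientBoostingClassifier`; the aggregation scheme, the toy graph, and the labels are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier


def aggregate_neighbor_features(X, adj):
    """Mean-pool each node's neighbor features; isolated nodes keep zeros."""
    deg = adj.sum(axis=1, keepdims=True)
    safe_deg = np.where(deg > 0, deg, 1)  # avoid division by zero
    return (adj @ X) / safe_deg


# Toy example: 6 nodes on a ring, each with 3 raw features.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
adj = np.zeros((6, 6))
for i in range(6):
    adj[i, (i + 1) % 6] = adj[i, (i - 1) % 6] = 1.0

# Concatenate each node's own features with its aggregated neighborhood,
# then fit an ordinary gradient-boosted forest on the resulting table.
X_graph = np.hstack([X, aggregate_neighbor_features(X, adj)])
y = np.array([0, 0, 1, 0, 1, 0])  # hypothetical fraud labels
clf = GradientBoostingClassifier(n_estimators=20, random_state=0).fit(X_graph, y)
```

Because the final estimator is a plain boosted forest over a feature table, standard tree-based explainability tools (feature importances, SHAP-style attributions) apply directly, which matches the explainability claim in the abstract.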
Related papers
- Grad: Guided Relation Diffusion Generation for Graph Augmentation in Graph Fraud Detection [34.04981707677924]
Fraudsters disguise themselves by mimicking the behavioral data collected by platforms. This narrows the differences in behavioral traits between them and benign users within the platform's database. To address this problem, we propose a relation diffusion-based graph augmentation model, Grad.
arXiv Detail & Related papers (2025-12-19T23:32:36Z) - Gradient Inversion Attack on Graph Neural Networks [11.075042582118963]
Malicious attackers can steal private image data from the exchange of neural networks during federated learning. This paper studies the problem of whether private data can be reconstructed from leaked gradients in both node classification and graph classification tasks. Two widely used GNN frameworks are analyzed, namely GCN and GraphSAGE.
arXiv Detail & Related papers (2024-11-29T02:42:17Z) - Cluster Aware Graph Anomaly Detection [32.791460110557104]
We propose a cluster-aware multi-view graph anomaly detection method, called CARE. Our approach captures both local and global node affinities by augmenting the graph's adjacency matrix with the pseudo-label. We show that the proposed similarity-guided loss is a variant of contrastive learning loss.
arXiv Detail & Related papers (2024-09-15T15:41:59Z) - Deceptive Fairness Attacks on Graphs via Meta Learning [102.53029537886314]
We study deceptive fairness attacks on graphs to answer the question: How can we achieve poisoning attacks on a graph learning model to exacerbate the bias deceptively?
We propose a meta learning-based framework named FATE to attack various fairness definitions and graph learning models.
We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification.
arXiv Detail & Related papers (2023-10-24T09:10:14Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - The Devil is in the Conflict: Disentangled Information Graph Neural Networks for Fraud Detection [17.254383007779616]
We argue that the performance degradation is mainly attributed to the inconsistency between topology and attribute.
We propose a simple and effective method that uses the attention mechanism to adaptively fuse two views.
Our model can significantly outperform state-of-the-art baselines on real-world fraud detection datasets.
arXiv Detail & Related papers (2022-10-22T08:21:49Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Deep Fraud Detection on Non-attributed Graph [61.636677596161235]
Graph Neural Networks (GNNs) have shown solid performance on fraud detection.
However, labeled data is scarce in large-scale industrial problems, especially for fraud detection.
We propose a novel graph pre-training strategy to leverage more unlabeled data.
arXiv Detail & Related papers (2021-10-04T03:42:09Z) - Relational Graph Neural Networks for Fraud Detection in a Super-App environment [53.561797148529664]
We propose a framework of relational graph convolutional networks methods for fraudulent behaviour prevention in the financial services of a Super-App.
We use an interpretability algorithm for graph neural networks to determine the most important relations to the classification task of the users.
Our results show that there is added value in models that take advantage of the Super-App's alternative data and the interactions found in its highly connected network.
arXiv Detail & Related papers (2021-07-29T00:02:06Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.