GAT-COBO: Cost-Sensitive Graph Neural Network for Telecom Fraud
Detection
- URL: http://arxiv.org/abs/2303.17334v1
- Date: Wed, 29 Mar 2023 07:02:50 GMT
- Title: GAT-COBO: Cost-Sensitive Graph Neural Network for Telecom Fraud
Detection
- Authors: Xinxin Hu, Haotian Chen, Junjie Zhang, Hongchang Chen, Shuxin Liu,
Xing Li, Yahui Wang, and Xiangyang Xue
- Abstract summary: We propose a Graph ATtention network with COst-sensitive BOosting (GAT-COBO) for the graph imbalance problem.
Our proposed method is effective for the graph imbalance problem, outperforming the state-of-the-art GNNs and GNN-based fraud detectors.
Our model is also helpful for solving the widespread over-smoothing problem in GNNs.
- Score: 37.574237866502905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Along with the rapid evolution of mobile communication technologies, such as
5G, there has been a drastic increase in telecom fraud, which significantly
dissipates individual fortune and social wealth. In recent years, graph mining
techniques have gradually become a mainstream solution for detecting telecom
fraud. However, the graph imbalance problem, caused by the Pareto principle,
brings severe challenges to graph data mining. This is a new and challenging
problem that has received little attention in previous work. In this paper, we propose a
Graph ATtention network with COst-sensitive BOosting (GAT-COBO) for the graph
imbalance problem. First, we design a GAT-based base classifier to learn the
embeddings of all nodes in the graph. Then, we feed the embeddings into a
well-designed cost-sensitive learner for imbalanced learning. Next, we update
the weights according to the misclassification cost to make the model focus
more on the minority class. Finally, we sum the node embeddings obtained by
multiple cost-sensitive learners to obtain a comprehensive node representation,
which is used for the downstream anomaly detection task. Extensive experiments
on two real-world telecom fraud detection datasets demonstrate that our
proposed method is effective for the graph imbalance problem, outperforming the
state-of-the-art GNNs and GNN-based fraud detectors. In addition, our model is
also helpful for solving the widespread over-smoothing problem in GNNs. The
GAT-COBO code and datasets are available at https://github.com/xxhu94/GAT-COBO.
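The boosting step described in the abstract (re-weighting nodes by misclassification cost so the model focuses on the minority class) can be sketched as follows. This is a minimal, AdaBoost-style illustration under assumed per-class costs; the function name, cost dictionary, and exact update rule are hypothetical and not the paper's precise formulation:

```python
import numpy as np

def cost_sensitive_update(weights, y_true, y_pred, costs):
    """One boosting round: re-weight nodes by misclassification cost.

    weights : current per-node sample weights (sum to 1)
    y_true, y_pred : binary label arrays (1 = fraud, the minority class)
    costs : per-class misclassification cost, e.g. {0: 1.0, 1: 5.0}
    """
    miss = (y_true != y_pred).astype(float)
    # the cost of each node's error depends on its true class
    c = np.array([costs[y] for y in y_true])
    # weighted, cost-scaled error rate of this learner
    err = np.sum(weights * miss * c) / np.sum(weights * c)
    err = np.clip(err, 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)  # learner's vote weight
    # up-weight costly mistakes, down-weight correct predictions
    new_w = weights * np.exp(alpha * c * (2.0 * miss - 1.0))
    return new_w / new_w.sum(), alpha
```

In a full pipeline, each boosting round would train a GAT base classifier on the re-weighted nodes, and the per-learner node embeddings would then be summed (weighted by each learner's `alpha`) to form the final representation for anomaly detection.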
Related papers
- Mitigating Degree Bias in Signed Graph Neural Networks [5.042342963087923]
Signed Graph Neural Networks (SGNNs) face fairness issues arising from the source data and typical aggregation methods.
In this paper, we pioneer the investigation of fairness in SGNNs, extending it from GNNs.
We identify the issue of degree bias within signed graphs, offering a new perspective on the fairness issues related to SGNNs.
arXiv Detail & Related papers (2024-08-16T03:22:18Z)
- Fair Graph Neural Network with Supervised Contrastive Regularization [12.666235467177131]
We propose a novel model for training fairness-aware Graph Neural Networks (GNNs)
Our approach integrates Supervised Contrastive Loss and Environmental Loss to enhance both accuracy and fairness.
arXiv Detail & Related papers (2024-04-09T07:49:05Z)
- Cost Sensitive GNN-based Imbalanced Learning for Mobile Social Network Fraud Detection [37.14877936257601]
We present a novel Cost-Sensitive Graph Neural Network (CSGNN) by creatively combining cost-sensitive learning and graph neural networks.
The results show that CSGNN can effectively solve the graph imbalance problem and thus achieve better detection performance than state-of-the-art algorithms.
arXiv Detail & Related papers (2023-03-28T01:43:32Z)
- Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation [68.59161853439339]
We propose a novel method for generating unlearnable graph examples.
By injecting delusive but imperceptible noise into graphs using our Error-Minimizing Structural Poisoning (EMinS) module, we are able to make the graphs unexploitable.
arXiv Detail & Related papers (2023-03-05T03:30:22Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaption on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Certified Graph Unlearning [39.29148804411811]
Graph-structured data is ubiquitous in practice and often processed using graph neural networks (GNNs).
We introduce the first known framework for certified graph unlearning of GNNs.
Three different types of unlearning requests need to be considered: node feature, edge, and node unlearning.
arXiv Detail & Related papers (2022-06-18T07:41:10Z)
- Deep Fraud Detection on Non-attributed Graph [61.636677596161235]
Graph Neural Networks (GNNs) have shown solid performance on fraud detection.
However, labeled data is scarce in large-scale industrial problems, especially for fraud detection.
We propose a novel graph pre-training strategy to leverage more unlabeled data.
arXiv Detail & Related papers (2021-10-04T03:42:09Z)
- An Introduction to Robust Graph Convolutional Networks [71.68610791161355]
We propose novel Robust Graph Convolutional Networks for possibly erroneous single-view or multi-view data.
By incorporating extra layers based on autoencoders into traditional graph convolutional networks, we characterize and handle typical error models explicitly.
arXiv Detail & Related papers (2021-03-27T04:47:59Z)
- Stealing Links from Graph Neural Networks [72.85344230133248]
Recently, neural networks have been extended to graph data; these models are known as graph neural networks (GNNs).
Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection.
We propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.
arXiv Detail & Related papers (2020-05-05T13:22:35Z)
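The link-stealing idea in the last entry rests on a simple observation: a GNN tends to produce similar output posteriors for connected nodes, so an attacker who can query the model may guess edges from posterior similarity. A minimal illustration of that intuition (the function name and threshold are assumptions, not the paper's exact attack):

```python
import numpy as np

def infer_links(posteriors, threshold=0.95):
    """Guess which node pairs are connected from a GNN's output posteriors.

    posteriors : (n_nodes, n_classes) class-probability matrix obtained by
                 querying the target model. High cosine similarity between
                 two nodes' posteriors is taken as evidence of an edge.
    """
    # normalize rows, then compute all pairwise cosine similarities
    norm = posteriors / np.linalg.norm(posteriors, axis=1, keepdims=True)
    sim = norm @ norm.T
    n = len(sim)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if sim[i, j] >= threshold]
```

A real attack would calibrate the threshold (or train a classifier over similarity features) using a shadow model, rather than fixing it by hand.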
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.