An Attack Method for Medical Insurance Claim Fraud Detection based on Generative Adversarial Network
- URL: http://arxiv.org/abs/2506.19871v1
- Date: Sun, 22 Jun 2025 05:02:45 GMT
- Title: An Attack Method for Medical Insurance Claim Fraud Detection based on Generative Adversarial Network
- Authors: Yining Pang, Chenghan Li
- Abstract summary: Insurance fraud detection represents a pivotal advancement in modern insurance services. We propose a GAN-based approach to conduct adversarial attacks on fraud detection systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Insurance fraud detection represents a pivotal advancement in modern insurance services, providing intelligent and digitalized monitoring to enhance management and prevent fraud. It is crucial for ensuring the security and efficiency of insurance systems. Although AI and machine learning algorithms have demonstrated strong performance in detecting fraudulent claims, the absence of standardized defense mechanisms renders current systems vulnerable to emerging adversarial threats. In this paper, we propose a GAN-based approach to conduct adversarial attacks on fraud detection systems. Our results indicate that an attacker, without knowledge of the training data or internal model details, can generate fraudulent cases that are classified as legitimate with a 99% attack success rate (ASR). By subtly modifying real insurance records and claims, adversaries can significantly increase the fraud risk, potentially bypassing compromised detection systems. These findings underscore the urgent need to enhance the robustness of insurance fraud detection models against adversarial manipulation, thereby ensuring the stability and reliability of insurance systems.
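The paper itself ships no code; the sketch below is only a schematic of the attack setting the abstract describes, under explicit assumptions: a surrogate detector trained on attacker-collected data stands in for the black-box target, and `ClaimGenerator`, `N_FEATURES`, and `fraud_loader` are hypothetical names, not the authors' implementation. The discriminator that would keep perturbed claims close to the distribution of real claims (the "adversarial" half of the GAN) is omitted for brevity.

```python
# Schematic sketch only, not the authors' code. A generator learns a small
# additive perturbation that makes known-fraud claim vectors score as
# legitimate on a surrogate detector; the perturbed claims are then assumed
# to transfer to the real (black-box) system.
import torch
import torch.nn as nn

N_FEATURES = 32  # hypothetical size of a numeric claim feature vector

class ClaimGenerator(nn.Module):
    """Maps a fraudulent claim to a subtly modified version of itself."""
    def __init__(self, n_features=N_FEATURES, eps=0.1):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_features), nn.Tanh(),  # bounded perturbation
        )

    def forward(self, x):
        return x + self.eps * self.net(x)

def train_attack(gen, surrogate, fraud_loader, epochs=10, lr=1e-3):
    """Push the surrogate's output toward the 'legitimate' label."""
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, _ in fraud_loader:  # batches of known-fraud claims
            logits = surrogate(gen(x)).squeeze(-1)
            loss = bce(logits, torch.zeros_like(logits))  # 0 = legitimate
            opt.zero_grad(); loss.backward(); opt.step()
    return gen
```

Attack success rate (ASR) is then the fraction of perturbed fraud cases the deployed detector labels legitimate; the abstract's 99% figure refers to that quantity.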
Related papers
- Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention [65.47632669243657]
A dishonest institution can exploit abstention mechanisms to discriminate or unjustly deny services under the guise of uncertainty. We demonstrate the practicality of this threat by introducing an uncertainty-inducing attack called Mirage. We propose Confidential Guardian, a framework that analyzes calibration metrics on a reference dataset to detect artificially suppressed confidence.
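The calibration check can be illustrated with a minimal sketch (the cryptographic verification that gives Confidential Guardian its name is omitted, and all names here are hypothetical): on a trusted reference set, artificially suppressed confidence shows up as systematic under-confidence, i.e. per-bin accuracy well above per-bin mean confidence.

```python
# Minimal sketch of the calibration signal, not the Confidential Guardian
# protocol; the cryptographic attestation layer is out of scope here.
import numpy as np

def calibration_gap(confidences, correct, n_bins=10):
    """Per-bin (accuracy - mean confidence) on a trusted reference set.
    Large positive gaps suggest confidence is being artificially
    suppressed (the model is right far more often than it claims)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    gaps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences >= lo) & (confidences < hi)
        gaps.append(correct[mask].mean() - confidences[mask].mean()
                    if mask.any() else 0.0)
    return np.array(gaps)
```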
arXiv Detail & Related papers (2025-05-29T19:47:50Z)
- CRUPL: A Semi-Supervised Cyber Attack Detection with Consistency Regularization and Uncertainty-aware Pseudo-Labeling in Smart Grid [0.5499796332553707]
Cyberattacks on smart grids can compromise data integrity and jeopardize the reliability of the power supply. Traditional intrusion detection systems often struggle to detect novel and sophisticated attacks effectively. This work proposes a semi-supervised method for cyber-attack detection in smart grids that leverages both labeled and unlabeled measurement data.
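As a rough illustration of the two ingredients named in the title, and not the authors' CRUPL implementation, the following sketch combines a consistency loss between two noisy views of unlabeled measurements with pseudo-labels kept only when the model is confident (a simple stand-in for the paper's uncertainty-aware criterion); `conf_threshold`, `noise_std`, and `lam` are illustrative hyperparameters.

```python
# Rough sketch of consistency regularization + confidence-filtered
# pseudo-labeling, not the authors' CRUPL implementation.
import torch
import torch.nn.functional as F

def semi_supervised_step(model, x_lab, y_lab, x_unlab, opt,
                         noise_std=0.01, conf_threshold=0.95, lam=1.0):
    opt.zero_grad()
    # supervised term on the scarce labeled measurements
    sup_loss = F.cross_entropy(model(x_lab), y_lab)
    # consistency: predictions should agree across small input perturbations
    logits_a = model(x_unlab + noise_std * torch.randn_like(x_unlab))
    logits_b = model(x_unlab + noise_std * torch.randn_like(x_unlab))
    cons_loss = F.mse_loss(logits_a.softmax(-1), logits_b.softmax(-1))
    # pseudo-labeling: trust only predictions the model is confident about
    with torch.no_grad():
        conf, pseudo = model(x_unlab).softmax(-1).max(-1)
        keep = conf > conf_threshold
    pl_loss = (F.cross_entropy(model(x_unlab[keep]), pseudo[keep])
               if keep.any() else torch.tensor(0.0))
    loss = sup_loss + lam * (cons_loss + pl_loss)
    loss.backward(); opt.step()
    return loss.item()
```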
arXiv Detail & Related papers (2025-03-01T05:49:23Z)
- IDU-Detector: A Synergistic Framework for Robust Masquerader Attack Detection [3.3821216642235608]
In the digital age, users store personal data in corporate databases, making data security central to enterprise management.
Given the extensive attack surface, assets face challenges like weak authentication, vulnerabilities, and malware.
We introduce the IDU-Detector, integrating Intrusion Detection Systems (IDS) with User and Entity Behavior Analytics (UEBA).
This integration monitors unauthorized access, bridges system gaps, ensures continuous monitoring, and enhances threat identification.
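The paper describes an architecture rather than a single algorithm, but the UEBA half can be illustrated with a purely hypothetical sketch: score how far a user's current session deviates from their own historical behavior profile; the names and feature representation here are assumptions, not the authors' design.

```python
# Purely illustrative UEBA scoring, not the IDU-Detector design.
import numpy as np

def ueba_anomaly_score(history, session):
    """history: (n_sessions, n_features) of a user's past behavior;
    session: (n_features,) current session features.
    Higher scores = behavior further from the user's own baseline."""
    med = np.median(history, axis=0)
    mad = np.median(np.abs(history - med), axis=0) + 1e-9
    z = np.abs(session - med) / (1.4826 * mad)  # robust z-scores
    return float(z.mean())
```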
arXiv Detail & Related papers (2024-11-09T13:03:29Z)
- Case Study: Neural Network Malware Detection Verification for Feature and Image Datasets [5.198311758274061]
We present a novel verification domain that helps ensure tangible safeguards against adversaries.
We describe malware classification and two types of common malware datasets.
We outline the challenges and future considerations necessary for the improvement and refinement of the verification of malware classification.
arXiv Detail & Related papers (2024-04-08T17:37:22Z)
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
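The summary names online adversarial training as the robustness ingredient; below is a minimal sketch of that generic technique (FGSM-based, which may differ from FaultGuard's exact procedure).

```python
# Generic online adversarial training step (FGSM), which may differ
# from FaultGuard's exact technique.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, opt, eps=0.05):
    # craft worst-case perturbed measurements against the current model
    x_adv = x.clone().detach().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
    x_adv = (x_adv + eps * grad.sign()).detach()
    # fit clean and adversarial batches jointly
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward(); opt.step()
    return loss.item()
```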
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- A Zero Trust Framework for Realization and Defense Against Generative AI Attacks in Power Grid [62.91192307098067]
This paper proposes a novel zero trust framework for a power grid supply chain (PGSC).
It facilitates early detection of potential GenAI-driven attack vectors, assessment of tail risk-based stability measures, and mitigation of such threats.
Experimental results show that the proposed zero trust framework achieves an accuracy of 95.7% on attack vector generation, a risk measure of 9.61% for a 95% stable PGSC, and a 99% confidence in defense against GenAI-driven attacks.
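The "tail risk-based stability measure" is not specified in the summary; one standard measure with that flavor is conditional value-at-risk (CVaR), sketched below under the explicit assumption that it matches the paper's intent.

```python
# Assumption: the stability measure behaves like conditional
# value-at-risk (CVaR); the paper may define it differently.
import numpy as np

def cvar(losses, alpha=0.95):
    """Mean loss in the worst (1 - alpha) tail of the distribution."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)  # value-at-risk at level alpha
    return float(losses[losses >= var].mean())
```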
arXiv Detail & Related papers (2024-03-11T02:47:21Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
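As a rough illustration of what "information-theoretic detectability" means in practice (not the epsilon-illusory algorithm itself), one can estimate how statistically distinguishable attacked observations are from clean ones, e.g. with a crude histogram-based KL divergence over 1-D observation streams.

```python
# Crude empirical detectability estimate, not the epsilon-illusory method.
import numpy as np

def empirical_kl(clean_obs, attacked_obs, bins=20):
    """KL(clean || attacked) over binned 1-D observation arrays; small
    values mean the attack is hard to detect from observations alone."""
    lo = min(clean_obs.min(), attacked_obs.min())
    hi = max(clean_obs.max(), attacked_obs.max())
    p, _ = np.histogram(clean_obs, bins=bins, range=(lo, hi))
    q, _ = np.histogram(attacked_obs, bins=bins, range=(lo, hi))
    p = (p + 1e-9) / (p + 1e-9).sum()
    q = (q + 1e-9) / (q + 1e-9).sum()
    return float(np.sum(p * np.log(p / q)))
```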
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Certifiers Make Neural Networks Vulnerable to Availability Attacks [70.69104148250614]
We show for the first time that fallback strategies can be deliberately triggered by an adversary.
In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback.
We design two novel availability attacks, which show the practical relevance of these threats.
arXiv Detail & Related papers (2021-08-25T15:49:10Z)
- Adversarial Attacks for Tabular Data: Application to Fraud Detection and Imbalanced Data [3.2458203725405976]
Adversarial attacks aim at producing adversarial examples, in other words, slightly modified inputs that induce the AI system to return incorrect outputs.
In this paper we illustrate a novel approach to modify and adapt state-of-the-art algorithms to imbalanced data, in the context of fraud detection.
Experimental results show that the proposed modifications lead to a perfect attack success rate.
When applied to a real-world production system, the proposed techniques show that they can pose a serious threat to the robustness of advanced AI-based fraud detection procedures.
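For intuition about how such attacks are evaluated on imbalanced tabular data (a generic sketch, not the paper's algorithm): perturb only attacker-controllable columns of minority-class (fraud) rows with projected gradient steps and report the fraction flipped to "legitimate"; the two-logit layout and `editable_mask` are assumptions.

```python
# Generic evaluation sketch, not the paper's algorithm. Assumes a
# two-logit detector (column 1 = "fraud") and a 0/1 `editable_mask`
# marking attacker-controllable feature columns.
import torch

def attack_asr(model, x_fraud, editable_mask, eps=0.1, steps=20, lr=0.02):
    x_adv = x_fraud.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        fraud_logit = model(x_adv)[:, 1].sum()
        grad, = torch.autograd.grad(fraud_logit, x_adv)
        with torch.no_grad():
            x_adv = x_adv - lr * grad * editable_mask  # editable columns only
            x_adv = x_fraud + (x_adv - x_fraud).clamp(-eps, eps)
    preds = model(x_adv).argmax(-1)
    return (preds == 0).float().mean().item()  # fraction now "legitimate"
```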
arXiv Detail & Related papers (2021-01-20T08:58:29Z)
- Uncovering Insurance Fraud Conspiracy with Network Learning [34.609076567889694]
We develop a novel data-driven procedure to identify groups of organized fraudsters.
We introduce a device-sharing network among claimants.
We then develop an automated solution for fraud detection based on graph learning algorithms.
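The paper's solution is based on graph learning algorithms; the device-sharing idea itself can be illustrated with a much simpler structural proxy (a hypothetical networkx sketch, not the production system): build a bipartite claimant-device graph and flag components where many claimants share few devices.

```python
# Hypothetical networkx sketch of the device-sharing idea, not the
# paper's graph-learning solution or production system.
import networkx as nx

def suspicious_groups(records, min_claimants=3):
    """records: iterable of (claimant_id, device_id) pairs from claims.
    Flags connected components where several claimants funnel through
    fewer devices than people, a crude proxy for organized fraud rings."""
    g = nx.Graph()
    for claimant, device in records:
        g.add_edge(("claimant", claimant), ("device", device))
    groups = []
    for comp in nx.connected_components(g):
        claimants = {n for kind, n in comp if kind == "claimant"}
        devices = {n for kind, n in comp if kind == "device"}
        if len(claimants) >= min_claimants and len(devices) < len(claimants):
            groups.append(claimants)
    return groups
```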
arXiv Detail & Related papers (2020-02-27T13:15:30Z)