Deep Generative Attacks and Countermeasures for Data-Driven Offline Signature Verification
- URL: http://arxiv.org/abs/2312.00987v2
- Date: Wed, 17 Jul 2024 21:44:45 GMT
- Title: Deep Generative Attacks and Countermeasures for Data-Driven Offline Signature Verification
- Authors: An Ngo, Rajesh Kumar, Phuong Cao
- Abstract summary: This study investigates the vulnerabilities of data-driven offline signature verification (DASV) systems to generative attacks.
We explore the efficacy of Variational Autoencoders (VAEs) and Conditional Generative Adversarial Networks (CGANs) in creating deceptive signatures that challenge DASV systems.
- Score: 2.0368479127360093
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study investigates the vulnerabilities of data-driven offline signature verification (DASV) systems to generative attacks and proposes robust countermeasures. Specifically, we explore the efficacy of Variational Autoencoders (VAEs) and Conditional Generative Adversarial Networks (CGANs) in creating deceptive signatures that challenge DASV systems. Using the Structural Similarity Index (SSIM) to evaluate the quality of forged signatures, we assess their impact on DASV systems built with Xception, ResNet152V2, and DenseNet201 architectures. Initial results showed False Accept Rates (FARs) ranging from 0% to 5.47% across all models and datasets. However, exposure to synthetic signatures significantly increased FARs, with rates ranging from 19.12% to 61.64%. The proposed countermeasure, i.e., retraining the models with real + synthetic datasets, was very effective, reducing FARs between 0% and 0.99%. These findings emphasize the necessity of investigating vulnerabilities in security systems like DASV and reinforce the role of generative methods in enhancing the security of data-driven systems.
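The two metrics the abstract leans on are easy to reproduce in outline. The following is a minimal sketch, not the authors' code: SSIM (via scikit-image) scores how close a generated forgery is to a genuine sample, and FAR is the fraction of forgeries a verifier accepts. `verifier_scores` and the 0.5 accept threshold are hypothetical stand-ins for the Xception/ResNet152V2/DenseNet201 verifiers.

```python
# Minimal sketch of the two evaluation metrics; not the authors' pipeline.
import numpy as np
from skimage.metrics import structural_similarity

def forgery_quality(genuine: np.ndarray, forged: np.ndarray) -> float:
    """SSIM between a genuine signature image and a generated forgery (grayscale)."""
    return structural_similarity(
        genuine, forged, data_range=float(genuine.max() - genuine.min())
    )

def false_accept_rate(verifier_scores: np.ndarray, threshold: float = 0.5) -> float:
    """FAR: fraction of forged inputs scored above the accept threshold.

    `verifier_scores` is assumed to be any DASV model's output on forged
    signatures (e.g., a CNN's sigmoid score); the threshold is illustrative.
    """
    return float(np.mean(verifier_scores > threshold))
```

The proposed countermeasure then amounts to retraining the same verifier on the union of real and synthetic signatures and re-measuring FAR.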
Related papers
- Optimizing Intrusion Detection System Performance Through Synergistic Hyperparameter Tuning and Advanced Data Processing [3.3148772440755527]
Intrusion detection is vital for securing computer networks against malicious activities.
To address this challenge, we propose a system combining deep learning, data balancing, and dimensionality reduction.
By training on extensive datasets like CIC IDS 2018 and CIC IDS 2017, our models demonstrate robust performance and generalization.
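As a rough illustration of that recipe (not the paper's system), the sketch below balances an imbalanced flow dataset by random oversampling and reduces its dimensionality before fitting a classifier; the feature counts and the RandomForest choice are assumptions.

```python
# Illustrative pipeline: oversample the minority class, then reduce dimensions.
import numpy as np
from imblearn.over_sampling import RandomOverSampler  # pip install imbalanced-learn
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 78))                         # stand-in for CIC-IDS flow features
y = rng.choice([0, 1], size=1000, p=[0.95, 0.05])  # heavily imbalanced labels

X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X, y)
X_red = PCA(n_components=20).fit_transform(X_bal)  # dimensionality reduction
clf = RandomForestClassifier(random_state=0).fit(X_red, y_bal)
```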
arXiv Detail & Related papers (2024-08-03T14:09:28Z) - Deep Learning for Network Anomaly Detection under Data Contamination: Evaluating Robustness and Mitigating Performance Degradation [0.0]
Deep learning (DL) has emerged as a crucial tool in network anomaly detection (NAD) for cybersecurity.
While DL models for anomaly detection excel at extracting features and learning patterns from data, they are vulnerable to data contamination.
This study evaluates the robustness of six unsupervised DL algorithms against data contamination.
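The evaluation protocol can be pictured as below. This is a hedged sketch: IsolationForest stands in for the six unsupervised DL detectors, and synthetic Gaussian data stands in for network traffic.

```python
# Sketch: train an anomaly detector on increasingly contaminated data,
# then measure how detection quality (ROC AUC) degrades.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, (2000, 10))      # benign traffic stand-in
anomalies = rng.normal(4.0, 1.0, (300, 10))    # attack traffic stand-in
X_test = np.vstack([normal[:500], anomalies[:100]])
y_test = np.r_[np.zeros(500), np.ones(100)]

for rate in (0.0, 0.05, 0.10):                 # fraction of attacks leaking into training
    n_bad = int(rate * 1500)
    X_train = np.vstack([normal[500:2000 - n_bad], anomalies[100:100 + n_bad]])
    det = IsolationForest(random_state=0).fit(X_train)
    auc = roc_auc_score(y_test, -det.score_samples(X_test))  # higher = more anomalous
    print(f"contamination {rate:.0%}: AUC {auc:.3f}")
```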
arXiv Detail & Related papers (2024-07-11T19:47:37Z) - Enhancing IoT Security with CNN and LSTM-Based Intrusion Detection Systems [0.23408308015481666]
Our proposed model combines convolutional neural network (CNN) and long short-term memory (LSTM) deep learning (DL) models.
This fusion facilitates the detection and classification of IoT traffic into two categories: benign and malicious activity.
Our proposed model achieves an accuracy rate of 98.42%, accompanied by a minimal loss of 0.0275.
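A minimal Keras sketch of such a CNN-LSTM hybrid follows; the input shape, layer sizes, and single feature channel are assumptions, not the paper's exact architecture.

```python
# Assumed CNN-LSTM hybrid for binary (benign/malicious) traffic classification.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(100, 1)),             # 100 per-flow features, 1 channel
    tf.keras.layers.Conv1D(64, 3, activation="relu"),  # local feature extraction
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),                          # sequential dependencies
    tf.keras.layers.Dense(1, activation="sigmoid"),    # benign vs. malicious
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```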
arXiv Detail & Related papers (2024-05-28T22:12:15Z) - Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike existing methods that design a backdoor for the input/output space of diffusion models, our method embeds the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z) - FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
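Online adversarial training of the kind the summary mentions usually looks like the PyTorch sketch below, which uses a single FGSM step; FaultGuard's actual technique may differ, and `model`, `x`, `y` are placeholders.

```python
# Hedged sketch of one online adversarial training step (FGSM-style).
import torch
import torch.nn.functional as F

def adversarial_step(model, x, y, optimizer, eps=0.05):
    # craft a worst-case perturbation of the input within an eps ball
    x_req = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)
    x_adv = (x + eps * grad.sign()).detach()
    # train on clean and adversarial inputs together
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```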
arXiv Detail & Related papers (2024-03-26T08:51:23Z) - Machine learning-based network intrusion detection for big and imbalanced data using oversampling, stacking feature embedding and feature extraction [6.374540518226326]
Intrusion Detection Systems (IDS) play a critical role in protecting interconnected networks by detecting malicious actors and activities.
This paper introduces a novel ML-based network intrusion detection model that uses Random Oversampling (RO) to address data imbalance, stacking feature embedding, and Principal Component Analysis (PCA) for dimensionality reduction.
Using the CIC-IDS 2017 dataset, Decision Tree (DT), Random Forest (RF), and Extra Trees (ET) models reach 99.99% accuracy, while DT and RF models obtain 99.94% accuracy on the CIC-IDS 2018 dataset.
arXiv Detail & Related papers (2024-01-22T05:49:41Z) - Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to model stealing attacks, which aim to replicate the target model through query access.
We introduce three model stealing attacks to adapt to different actual scenarios.
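The common skeleton of such query-based attacks is sketched below in plain scikit-learn terms; it is not one of the paper's three GNN-specific attacks, and `target_predict` / `query_pool` are hypothetical names.

```python
# Generic model stealing: label attacker-chosen inputs with the target's
# predictions, then fit a surrogate on those pseudo-labels.
import numpy as np
from sklearn.neural_network import MLPClassifier

def steal(target_predict, query_pool: np.ndarray) -> MLPClassifier:
    pseudo_labels = target_predict(query_pool)   # only query access is needed
    surrogate = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
    return surrogate.fit(query_pool, pseudo_labels)
```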
arXiv Detail & Related papers (2023-12-18T05:42:31Z) - SigScatNet: A Siamese + Scattering based Deep Learning Approach for Signature Forgery Detection and Similarity Assessment [1.474723404975345]
This research paper introduces SigScatNet, an innovative solution to combat the surge in counterfeit signatures.
A Siamese deep learning network, bolstered by scattering wavelets, is used to detect signature forgery and assess signature similarity.
In experiments, SigScatNet achieves an Equal Error Rate of 3.689% on the ICDAR SigComp Dutch dataset and 0.0578% on the CEDAR dataset.
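The Siamese half of the idea reduces to embedding both signatures with shared weights and thresholding the distance, as in the PyTorch sketch below; the scattering-wavelet front end is omitted and a small CNN stands in, so this is an assumption-laden outline rather than SigScatNet itself.

```python
# Siamese-style similarity check with a shared embedder (scattering omitted).
import torch
import torch.nn as nn

class Embedder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(16 * 8 * 8, 64),
        )

    def forward(self, x):                      # x: (B, 1, H, W) signature images
        return self.net(x)

def same_signer(embedder, sig_a, sig_b, threshold=1.0):
    d = torch.norm(embedder(sig_a) - embedder(sig_b), dim=1)
    return d < threshold                       # True: accepted as the same signer
```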
arXiv Detail & Related papers (2023-11-09T18:38:46Z) - Robust Trajectory Prediction against Adversarial Attacks [84.10405251683713]
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving systems.
These methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions.
In this work, we identify two key ingredients to defend trajectory prediction models against adversarial attacks.
arXiv Detail & Related papers (2022-07-29T22:35:05Z) - Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking [83.48804199140758]
We propose a learning-to-mis-rank formulation to perturb the ranking of the system output.
We also perform a black-box attack by developing a novel multi-stage network architecture.
Our method can control the number of malicious pixels by using differentiable multi-shot sampling.
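A simplified version of a mis-ranking objective is sketched below; the paper's full formulation (and its multi-shot sampling) is more involved, so treat this as an assumed outline.

```python
# Mis-ranking sketch: push the query away from true matches and toward
# non-matches, inverting the retrieval order.
import torch

def mis_rank_loss(q, pos, neg, margin=0.5):
    """q: (D,) query embedding; pos/neg: (N, D) matching / non-matching gallery embeddings."""
    d_pos = torch.cdist(q.unsqueeze(0), pos).mean()   # attacker wants this LARGE
    d_neg = torch.cdist(q.unsqueeze(0), neg).mean()   # attacker wants this SMALL
    return torch.clamp(d_neg - d_pos + margin, min=0.0)
```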
arXiv Detail & Related papers (2020-04-08T18:48:29Z) - Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
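In outline, ALOE-style training perturbs both inliers and outliers adversarially and asks the model to stay correct on the former and uniform on the latter; the single-step sketch below is a simplification (the paper uses multi-step attacks), and all names are placeholders.

```python
# Simplified ALOE-style loss: one FGSM step stands in for PGD.
import torch
import torch.nn.functional as F

def uniform_ce(logits):
    # cross-entropy against a uniform label distribution
    return -F.log_softmax(logits, dim=1).mean()

def aloe_style_loss(model, x_in, y_in, x_out, eps=0.01):
    def perturb(x, loss_fn):
        x = x.clone().requires_grad_(True)
        grad, = torch.autograd.grad(loss_fn(model(x)), x)
        return (x + eps * grad.sign()).detach()
    adv_in = perturb(x_in, lambda logits: F.cross_entropy(logits, y_in))
    adv_out = perturb(x_out, uniform_ce)
    return F.cross_entropy(model(adv_in), y_in) + uniform_ce(model(adv_out))
```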
arXiv Detail & Related papers (2020-03-21T17:46:28Z)