Adversarial sample generation and training using geometric masks for
accurate and resilient license plate character recognition
- URL: http://arxiv.org/abs/2311.12857v1
- Date: Wed, 25 Oct 2023 01:17:07 GMT
- Title: Adversarial sample generation and training using geometric masks for
accurate and resilient license plate character recognition
- Authors: Bishal Shrestha, Griwan Khakurel, Kritika Simkhada, Badri Adhikari
- Abstract summary: This work develops a resilient method to recognize license plate characters.
As a first step, we extracted 1057 character images from 160 Nepalese vehicles and trained several standard deep convolutional neural networks to obtain 99.5% character classification accuracy.
Next, we enriched our dataset by generating and adding geometrically masked images, retrained our models, and investigated the models' predictions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reading dirty license plates accurately in moving vehicles is challenging for
automatic license plate recognition systems. Moreover, license plates are often
deliberately tampered with to evade police apprehension. Offenders typically
know how to fool existing recognition systems by making minor, unnoticeable
changes to the plates. Designing and developing
deep learning methods resilient to such real-world 'attack' practices remains
an active research problem. As a solution, this work develops a resilient
method to recognize license plate characters. As a first step, we extracted
1057 character images from 160 Nepalese vehicles and trained several standard
deep convolutional neural networks to obtain 99.5% character classification
accuracy. On adversarial images generated to simulate malicious tampering,
however, our model's accuracy dropped to 25%. Next, we enriched our dataset by
generating and adding geometrically masked images, retrained our models, and
investigated the models' predictions. The proposed approach of training with
generated adversarial images helped our adversarial attack-aware license plate
character recognition (AA-LPCR) model achieve an accuracy of 99.7%. This
near-perfect accuracy demonstrates that the proposed idea of random geometric
masking is highly effective for improving the accuracy of license plate
recognition models. Furthermore, by performing interpretability studies to
understand why our models work, we identify and highlight attack-prone regions
in the input character images. In sum, although Nepal's embossed license plate
detection systems are vulnerable to malicious attacks, our findings suggest
that these systems can be upgraded to close to 100% resilience.
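To make the data-enrichment step concrete, the sketch below shows one plausible way to overlay random geometric masks on character images and append the masked copies to the training set. It is a minimal illustration assuming grayscale NumPy image arrays; the mask shapes, counts, thicknesses, and fill values used by the authors are not specified here and are assumptions for illustration only.

```python
import numpy as np

def random_geometric_mask(image, rng, max_masks=3, max_thickness=4):
    """Overlay a few random straight bars on a grayscale character image.

    Bar orientation, position, and thickness are drawn at random to mimic
    scratches, tape strips, or paint marks (illustrative parameters only).
    """
    h, w = image.shape
    masked = image.copy()
    for _ in range(int(rng.integers(1, max_masks + 1))):
        thickness = int(rng.integers(1, max_thickness + 1))
        if rng.random() < 0.5:
            row = int(rng.integers(0, h - thickness))  # horizontal bar
            masked[row:row + thickness, :] = 0
        else:
            col = int(rng.integers(0, w - thickness))  # vertical bar
            masked[:, col:col + thickness] = 0
    return masked

def enrich_dataset(images, labels, seed=0):
    """Append one masked variant per image, keeping the original labels."""
    rng = np.random.default_rng(seed)
    masked = np.stack([random_geometric_mask(img, rng) for img in images])
    return np.concatenate([images, masked]), np.concatenate([labels, labels])
```

Retraining a standard CNN classifier on the enriched set corresponds to the adversarial attack-aware training described in the abstract; the enrichment step itself is model-agnostic.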
Related papers
- Undermining Image and Text Classification Algorithms Using Adversarial Attacks [0.0]
Our study addresses the gap by training various machine learning models and using GANs and SMOTE to generate additional data points aimed at attacking text classification models.
Our experiments reveal a significant vulnerability in classification models. Specifically, we observe a 20% decrease in accuracy for the top-performing text classification models post-attack, along with a 30% decrease in facial recognition accuracy.
arXiv Detail & Related papers (2024-11-03T18:44:28Z)
- Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy [65.80757820884476]
We expose a critical yet underexplored vulnerability in the deployment of unlearning systems.
We present a threat model where an attacker can degrade model accuracy by submitting adversarial unlearning requests for data not present in the training set.
We evaluate various verification mechanisms to detect the legitimacy of unlearning requests and reveal the challenges in verification.
arXiv Detail & Related papers (2024-10-12T16:47:04Z)
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with machine learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been considered a challenging property to encode into neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- Adaptive White-Box Watermarking with Self-Mutual Check Parameters in Deep Neural Networks [14.039159907367985]
Fragile watermarking is a technique used to identify tampering in AI models.
Previous methods have faced challenges including risks of omission, additional information transmission, and inability to locate tampering precisely.
We propose a method for detecting tampered parameters and bits, which can be used to detect, locate, and restore parameters that have been tampered with.
arXiv Detail & Related papers (2023-08-22T07:21:06Z)
- Benchmarking Probabilistic Deep Learning Methods for License Plate Recognition [11.772116128679116]
We propose to model the prediction uncertainty for license plate recognition explicitly.
Experiments on synthetic noisy or blurred low-resolution images show that the predictive uncertainty reliably finds wrong predictions.
arXiv Detail & Related papers (2023-02-02T21:37:42Z)
- Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.
Considering that AI-based face manipulation often leads to high-frequency artifacts that can be easily spotted by models yet difficult to generalize, we propose a new adversarial training method that attempts to blur out these specific artifacts.
arXiv Detail & Related papers (2021-03-25T02:20:08Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
- Detecting Patch Adversarial Attacks with Image Residuals [9.169947558498535]
A discriminator is trained to distinguish between clean and adversarial samples.
We show that the obtained residuals act as a digital fingerprint for adversarial attacks.
Results show that the proposed detection method generalizes to previously unseen, stronger attacks.
arXiv Detail & Related papers (2020-02-28T01:28:22Z)
- Adversarial Attacks on Convolutional Neural Networks in Facial Recognition Domain [2.4704085162861693]
Adversarial attacks that render Deep Neural Network (DNN) classifiers vulnerable in real life represent a serious threat in autonomous vehicles, malware filters, or biometric authentication systems.
We apply Fast Gradient Sign Method to introduce perturbations to a facial image dataset and then test the output on a different classifier.
We craft a variety of different black-box attack algorithms on a facial image dataset assuming minimal adversarial knowledge.
arXiv Detail & Related papers (2020-01-30T00:25:05Z)
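For reference, the Fast Gradient Sign Method (FGSM) named in the facial-recognition entry above can be sketched in a few lines. This is a generic, hedged PyTorch illustration of the standard algorithm, not the referenced paper's exact setup; the model, loss, and epsilon value are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return a copy of x shifted by epsilon along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # one signed-gradient step, clamped back to the valid pixel range
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```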
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.