Adversarial sample generation and training using geometric masks for
accurate and resilient license plate character recognition
- URL: http://arxiv.org/abs/2311.12857v1
- Date: Wed, 25 Oct 2023 01:17:07 GMT
- Title: Adversarial sample generation and training using geometric masks for
accurate and resilient license plate character recognition
- Authors: Bishal Shrestha, Griwan Khakurel, Kritika Simkhada, Badri Adhikari
- Abstract summary: This work develops a resilient method to recognize license plate characters.
As a first step, we extracted 1057 character images from 160 Nepalese vehicles and trained several standard deep convolutional neural networks, reaching 99.5% character classification accuracy.
Next, we enriched our dataset by generating and adding geometrically masked images, retrained our models, and investigated the models' predictions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reading dirty license plates accurately on moving vehicles is challenging for
automatic license plate recognition systems. Moreover, license plates are often
intentionally tampered with to evade police apprehension. Such groups and
individuals usually know how to fool existing recognition systems by making
minor, unnoticeable changes to the plate. Designing and developing deep
learning methods resilient to such real-world 'attack' practices remains an
active research problem. As a solution, this work develops a resilient method
to recognize license plate characters. As a first step, we extracted 1057
character images from 160 Nepalese vehicles and trained several standard deep
convolutional neural networks, obtaining 99.5% character classification
accuracy. On adversarial images generated to simulate malicious tampering,
however, our model's accuracy dropped to 25%. Next, we enriched our dataset by
generating and adding geometrically masked images, retrained our models, and
investigated the models' predictions. The proposed approach of training with
generated adversarial images helped our adversarial attack-aware license plate
character recognition (AA-LPCR) model achieve an accuracy of 99.7%. This
near-perfect accuracy demonstrates that the proposed idea of random geometric
masking is highly effective for improving the accuracy of license plate
recognition models. Furthermore, by performing interpretability studies to
understand why our models work, we identify and highlight attack-prone regions
in the input character images. In sum, although Nepal's embossed license plate
detection systems are vulnerable to malicious attacks, our findings suggest
that these systems can be upgraded to close to 100% resilience.
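The geometric masking augmentation described in the abstract can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation: the mask geometry (thin random lines), the fill value, and the function names are assumptions.

```python
import numpy as np

def random_geometric_mask(img, rng=None, n_masks=1):
    """Overlay random line-shaped masks on a grayscale character image.

    Hypothetical sketch of random geometric masking: the line shape,
    thickness range, and zero fill value are illustrative assumptions.
    """
    rng = np.random.default_rng(rng)
    out = img.copy()
    h, w = out.shape
    for _ in range(n_masks):
        # Pick two random endpoints and rasterize a thick line between them.
        x0, x1 = rng.integers(0, w, size=2)
        y0, y1 = rng.integers(0, h, size=2)
        steps = max(h, w) * 2
        xs = np.linspace(x0, x1, steps).astype(int)
        ys = np.linspace(y0, y1, steps).astype(int)
        thickness = int(rng.integers(1, 3))
        for dx in range(-thickness, thickness + 1):
            for dy in range(-thickness, thickness + 1):
                out[np.clip(ys + dy, 0, h - 1), np.clip(xs + dx, 0, w - 1)] = 0
    return out

def augment(images, labels, copies=2, seed=0):
    """Enrich a training set by appending masked copies of each image."""
    rng = np.random.default_rng(seed)
    aug_imgs = [random_geometric_mask(im, rng) for im in images for _ in range(copies)]
    aug_lbls = [lb for lb in labels for _ in range(copies)]
    return images + aug_imgs, labels + aug_lbls
```

Retraining a classifier on the enriched set returned by `augment` is the "adversarial attack-aware" training step the abstract reports.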
Related papers
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- Adaptive White-Box Watermarking with Self-Mutual Check Parameters in Deep Neural Networks [14.039159907367985]
Fragile watermarking is a technique used to identify tampering in AI models.
Previous methods have faced challenges, including risks of omission, additional information transmission, and an inability to locate tampering precisely.
We propose a method for detecting tampered parameters and bits, which can be used to detect, locate, and restore parameters that have been tampered with.
arXiv Detail & Related papers (2023-08-22T07:21:06Z)
- MMNet: Multi-Collaboration and Multi-Supervision Network for Sequential Deepfake Detection [81.59191603867586]
Sequential deepfake detection aims to identify forged facial regions in the correct sequence for recovery.
Recovering forged images requires knowledge of the manipulation model to implement inverse transformations.
We propose the Multi-Collaboration and Multi-Supervision Network (MMNet), which handles various spatial scales and sequential permutations in forged face images.
arXiv Detail & Related papers (2023-07-06T02:32:08Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of a well-trained model with ID data.
Our method uses a mask to identify memorized atypical samples, then fine-tunes or prunes the model with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- Benchmarking Probabilistic Deep Learning Methods for License Plate Recognition [11.772116128679116]
We propose to explicitly model the prediction uncertainty for license plate recognition.
Experiments on synthetic noisy or blurred low-resolution images show that the predictive uncertainty reliably identifies wrong predictions.
arXiv Detail & Related papers (2023-02-02T21:37:42Z)
- Simulated Adversarial Testing of Face Recognition Models [53.10078734154151]
We propose a framework for learning how to test machine learning algorithms using simulators in an adversarial manner.
We are the first to show that weaknesses of models trained on real data can be discovered using simulated samples.
arXiv Detail & Related papers (2021-06-08T17:58:10Z)
- Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving generalization to both unseen facial forgeries and unseen image/video qualities.
Considering that AI-based face manipulation often leads to high-frequency artifacts that are easily spotted by models yet difficult to generalize, we propose a new adversarial training method that attempts to blur out these specific artifacts.
arXiv Detail & Related papers (2021-03-25T02:20:08Z)
- Robust SleepNets [7.23389716633927]
In this study, we investigate eye-closedness detection to prevent vehicle accidents related to driver disengagement and drowsiness.
We develop two models to detect eye closedness: the first on eye images and the second on face images.
We adversarially attack the models with the Projected Gradient Descent, Fast Gradient Sign, and DeepFool methods and report the adversarial success rate.
arXiv Detail & Related papers (2021-02-24T20:48:13Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to breakthroughs in generative adversarial networks (GANs).
Yet the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution to deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
- Detecting Patch Adversarial Attacks with Image Residuals [9.169947558498535]
A discriminator is trained to distinguish between clean and adversarial samples.
We show that the obtained residuals act as a digital fingerprint for adversarial attacks.
Results show that the proposed detection method generalizes to previously unseen, stronger attacks.
arXiv Detail & Related papers (2020-02-28T01:28:22Z)
- Adversarial Attacks on Convolutional Neural Networks in Facial Recognition Domain [2.4704085162861693]
Adversarial attacks that render Deep Neural Network (DNN) classifiers vulnerable in real life represent a serious threat in autonomous vehicles, malware filters, and biometric authentication systems.
We apply the Fast Gradient Sign Method to introduce perturbations to a facial image dataset and then test the output on a different classifier.
We craft a variety of black-box attack algorithms on a facial image dataset, assuming minimal adversarial knowledge.
arXiv Detail & Related papers (2020-01-30T00:25:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.