A Black-Box Attack on Optical Character Recognition Systems
- URL: http://arxiv.org/abs/2208.14302v1
- Date: Tue, 30 Aug 2022 14:36:27 GMT
- Title: A Black-Box Attack on Optical Character Recognition Systems
- Authors: Samet Bayram and Kenneth Barner
- Abstract summary: Adversarial machine learning is an emerging area showing the vulnerability of deep learning models.
In this paper, we propose a simple yet efficient attack method, Efficient Combinatorial Black-box Adversarial Attack, on binary image classifiers.
We validate the attack technique on two different data sets and three classification networks, demonstrating its efficiency.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Adversarial machine learning is an emerging area that exposes the
vulnerability of deep learning models. Exploring attack methods that challenge
state-of-the-art artificial intelligence (A.I.) models is an area of critical
concern. The reliability and robustness of such A.I. models are major concerns
given the increasing number of effective adversarial attack methods.
Classification tasks are particularly vulnerable to adversarial attacks. The
majority of attack strategies are developed for colored or grayscale images.
Consequently, adversarial attacks on binary image recognition systems have not
been sufficiently studied. Binary images are simple single-channel signals with
only two possible pixel values. This simplicity gives binary images a
significant advantage over colored and grayscale images, namely computational
efficiency. Moreover, most optical character recognition systems (O.C.R.s),
such as handwritten character recognition, plate number identification, and
bank check recognition systems, use binary images or binarization in their
processing steps. In this paper, we propose a simple yet efficient attack
method, the Efficient Combinatorial Black-box Adversarial Attack, on binary
image classifiers. We validate the attack technique on two different data sets
and three classification networks, demonstrating its efficiency. Furthermore,
we compare our proposed method with state-of-the-art methods regarding
advantages, disadvantages, and applicability.
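The abstract does not detail the Efficient Combinatorial Black-box Adversarial Attack itself, but the general setting it describes (querying a classifier as a black box and perturbing a binary image, where the only possible perturbation is flipping a pixel between 0 and 1) can be sketched with a simple greedy baseline. This is an illustrative sketch of that generic idea, not the paper's method; the `predict` function, image size, and stopping criteria are all hypothetical.

```python
import numpy as np

def greedy_binary_attack(predict, image, true_label, max_flips=30):
    """Illustrative greedy black-box attack on a binary image classifier.

    `predict` is a black box returning class probabilities; we only query
    it, never inspect gradients or weights. Pixels take values 0 or 1, so
    the only perturbation available is a bit flip. This is a generic
    sketch, not the paper's combinatorial attack.
    """
    adv = image.copy()
    h, w = adv.shape
    for _ in range(max_flips):
        probs = predict(adv)
        if probs.argmax() != true_label:
            return adv  # misclassified: attack succeeded
        best_drop, best_pos = 0.0, None
        # Try flipping each pixel; keep the flip that most lowers the
        # classifier's confidence in the true label.
        for i in range(h):
            for j in range(w):
                adv[i, j] ^= 1
                drop = probs[true_label] - predict(adv)[true_label]
                adv[i, j] ^= 1  # undo the trial flip
                if drop > best_drop:
                    best_drop, best_pos = drop, (i, j)
        if best_pos is None:
            break  # no single flip reduces confidence
        adv[best_pos] ^= 1
    return adv
```

Each outer iteration costs one query per pixel, which is exactly the query inefficiency that combinatorial methods such as the one proposed here aim to improve on.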
Related papers
- AICAttack: Adversarial Image Captioning Attack with Attention-Based
Optimization [13.99541041673674]
We present a novel adversarial attack strategy, which we call AICAttack.
Operating within a black-box attack scenario, our algorithm requires no access to the target model's architecture, parameters, or gradient information.
We demonstrate AICAttack's effectiveness through extensive experiments on benchmark datasets with multiple victim models.
arXiv Detail & Related papers (2024-02-19T08:27:23Z) - Cross-Modality Perturbation Synergy Attack for Person Re-identification [70.44850060727474]
The main challenge in cross-modality ReID lies in effectively dealing with visual differences between different modalities.
Existing attack methods have primarily focused on the characteristics of the visible image modality.
This study proposes a universal perturbation attack specifically designed for cross-modality ReID.
arXiv Detail & Related papers (2024-01-18T15:56:23Z) - Dual Adversarial Resilience for Collaborating Robust Underwater Image
Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z) - Human-imperceptible, Machine-recognizable Images [76.01951148048603]
Software engineers face a major conflict between developing better AI systems and distancing themselves from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm in which images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm renders encrypted images human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z) - A Perturbation Resistant Transformation and Classification System for
Deep Neural Networks [0.685316573653194]
Deep convolutional neural networks accurately classify a diverse range of natural images, but can be easily deceived by carefully designed perturbations.
In this paper, we design a multi-pronged system combining training, unbounded input transformation, and image ensembles that is attack-agnostic and not easily estimated.
arXiv Detail & Related papers (2022-08-25T02:58:47Z) - Deep Bayesian Image Set Classification: A Defence Approach against
Adversarial Attacks [32.48820298978333]
Deep neural networks (DNNs) are susceptible to being fooled, often with high confidence, by an adversary.
In practice, the vulnerability of deep learning systems to carefully perturbed images, known as adversarial examples, poses a dire security threat in physical-world applications.
We propose a robust deep Bayesian image set classification as a defence framework against a broad range of adversarial attacks.
arXiv Detail & Related papers (2021-08-23T14:52:44Z) - FACESEC: A Fine-grained Robustness Evaluation Framework for Face
Recognition Systems [49.577302852655144]
FACESEC is a framework for fine-grained robustness evaluation of face recognition systems.
We study five face recognition systems in both closed-set and open-set settings.
We find that accurate knowledge of neural architecture is significantly more important than knowledge of the training data in black-box attacks.
arXiv Detail & Related papers (2021-04-08T23:00:25Z) - MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z) - Adversarial Attacks on Binary Image Recognition Systems [78.78811131936622]
We study adversarial attacks on models for binary (i.e. black and white) image classification.
In contrast to colored and grayscale images, the search space of attacks on binary images is extremely restricted.
We introduce a new attack algorithm called SCAR, designed to fool classifiers of binary images.
arXiv Detail & Related papers (2020-10-22T14:57:42Z) - Encoding Power Traces as Images for Efficient Side-Channel Analysis [0.0]
Side-Channel Attacks (SCAs) are a powerful method to attack implementations of cryptographic algorithms.
Deep Learning (DL) methods have been introduced to simplify SCAs and simultaneously lower the number of side-channel traces required for a successful attack.
We present a novel technique to interpret 1D traces as 2D images.
arXiv Detail & Related papers (2020-04-23T08:00:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.