Adversarial Attacks on Binary Image Recognition Systems
- URL: http://arxiv.org/abs/2010.11782v1
- Date: Thu, 22 Oct 2020 14:57:42 GMT
- Title: Adversarial Attacks on Binary Image Recognition Systems
- Authors: Eric Balkanski, Harrison Chase, Kojin Oshiba, Alexander Rilee, Yaron
Singer, Richard Wang
- Abstract summary: We study adversarial attacks on models for binary (i.e. black and white) image classification.
In contrast to colored and grayscale images, the search space of attacks on binary images is extremely restricted.
We introduce a new attack algorithm called SCAR, designed to fool classifiers of binary images.
- Score: 78.78811131936622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We initiate the study of adversarial attacks on models for binary (i.e. black
and white) image classification. Although there has been a great deal of work
on attacking models for colored and grayscale images, little is known about
attacks on models for binary images. Models trained to classify binary images
are used in text recognition applications such as check processing, license
plate recognition, invoice processing, and many others. In contrast to colored
and grayscale images, the search space of attacks on binary images is extremely
restricted and noise cannot be hidden with minor perturbations in each pixel.
Thus, the optimization landscape of attacks on binary images introduces new
fundamental challenges.
In this paper we introduce a new attack algorithm called SCAR, designed to
fool classifiers of binary images. We show that SCAR significantly outperforms
existing $L_0$ attacks applied to the binary setting and use it to demonstrate
the vulnerability of real-world text recognition systems. SCAR's strong
performance in practice contrasts with the existence of classifiers that are
provably robust to large perturbations. In many cases, altering a single pixel
is sufficient to trick Tesseract, a popular open-source text recognition
system, into misclassifying a word as a different word in the English
dictionary. We also license software from providers of check processing
systems that serve most of the major US banks and demonstrate the
vulnerability of check recognition for mobile deposits. These systems are
substantially harder to fool since they independently classify the amount
written both in digits and in words.
Nevertheless, we generalize SCAR to design attacks that fool state-of-the-art
check processing systems using unnoticeable perturbations that lead to
misclassification of deposit amounts. Consequently, such attacks constitute a
powerful vector for financial fraud.
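To make the setting concrete, below is a minimal sketch of a greedy pixel-flip (L_0-style) attack on a binary image classifier. It illustrates the restricted search space described in the abstract (each pixel can only be flipped, not perturbed slightly); it is not the authors' SCAR implementation, and the `predict_proba` interface and flip budget are assumptions made for the example.

```python
# Minimal illustrative sketch of a greedy pixel-flip (L_0) attack on a binary
# image classifier. NOT the authors' SCAR algorithm; `predict_proba` and the
# flip budget are hypothetical assumptions for this example.
import numpy as np

def greedy_pixel_flip_attack(image, true_label, predict_proba, budget=10):
    """Flip at most `budget` pixels of a binary image to reduce the
    classifier's confidence in the true label.

    image:         2D numpy array with values in {0, 1}
    true_label:    integer index of the correct class
    predict_proba: callable mapping an image to a vector of class probabilities
    """
    adv = image.copy()
    for _ in range(budget):
        best_score = predict_proba(adv)[true_label]
        best_pixel = None
        # Try flipping each pixel and keep the flip that hurts the true class most.
        for idx in np.ndindex(adv.shape):
            adv[idx] = 1 - adv[idx]                  # flip pixel
            score = predict_proba(adv)[true_label]
            if score < best_score:
                best_score, best_pixel = score, idx
            adv[idx] = 1 - adv[idx]                  # undo flip
        if best_pixel is None:                       # no single flip helps; stop early
            break
        adv[best_pixel] = 1 - adv[best_pixel]        # commit the best flip
        if np.argmax(predict_proba(adv)) != true_label:
            break                                    # misclassification achieved
    return adv
```

Because a binary pixel admits only one alternative value, the attack reduces to a purely combinatorial choice of which pixels to flip, which is why greedy L_0-style search is a natural baseline in this setting.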
Related papers
- Neuromorphic Synergy for Video Binarization [54.195375576583864]
Bimodal objects serve as a visual form to embed information that can be easily recognized by vision systems.
Neuromorphic cameras offer new capabilities for alleviating motion blur, but it is non-trivial to first de-blur and then binarize the images in a real-time manner.
We propose an event-based binary reconstruction method that leverages the prior knowledge of the bimodal target's properties to perform inference independently in both event space and image space.
We also develop an efficient integration method to propagate this binary image to high frame rate binary video.
arXiv Detail & Related papers (2024-02-20T01:43:51Z) - Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt the image classification system, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z) - I See Dead People: Gray-Box Adversarial Attack on Image-To-Text Models [0.0]
We present a gray-box adversarial attack on image-to-text models, both untargeted and targeted.
Our attack operates in a gray-box manner, requiring no knowledge about the decoder module.
We also show that our attacks fool the popular open-source platform Hugging Face.
arXiv Detail & Related papers (2023-06-13T07:35:28Z) - Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict is exposed for software engineers between developing better AI systems and keeping their distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm in which images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm can ensure the encrypted images have become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z) - Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z) - A Black-Box Attack on Optical Character Recognition Systems [0.0]
Adversarial machine learning is an emerging area showing the vulnerability of deep learning models.
In this paper, we propose a simple yet efficient attack method, Efficient Combinatorial Black-box Adversarial Attack, on binary image classifiers.
We validate the efficiency of the attack technique on two data sets and three classification networks.
arXiv Detail & Related papers (2022-08-30T14:36:27Z) - Detecting Adversaries, yet Faltering to Noise? Leveraging Conditional Variational AutoEncoders for Adversary Detection in the Presence of Noisy Images [0.7734726150561086]
Conditional Variational AutoEncoders (CVAE) are surprisingly good at detecting imperceptible image perturbations.
We show how CVAEs can be effectively used to detect adversarial attacks on image classification networks.
arXiv Detail & Related papers (2021-11-28T20:36:27Z) - MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z) - Two-stage generative adversarial networks for document image binarization with color noise and background removal [7.639067237772286]
We propose a two-stage color document image enhancement and binarization method using generative adversarial neural networks.
In the first stage, four color-independent adversarial networks are trained to extract color foreground information from an input image.
In the second stage, two independent adversarial networks with global and local features are trained for image binarization of documents of variable size.
arXiv Detail & Related papers (2020-10-20T07:51:50Z)