When Vision Fails: Text Attacks Against ViT and OCR
- URL: http://arxiv.org/abs/2306.07033v1
- Date: Mon, 12 Jun 2023 11:26:08 GMT
- Title: When Vision Fails: Text Attacks Against ViT and OCR
- Authors: Nicholas Boucher, Jenny Blessing, Ilia Shumailov, Ross Anderson,
Nicolas Papernot
- Abstract summary: We show that text-based machine learning models are still vulnerable to visual adversarial examples encoded as text.
We show how a genetic algorithm can be used to generate visual adversarial examples in a black-box setting.
We demonstrate the effectiveness of these attacks in the real world by creating adversarial examples against production models published by Facebook, Microsoft, IBM, and Google.
- Score: 25.132777620934768
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While text-based machine learning models that operate on visual inputs of
rendered text have become robust against a wide range of existing attacks, we
show that they are still vulnerable to visual adversarial examples encoded as
text. We use the Unicode functionality of combining diacritical marks to
manipulate encoded text so that small visual perturbations appear when the text
is rendered. We show how a genetic algorithm can be used to generate visual
adversarial examples in a black-box setting, and conduct a user study to
establish that the model-fooling adversarial examples do not affect human
comprehension. We demonstrate the effectiveness of these attacks in the real
world by creating adversarial examples against production models published by
Facebook, Microsoft, IBM, and Google.
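To make the mechanism concrete, below is a minimal Python sketch of the encoding trick the abstract describes: appending Unicode combining diacritical marks (U+0300-U+036F) changes the encoded string substantially while the rendered text picks up only small visual perturbations. The `toy_score` function is a hypothetical stand-in for querying the victim model, and the search loop is a loose hill-climbing approximation of the paper's genetic algorithm, not its actual implementation.

```python
import random
import unicodedata

# Unicode combining diacritical marks block (U+0300-U+036F).
COMBINING_MARKS = [chr(cp) for cp in range(0x0300, 0x0370)]

def perturb(text: str, rng: random.Random) -> str:
    """Append one random combining diacritic after a random character.
    The encoded string changes, but rendering shows only a small mark."""
    chars = list(text)
    pos = rng.randrange(len(chars))
    chars[pos] += rng.choice(COMBINING_MARKS)
    return "".join(chars)

def attack(text: str, score, rounds: int = 20, pop: int = 8) -> str:
    """Toy black-box search in the spirit of a genetic algorithm:
    mutate a population each round and keep the best-scoring candidate."""
    rng = random.Random(0)
    best = text
    for _ in range(rounds):
        candidates = [perturb(best, rng) for _ in range(pop)]
        best = max(candidates + [best], key=score)
    return best

if __name__ == "__main__":
    # Hypothetical stand-in for the victim model's loss: here we just
    # count combining marks; a real attack would query OCR/ViT outputs.
    toy_score = lambda s: sum(unicodedata.combining(c) > 0 for c in s)
    adv = attack("send money to alice", toy_score, rounds=5)
    print(repr(adv))   # the encoding differs visibly
    print(adv)         # renders close to the original string
```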
Related papers
- SA-Attack: Improving Adversarial Transferability of Vision-Language Pre-training Models via Self-Augmentation [56.622250514119294]
In contrast to white-box adversarial attacks, transfer attacks are more reflective of real-world scenarios.
We propose a self-augment-based transfer attack method, termed SA-Attack.
arXiv Detail & Related papers (2023-12-08T09:08:50Z)
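The one-line summary leaves the augmentation scheme unspecified, so the sketch below shows only the generic recipe such methods build on: averaging gradients over randomly augmented copies of the input to improve black-box transferability. The resize-and-pad augmentation, the plain-classifier setting, and all names are assumptions, not SA-Attack's actual design.

```python
import torch
import torch.nn.functional as F

def augment(x: torch.Tensor) -> torch.Tensor:
    """Hypothetical self-augmentation for 224x224 inputs: randomly
    shrink the image, then pad it back to its original size."""
    size = int(torch.randint(200, 224, (1,)))
    small = F.interpolate(x, size=(size, size), mode="bilinear",
                          align_corners=False)
    pad = 224 - size
    left = int(torch.randint(0, pad + 1, (1,)))
    top = int(torch.randint(0, pad + 1, (1,)))
    return F.pad(small, (left, pad - left, top, pad - top))

def transfer_attack(model, x, y, eps=8 / 255, steps=10, copies=4):
    """Iterative FGSM with the loss averaged over augmented copies,
    a common recipe for more transferable adversarial examples."""
    x_adv = x.clone().detach()
    alpha = eps / steps
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = sum(F.cross_entropy(model(augment(x_adv)), y)
                   for _ in range(copies)) / copies
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay within the eps ball
    return x_adv.clamp(0, 1)
```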
- I See Dead People: Gray-Box Adversarial Attack on Image-To-Text Models [0.0]
We present a gray-box adversarial attack on image-to-text models, both untargeted and targeted.
Our attack operates in a gray-box manner, requiring no knowledge about the decoder module.
We also show that our attacks fool the popular open-source platform Hugging Face.
arXiv Detail & Related papers (2023-06-13T07:35:28Z)
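One way such a gray-box attack can work, sketched here under assumptions (the tiny stand-in encoder and all names are hypothetical): perturb the image until the encoder's embedding drifts away from the clean one, so the unseen decoder must decode from a corrupted representation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for the visual encoder of an image-to-text model;
# in the gray-box setting we can run it but know nothing about the decoder.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def gray_box_attack(x: torch.Tensor, eps: float = 4 / 255, steps: int = 20):
    """Untargeted: maximize the distance between the adversarial and the
    clean embedding; the decoder then sees a corrupted representation."""
    with torch.no_grad():
        clean_emb = encoder(x)
    x_adv, alpha = x.clone(), eps / 4
    for _ in range(steps):
        x_adv.requires_grad_(True)
        dist = F.mse_loss(encoder(x_adv), clean_emb)
        grad = torch.autograd.grad(dist, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()  # gradient ascent
        x_adv = x + (x_adv - x).clamp(-eps, eps)
    return x_adv.clamp(0, 1)

x = torch.rand(1, 3, 224, 224)                 # toy input image
print((gray_box_attack(x) - x).abs().max())    # perturbation stays <= eps
```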
- Character-Aware Models Improve Visual Text Rendering [57.19915686282047]
Current image generation models struggle to reliably produce well-formed visual text.
Character-aware models provide large gains on a novel spelling task.
Our models set a much higher state-of-the-art on visual spelling, with 30+ point accuracy gains over competitors on rare words.
arXiv Detail & Related papers (2022-12-20T18:59:23Z)
- Vision-Language Pre-Training for Boosting Scene Text Detectors [57.08046351495244]
We specifically adapt vision-language joint learning for scene text detection.
We propose to learn contextualized, joint representations through vision-language pre-training.
The pre-trained model is able to produce more informative representations with richer semantics.
arXiv Detail & Related papers (2022-04-29T03:53:54Z)
- Language Matters: A Weakly Supervised Pre-training Approach for Scene Text Detection and Spotting [69.77701325270047]
This paper presents a weakly supervised pre-training method that can acquire effective scene text representations.
Our network consists of an image encoder and a character-aware text encoder that extract visual and textual features.
Experiments show that our pre-trained model improves F-score by +2.5% and +4.8% when its weights are transferred to other text detection and spotting networks.
arXiv Detail & Related papers (2022-03-08T08:10:45Z)
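The summary names the two components but not the pre-training objective. As a hedged illustration only, the sketch below pairs a hypothetical character-aware text encoder with a CLIP-style contrastive loss; the loss choice and all details are assumptions, not necessarily the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharAwareTextEncoder(nn.Module):
    """Hypothetical character-aware encoder: it embeds individual
    characters rather than subwords, so spelling-level structure
    stays visible to the model."""
    def __init__(self, n_chars: int = 128, dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(n_chars, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        out, _ = self.rnn(self.embed(char_ids))   # (B, L, dim)
        return F.normalize(out[:, -1], dim=-1)    # (B, dim)

def contrastive_loss(img_emb, txt_emb, temperature: float = 0.07):
    """CLIP-style InfoNCE: matched image/text pairs attract, all
    other pairings in the batch repel."""
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(len(logits))
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2
```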
- Bad Characters: Imperceptible NLP Attacks [16.357959724298745]
A class of adversarial examples can be used to attack text-based models in a black-box setting.
We find that with a single imperceptible encoding injection an attacker can significantly reduce the performance of vulnerable models.
Our attacks work against currently-deployed commercial systems, including those produced by Microsoft and Google.
arXiv Detail & Related papers (2021-06-18T03:42:56Z)
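A minimal sketch of one such imperceptible injection, using zero-width Unicode characters (the paper also covers homoglyphs, reorderings, and deletions):

```python
# Zero-width characters: invisible when rendered, but they change the
# encoded string that tokenizers and string comparisons operate on.
ZWSP, ZWNJ, ZWJ = "\u200b", "\u200c", "\u200d"

def inject_invisible(text: str, position: int, char: str = ZWSP) -> str:
    """Insert a zero-width character; the rendered text looks identical,
    but the model receives a different input sequence."""
    return text[:position] + char + text[position:]

clean = "free offer"
adv = inject_invisible(clean, 2)
print(adv == clean)            # False: the encodings differ
print(len(adv), len(clean))    # 11 vs 10 code points
print(adv)                     # renders indistinguishably from the original
```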
- Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting [71.57324258813674]
Convolutional neural networks (CNNs) have been shown to reach super-human performance in visual recognition tasks.
However, CNNs can easily be fooled by adversarial examples, i.e., maliciously-crafted images that force the networks to predict an incorrect output.
This paper extensively explores the detection of adversarial examples via image transformations and proposes a novel methodology.
arXiv Detail & Related papers (2021-01-27T14:50:41Z)
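The voting component is simple to sketch in isolation. The `classify` and transform callables below are hypothetical placeholders, and the paper's full methodology additionally employs defense perturbations:

```python
from collections import Counter
from typing import Callable, List

def looks_adversarial(x, classify: Callable, transforms: List[Callable],
                      min_agreement: float = 0.8) -> bool:
    """Flag an input when predictions on transformed copies disagree:
    benign inputs tend to keep their label under small transformations,
    while adversarial perturbations are brittle to them."""
    votes = Counter(classify(t(x)) for t in transforms)
    votes[classify(x)] += 1
    _, top_count = votes.most_common(1)[0]
    agreement = top_count / (len(transforms) + 1)
    return agreement < min_agreement
```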
- Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding [80.3811072650087]
We study natural language watermarking as a defense to help better mark and trace the provenance of text.
We introduce the Adversarial Watermarking Transformer (AWT) with a jointly trained encoder-decoder and adversarial training.
AWT is the first end-to-end model to hide data in text by automatically learning -- without ground truth -- word substitutions along with their locations.
arXiv Detail & Related papers (2020-09-07T11:01:24Z)
- Visual Attack and Defense on Text [18.513619521807286]
Modifying characters of a piece of text into visually similar ones often appears in spam in order to fool inspection systems and other filters.
We apply a vision-based model and adversarial training to defend against the attack without losing the ability to understand normal text.
arXiv Detail & Related papers (2020-08-07T15:44:58Z)
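For illustration, here is a minimal sketch of the attack side being defended against: cross-script homoglyph substitution (the mapping table is illustrative, not taken from the paper):

```python
# Latin -> Cyrillic homoglyphs: the code points differ but the rendered
# glyphs are near-identical, so keyword filters miss substituted words.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e",
              "p": "\u0440", "c": "\u0441", "x": "\u0445"}

def visually_perturb(word: str) -> str:
    """Swap characters for visually similar ones from another script."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in word)

spam = visually_perturb("cheap pills")
print(spam)                    # looks like "cheap pills" when rendered
print(spam == "cheap pills")   # False: different underlying code points
```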
- BAE: BERT-based Adversarial Examples for Text Classification [9.188318506016898]
We present BAE, a black-box attack for generating adversarial examples using contextual perturbations from a BERT masked language model.
We show that BAE performs a stronger attack, in addition to generating adversarial examples with improved grammaticality and semantic coherence as compared to prior work.
arXiv Detail & Related papers (2020-04-04T16:25:48Z)
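The contextual-replacement primitive behind BAE is easy to demonstrate with the Hugging Face transformers fill-mask pipeline. This sketch covers only the candidate-generation step; the full attack also scores each candidate against a victim classifier, which is omitted here:

```python
from transformers import pipeline

# A BERT masked language model proposes context-appropriate replacements
# for one token; a BAE-style attack keeps candidates that flip the victim.
fill = pipeline("fill-mask", model="bert-base-uncased")

sentence = "the movie was [MASK] and I loved every minute"
for candidate in fill(sentence):
    # Hypothetical next step: query the victim model with
    # candidate["sequence"] and keep perturbations that change its label.
    print(round(candidate["score"], 3), candidate["sequence"])
```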