FAWA: Fast Adversarial Watermark Attack on Optical Character Recognition (OCR) Systems
- URL: http://arxiv.org/abs/2012.08096v1
- Date: Tue, 15 Dec 2020 05:19:54 GMT
- Title: FAWA: Fast Adversarial Watermark Attack on Optical Character Recognition (OCR) Systems
- Authors: Lu Chen, Jiao Sun, Wei Xu
- Abstract summary: Adversarial examples generated by most existing adversarial attacks are unnatural and pollute the background severely.
We propose the Fast Adversarial Watermark Attack (FAWA) against sequence-based OCR models in a white-box manner.
By disguising the perturbations as watermarks, we can make the resulting adversarial images appear natural to human eyes and achieve a perfect attack success rate.
- Score: 16.730943103571068
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) significantly improved the accuracy of optical
character recognition (OCR) and inspired many important applications.
Unfortunately, OCRs also inherit the vulnerabilities of DNNs under adversarial
examples. Different from colorful vanilla images, text images usually have
clear backgrounds. Adversarial examples generated by most existing adversarial
attacks are unnatural and pollute the background severely. To address this
issue, we propose the Fast Adversarial Watermark Attack (FAWA) against
sequence-based OCR models in a white-box manner. By disguising the
perturbations as watermarks, we can make the resulting adversarial images
appear natural to human eyes and achieve a perfect attack success rate. FAWA
works with either gradient-based or optimization-based perturbation generation.
In both letter-level and word-level attacks, our experiments show that in
addition to natural appearance, FAWA achieves a 100% attack success rate with
60% less perturbation and 78% fewer iterations on average. In addition, we
further extend FAWA to support full-color watermarks, other languages, and even
the OCR accuracy-enhancing mechanism.
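For intuition, the gradient-based variant can be pictured as an iterative signed-gradient (FGSM/PGD-style) attack whose update is confined to a watermark-shaped mask and driven by a targeted CTC loss for the sequence-based OCR model. The following is a minimal illustrative sketch, not the authors' released code; ocr_model, watermark_mask, and the target transcription encoding are assumed placeholders.

    # Minimal sketch of a masked, gradient-based adversarial watermark attack
    # (illustrative only, not the authors' implementation). Assumptions:
    #   ocr_model(x) returns per-timestep log-probabilities of shape (T, 1, C)
    #   watermark_mask is a binary (1, 1, H, W) tensor marking the watermark region
    import torch
    import torch.nn.functional as F

    def masked_watermark_attack(ocr_model, image, target_ids, target_lens,
                                watermark_mask, eps=0.3, alpha=0.01, steps=100):
        """image: (1, 1, H, W) grayscale text image in [0, 1]; targeted CTC attack."""
        adv = image.clone().detach()
        for _ in range(steps):
            adv.requires_grad_(True)
            log_probs = ocr_model(adv)  # (T, 1, C) log-softmax outputs
            input_lens = torch.full((1,), log_probs.size(0), dtype=torch.long)
            loss = F.ctc_loss(log_probs, target_ids, input_lens, target_lens)
            grad, = torch.autograd.grad(loss, adv)
            with torch.no_grad():
                # Step toward the target transcription, but only inside the mask,
                # so the accumulated noise reads as a watermark.
                adv = adv - alpha * grad.sign() * watermark_mask
                # Bound the perturbation and keep pixel values valid.
                adv = image + torch.clamp(adv - image, -eps, eps)
                adv = adv.clamp(0.0, 1.0)
            adv = adv.detach()
        return adv

An optimization-based variant would instead minimize the targeted CTC loss plus a penalty on the masked perturbation with an optimizer such as Adam; in either case the mask is what confines the noise to the watermark region.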
Related papers
- Robustness of Watermarking on Text-to-Image Diffusion Models [9.277492743469235]
We investigate the robustness of generative watermarking, which integrates watermark embedding into the text-to-image generation process.
We find that generative watermarking methods are robust to direct evasion attacks, such as discriminator-based attacks or manipulation of edge information in edge-prediction-based attacks, but are vulnerable to malicious fine-tuning.
arXiv Detail & Related papers (2024-08-04T13:59:09Z)
- Certifiably Robust Image Watermark [57.546016845801134]
Generative AI raises many societal concerns such as boosting disinformation and propaganda campaigns.
Watermarking AI-generated content is a key technology to address these concerns.
We propose the first image watermarks with certified robustness guarantees against removal and forgery attacks.
arXiv Detail & Related papers (2024-07-04T17:56:04Z)
- Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks [47.04650443491879]
We analyze the robustness of various AI-image detectors including watermarking and deepfake detectors.
We show that watermarking methods are vulnerable to spoofing attacks where the attacker aims to have real images identified as watermarked ones.
arXiv Detail & Related papers (2023-09-29T18:30:29Z)
- Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks, enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- Robust Real-World Image Super-Resolution against Adversarial Attacks [115.04009271192211]
Adversarial image samples with quasi-imperceptible noise can threaten deep learning SR models.
We propose a robust deep learning framework for real-world SR that randomly erases potential adversarial noises.
Our proposed method is more insensitive to adversarial attacks and presents more stable SR results than existing models and defenses.
arXiv Detail & Related papers (2022-07-31T13:26:33Z)
- Preemptive Image Robustification for Protecting Users against Man-in-the-Middle Adversarial Attacks [16.017328736786922]
A Man-in-the-Middle adversary maliciously intercepts and perturbs images web users upload online.
This type of attack can raise severe ethical concerns on top of simple performance degradation.
We devise a novel bi-level optimization algorithm that finds points in the vicinity of natural images that are robust to adversarial perturbations.
arXiv Detail & Related papers (2021-12-10T16:06:03Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, called "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations.
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)
- Attacking Optical Character Recognition (OCR) Systems with Adversarial Watermarks [22.751944254451875]
We propose a watermark attack method that produces natural-looking distortion disguised as watermarks, evading detection by human eyes.
Experimental results show that watermark attacks can yield a set of natural adversarial examples with attached watermarks and attain attack performance similar to state-of-the-art methods in different attack scenarios.
arXiv Detail & Related papers (2020-02-08T05:53:21Z)
- Adversarial Attacks on Convolutional Neural Networks in Facial Recognition Domain [2.4704085162861693]
Adversarial attacks that render Deep Neural Network (DNN) classifiers vulnerable in real life represent a serious threat in autonomous vehicles, malware filters, or biometric authentication systems.
We apply the Fast Gradient Sign Method (a generic sketch follows this list) to introduce perturbations to a facial image dataset and then test the output on a different classifier.
We craft a variety of different black-box attack algorithms on a facial image dataset assuming minimal adversarial knowledge.
arXiv Detail & Related papers (2020-01-30T00:25:05Z)
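The last entry relies on the standard Fast Gradient Sign Method. For reference, a generic one-step, untargeted FGSM against an image classifier looks roughly as follows; this is an illustrative sketch, not that paper's code, and classifier, image, and label are placeholders.

    # Generic untargeted FGSM: one signed-gradient ascent step on the loss.
    import torch
    import torch.nn.functional as F

    def fgsm(classifier, image, label, eps=8 / 255):
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(classifier(image), label)
        loss.backward()
        # Move each pixel by eps in the direction that increases the loss.
        adv = image + eps * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()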
This list is automatically generated from the titles and abstracts of the papers in this site.