Attacking Optical Character Recognition (OCR) Systems with Adversarial
Watermarks
- URL: http://arxiv.org/abs/2002.03095v1
- Date: Sat, 8 Feb 2020 05:53:21 GMT
- Title: Attacking Optical Character Recognition (OCR) Systems with Adversarial
Watermarks
- Authors: Lu Chen and Wei Xu
- Abstract summary: We propose a watermark attack method that produces natural distortions disguised as watermarks, evading detection by human eyes.
Experimental results show that the watermark attack yields natural adversarial examples with attached watermarks and attains attack performance comparable to state-of-the-art methods in different attack scenarios.
- Score: 22.751944254451875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical character recognition (OCR) is widely used in real-world
applications as a key preprocessing tool. The adoption of deep neural networks
(DNNs) in OCR makes these systems vulnerable to adversarial examples, which
are crafted to mislead the output of the threat model. Unlike ordinary color
images, images of printed text usually have clean backgrounds. However,
adversarial examples generated by most existing attacks are unnatural and
severely pollute the background. To address this issue, we propose a watermark
attack that produces natural distortions disguised as watermarks, evading
detection by human eyes. Experimental results show that the watermark attack
yields natural adversarial examples with attached watermarks and attains
attack performance comparable to state-of-the-art methods in different attack
scenarios.
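The core idea of the abstract, confining an adversarial perturbation to a watermark-shaped region so the distortion looks like a benign stamp, can be illustrated with a short, hedged sketch. This is not the paper's method: a toy linear classifier stands in for the sequence-based OCR model, a plain rectangle stands in for a rendered watermark mask, and the iteration is a generic FGSM-style loop; all names and parameters are illustrative.

```python
# Minimal sketch of a watermark-constrained adversarial perturbation.
# NOT the paper's method: a toy model stands in for a sequence-based OCR
# model, and a rectangle stands in for a rendered watermark mask.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a differentiable OCR model (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 128, 10))
model.eval()

image = torch.rand(1, 1, 32, 128)   # clean text-line image in [0, 1]
label = torch.tensor([3])           # prediction the attacker wants to break

# Binary mask marking where the "watermark" may appear; the paper renders
# an actual watermark (e.g., a faint word) instead of a plain rectangle.
mask = torch.zeros_like(image)
mask[..., 8:24, 40:90] = 1.0

delta = torch.zeros_like(image, requires_grad=True)
alpha, eps, steps = 0.01, 0.3, 50   # step size and budget: illustrative

loss_fn = nn.CrossEntropyLoss()
for _ in range(steps):
    loss = loss_fn(model(image + delta * mask), label)
    loss.backward()                          # gradient w.r.t. the perturbation
    with torch.no_grad():
        delta += alpha * delta.grad.sign()   # untargeted FGSM-style ascent
        delta.clamp_(-eps, eps)              # keep the stamp faint
        delta.grad.zero_()

adv = (image + delta * mask).clamp(0, 1).detach()
print("before:", model(image).argmax(1).item(),
      "after:", model(adv).argmax(1).item())
```

In the paper's setting, the loss would presumably be a sequence-level loss against the OCR model's transcription, and the mask would come from rendering an actual watermark rather than a rectangle.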
Related papers
- Social Media Authentication and Combating Deepfakes using Semi-fragile Invisible Image Watermarking [6.246098300155482]
We propose a semi-fragile image watermarking technique that embeds an invisible secret message into real images for media authentication.
Our proposed framework is designed to be fragile to facial manipulations or tampering while being robust to benign image-processing operations and watermark removal attacks.
arXiv Detail & Related papers (2024-10-02T18:05:03Z)
- Robustness of Watermarking on Text-to-Image Diffusion Models [9.277492743469235]
We investigate the robustness of generative watermarking, which integrates watermark embedding into the text-to-image generation process.
We find that generative watermarking methods are robust to direct evasion attacks, such as discriminator-based attacks or manipulation of edge information in edge-prediction-based attacks, but are vulnerable to malicious fine-tuning.
arXiv Detail & Related papers (2024-08-04T13:59:09Z)
- Certifiably Robust Image Watermark [57.546016845801134]
Generative AI raises many societal concerns such as boosting disinformation and propaganda campaigns.
Watermarking AI-generated content is a key technology to address these concerns.
We propose the first image watermarks with certified robustness guarantees against removal and forgery attacks.
arXiv Detail & Related papers (2024-07-04T17:56:04Z)
- RAW: A Robust and Agile Plug-and-Play Watermark Framework for AI-Generated Images with Provable Guarantees [33.61946642460661]
This paper introduces a robust and agile watermark detection framework, dubbed RAW.
We employ a classifier that is jointly trained with the watermark to detect the presence of the watermark.
We show that the framework provides provable guarantees regarding the false positive rate for misclassifying a watermarked image.
arXiv Detail & Related papers (2024-01-23T22:00:49Z)
- Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks [47.04650443491879]
We analyze the robustness of various AI-image detectors including watermarking and deepfake detectors.
We show that watermarking methods are vulnerable to spoofing attacks where the attacker aims to have real images identified as watermarked ones.
arXiv Detail & Related papers (2023-09-29T18:30:29Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes [74.18502861399591]
Malicious application of deepfakes (i.e., technologies that can generate target faces or face attributes) poses a huge threat to our society.
We propose a universal adversarial attack method on deepfake models to generate a Cross-Model Universal Adversarial Watermark (CMUA-Watermark).
Experimental results demonstrate that the proposed CMUA-Watermark can effectively distort the fake facial images generated by deepfake models.
arXiv Detail & Related papers (2021-05-23T07:28:36Z)
- FAWA: Fast Adversarial Watermark Attack on Optical Character Recognition (OCR) Systems [16.730943103571068]
Adversarial examples generated by most existing adversarial attacks are unnatural and severely pollute the background.
We propose the Fast Adversarial Watermark Attack (FAWA) against sequence-based OCR models in a white-box manner.
By disguising the perturbations as watermarks, we can make the resulting adversarial images appear natural to human eyes and achieve a perfect attack success rate.
arXiv Detail & Related papers (2020-12-15T05:19:54Z)
- Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations; a minimal sketch of this idea follows the list below.
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)
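To make the transformation idea from the watermark-removal entry above concrete, here is a hedged sketch: it blends a faint random pattern into the input and applies a small random rotation built from an affine grid. The strengths, the choice of pattern, and the specific spatial transform are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of "imperceptible pattern embedding + spatial-level
# transformation" for watermark removal. Parameters and the specific
# transforms are illustrative assumptions, not the paper's algorithm.
import math
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def transform_input(image: torch.Tensor,
                    pattern_strength: float = 0.02,
                    max_rotation: float = 0.05) -> torch.Tensor:
    """image: (N, C, H, W) in [0, 1]; returns a transformed copy."""
    # 1) Imperceptible pattern embedding: blend in a faint random pattern.
    pattern = torch.rand_like(image)
    blended = (1 - pattern_strength) * image + pattern_strength * pattern

    # 2) Spatial-level transformation: a small random rotation built as an
    #    affine grid (scaling or shearing could be composed the same way).
    angle = (torch.rand(1).item() * 2 - 1) * max_rotation  # radians
    c, s = math.cos(angle), math.sin(angle)
    theta = torch.tensor([[c, -s, 0.0],
                          [s,  c, 0.0]], dtype=image.dtype)
    theta = theta.unsqueeze(0).repeat(image.size(0), 1, 1)
    grid = F.affine_grid(theta, list(image.shape), align_corners=False)
    return F.grid_sample(blended, grid, align_corners=False).clamp(0, 1)

# Applied to a watermark trigger input, such a transformation is meant to
# disturb the embedded pattern enough that ownership verification fails.
x = torch.rand(1, 3, 64, 64)
print(transform_input(x).shape)  # torch.Size([1, 3, 64, 64])
```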