Evaluating Adversarial Robustness on Document Image Classification
- URL: http://arxiv.org/abs/2304.12486v2
- Date: Mon, 1 May 2023 20:49:33 GMT
- Title: Evaluating Adversarial Robustness on Document Image Classification
- Authors: Timothée Fronteau, Arnaud Paran, and Aymen Shabou
- Abstract summary: We apply adversarial attacks to both document and natural data and defend models against such attacks.
We focus on untargeted gradient-based, transfer-based, and score-based attacks, and evaluate the impact of adversarial training.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial attacks and defenses have gained increasing interest in computer vision in recent years, but most investigations to date are limited to natural images. Many artificial intelligence models, however, handle document data, which differs markedly from real-world images. In this work, we therefore apply adversarial attacks to both document and natural data and defend models against such attacks. We focus on untargeted gradient-based, transfer-based, and score-based attacks, and evaluate the impact of adversarial training, JPEG input compression, and grey-scale input transformation on the robustness of the ResNet50 and EfficientNetB0 architectures. To the best of our knowledge, no prior work has studied the impact of these attacks on the document image classification task.
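The abstract names concrete moving parts: an untargeted gradient-based attack (FGSM is the canonical example) against a ResNet50 classifier, with JPEG compression and grey-scale conversion as input-transformation defenses. Below is a minimal sketch of that evaluation loop in PyTorch; the epsilon budget, JPEG quality, input file name, and use of ImageNet weights are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch (not the authors' exact setup): an untargeted FGSM attack
# on a ResNet50 classifier, plus JPEG-compression and grey-scale input
# transformations as defenses.
import io

import torch
import torch.nn.functional as F
import torchvision.transforms as T
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

# Fold ImageNet normalisation into the forward pass so the attack can
# operate directly in [0, 1] pixel space.
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
backbone = resnet50(weights=ResNet50_Weights.DEFAULT).eval()
model = lambda x: backbone(normalize(x))


def fgsm_attack(x: torch.Tensor, y: torch.Tensor, eps: float = 4 / 255) -> torch.Tensor:
    """Untargeted FGSM: one signed-gradient step that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()


def jpeg_defense(x: torch.Tensor, quality: int = 75) -> torch.Tensor:
    """Re-encode the input as JPEG to wash out high-frequency perturbations."""
    buf = io.BytesIO()
    T.ToPILImage()(x.squeeze(0)).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return T.ToTensor()(Image.open(buf)).unsqueeze(0)


def grayscale_defense(x: torch.Tensor) -> torch.Tensor:
    """Collapse to grey-scale, kept as 3 channels to match the model input."""
    return T.Grayscale(num_output_channels=3)(x)


# Usage: "document.png" is a placeholder input; any RGB image works.
x = T.ToTensor()(Image.open("document.png").convert("RGB").resize((224, 224))).unsqueeze(0)
y = model(x).argmax(dim=1)
x_adv = fgsm_attack(x, y)
print("clean:", y.item(),
      "adversarial:", model(x_adv).argmax(dim=1).item(),
      "JPEG-defended:", model(jpeg_defense(x_adv)).argmax(dim=1).item(),
      "grey-scale-defended:", model(grayscale_defense(x_adv)).argmax(dim=1).item())
```

Adversarial training, the third defense the abstract evaluates, would presumably wrap a call like fgsm_attack inside the training loop to generate perturbed batches on the fly.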
Related papers
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- AdvGen: Physical Adversarial Attack on Face Presentation Attack Detection Systems
Adversarial attacks, which try to digitally deceive the learning strategy of a recognition system, have gained traction.
This paper demonstrates the vulnerability of face authentication systems to adversarial images in physical world scenarios.
We propose AdvGen, an automated Generative Adversarial Network, to simulate print and replay attacks and generate adversarial images that can fool state-of-the-art PADs.
arXiv Detail & Related papers (2023-11-20T13:28:42Z)
- Adversarial Attacks on Image Classification Models: FGSM and Patch Attacks and their Impact
This chapter introduces the concept of adversarial attacks on image classification models built on convolutional neural networks (CNNs).
CNNs are very popular deep-learning models used in image classification tasks.
Two very well-known adversarial attacks are discussed and their impact on the performance of image classifiers is analyzed.
arXiv Detail & Related papers (2023-07-05T06:40:08Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Recent improvements of ASR models in the face of adversarial attacks
Speech recognition models are vulnerable to adversarial attacks.
We show that the relative strengths of different attack algorithms vary considerably when changing the model architecture.
We release our source code as a package to help future researchers evaluate their attacks and defenses.
arXiv Detail & Related papers (2022-03-29T22:40:37Z)
- Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains
Adversarial examples have posed a severe threat to deep neural networks due to their transferable nature.
We propose a Beyond ImageNet Attack (BIA) to investigate the transferability towards black-box domains.
Our methods outperform state-of-the-art approaches by up to 7.71% (towards coarse-grained domains) and 25.91% (towards fine-grained domains) on average.
arXiv Detail & Related papers (2022-01-27T14:04:27Z)
- Deep Bayesian Image Set Classification: A Defence Approach against Adversarial Attacks
Deep neural networks (DNNs) can be fooled with high confidence by an adversary.
In practice, the vulnerability of deep learning systems to carefully perturbed images, known as adversarial examples, poses a dire security threat in physical-world applications.
We propose a robust deep Bayesian image set classification as a defence framework against a broad range of adversarial attacks.
arXiv Detail & Related papers (2021-08-23T14:52:44Z)
- Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks
We present comprehensive investigations into the vulnerability of deep image-to-image models to adversarial attacks.
For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints.
We show that unlike in image classification tasks, the performance degradation on image-to-image tasks can largely differ depending on various factors.
arXiv Detail & Related papers (2021-04-30T14:20:33Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations
Adversarial attacks aim to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
- Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks
Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms.
We propose Dual Manifold Adversarial Training (DMAT) where adversarial perturbations in both latent and image spaces are used in robustifying the model.
Our DMAT improves performance on normal images and achieves robustness comparable to standard adversarial training against Lp attacks.
arXiv Detail & Related papers (2020-09-05T06:00:28Z)