Enhanced Security against Adversarial Examples Using a Random Ensemble
of Encrypted Vision Transformer Models
- URL: http://arxiv.org/abs/2307.13985v1
- Date: Wed, 26 Jul 2023 06:50:58 GMT
- Title: Enhanced Security against Adversarial Examples Using a Random Ensemble
of Encrypted Vision Transformer Models
- Authors: Ryota Iijima, Miki Tanaka, Sayaka Shiota, Hitoshi Kiya
- Abstract summary: The vision transformer (ViT) is more robust against adversarial transferability than convolutional neural network (CNN) models.
In this article, we propose a random ensemble of encrypted ViT models to achieve much more robust models.
- Score: 12.29209267739635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are well known to be vulnerable to adversarial
examples (AEs). In addition, AEs have adversarial transferability, which means
AEs generated for a source model can fool another black-box model (target
model) with a non-trivial probability. Previous studies confirmed that the
vision transformer (ViT) is more robust against adversarial transferability
than convolutional neural network (CNN) models such as ConvMixer, and moreover
that an encrypted ViT is more robust than a ViT without any encryption. In this
article, we propose a random ensemble of encrypted ViT models to achieve much
more robust models. In experiments, the proposed scheme is verified to be more
robust than conventional methods against not only black-box attacks but also
white-box ones.
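As a rough illustration of the defense described above, the sketch below assembles a random ensemble whose members each expect inputs transformed with their own secret key. The block-shuffling transform, the per-query random model selection, and all names are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch of random-ensemble inference over key-encrypted ViTs.
# Assumption: each model was fine-tuned on images transformed with its own
# secret key (here, a keyed block permutation), and the defense answers each
# query with one encrypted model chosen uniformly at random.
import random
import torch

def block_shuffle(x: torch.Tensor, key: torch.Generator, block: int = 16) -> torch.Tensor:
    """Permute the non-overlapping blocks of a batch (B, C, H, W) with a keyed RNG."""
    b, c, h, w = x.shape
    gh, gw = h // block, w // block
    blocks = x.reshape(b, c, gh, block, gw, block).permute(0, 1, 2, 4, 3, 5)
    blocks = blocks.reshape(b, c, gh * gw, block, block)
    perm = torch.randperm(gh * gw, generator=key)  # the secret key fixes this permutation
    blocks = blocks[:, :, perm.to(x.device)]
    blocks = blocks.reshape(b, c, gh, gw, block, block).permute(0, 1, 2, 4, 3, 5)
    return blocks.reshape(b, c, h, w)

class RandomEncryptedEnsemble:
    def __init__(self, models, keys):
        self.models = models  # ViTs, each fine-tuned under its own key
        self.keys = keys      # integer seeds standing in for secret keys

    @torch.no_grad()
    def predict(self, x: torch.Tensor) -> torch.Tensor:
        i = random.randrange(len(self.models))           # fresh choice per query
        g = torch.Generator().manual_seed(self.keys[i])  # key -> permutation
        return self.models[i](block_shuffle(x, g))
```

The intuition is that an attacker cannot tell which keyed model will answer a given query, so gradients or transfer priors built against one member degrade on the others.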
Related papers
- Downstream Transfer Attack: Adversarial Attacks on Downstream Models with Pre-trained Vision Transformers [95.22517830759193]
This paper studies how the adversarial vulnerability of a pre-trained ViT model transfers to downstream tasks.
We show that the proposed Downstream Transfer Attack (DTA) achieves an average attack success rate (ASR) exceeding 90%, surpassing existing methods by a large margin.
arXiv Detail & Related papers (2024-08-03T08:07:03Z)
- A Random Ensemble of Encrypted Vision Transformers for Adversarially Robust Defense [6.476298483207895]
Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs)
We propose a novel method using the vision transformer (ViT) that is a random ensemble of encrypted models for enhancing robustness against both white-box and black-box attacks.
In experiments, the method was demonstrated to be robust against not only white-box attacks but also black-box ones in an image classification task.
arXiv Detail & Related papers (2024-02-11T12:35:28Z)
- Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! [52.0855711767075]
EvoSeed is an evolutionary strategy-based algorithmic framework for generating photo-realistic natural adversarial samples.
We employ CMA-ES to optimize the search for an initial seed vector that, when processed by the Conditional Diffusion Model, yields a natural adversarial sample misclassified by the target model.
Experiments show that generated adversarial images are of high image quality, raising concerns about generating harmful content bypassing safety classifiers.
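A minimal sketch of that search loop is given below, assuming a hypothetical `diffusion_generate(seed, condition)` sampler and a `classifier(image)` that returns class probabilities; CMA-ES comes from the pycma package, and the fitness simply drives down the classifier's confidence in the conditioned class.

```python
# Hedged sketch of an EvoSeed-style seed search; `diffusion_generate` and
# `classifier` are assumed stand-ins, not APIs from the paper.
import cma
import numpy as np

def find_adversarial_seed(diffusion_generate, classifier, cond_class: int,
                          dim: int = 64, sigma0: float = 0.5, budget: int = 50):
    es = cma.CMAEvolutionStrategy(np.zeros(dim), sigma0)
    for _ in range(budget):
        seeds = es.ask()  # candidate seed vectors
        # lower fitness = lower confidence in the conditioned class
        fitness = [classifier(diffusion_generate(np.asarray(s), cond_class))[cond_class]
                   for s in seeds]
        es.tell(seeds, fitness)
        if es.stop():
            break
    return es.result.xbest  # seed whose generated image fools the classifier most
```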
arXiv Detail & Related papers (2024-02-07T09:39:29Z)
- A Random Ensemble of Encrypted models for Enhancing Robustness against Adversarial Examples [6.476298483207895]
The vision transformer (ViT) is more robust against adversarial transferability than convolutional neural network (CNN) models.
In this article, we propose a random ensemble of encrypted ViT models to achieve much more robust models.
arXiv Detail & Related papers (2024-01-05T04:43:14Z)
- On the Adversarial Transferability of ConvMixer Models [16.31814570942924]
We investigate the property of adversarial transferability between models including ConvMixer for the first time.
In an image classification experiment, ConvMixer is confirmed to be vulnerable to adversarial transferability.
arXiv Detail & Related papers (2022-09-19T02:51:01Z)
- On the Transferability of Adversarial Examples between Encrypted Models [20.03508926499504]
We investigate the transferability of models encrypted for adversarially robust defense for the first time.
In an image-classification experiment, the use of encrypted models is confirmed not only to be robust against AEs but also to reduce the influence of AEs.
arXiv Detail & Related papers (2022-09-07T08:50:26Z)
- Self-Ensembling Vision Transformer (SEViT) for Robust Medical Image Classification [4.843654097048771]
Vision Transformers (ViTs) are competing to replace Convolutional Neural Networks (CNNs) for various computer vision tasks in medical imaging.
Recent works have shown that ViTs are also susceptible to such attacks and suffer significant performance degradation under attack.
We propose a novel self-ensembling method to enhance the robustness of ViT in the presence of adversarial attacks.
arXiv Detail & Related papers (2022-08-04T19:02:24Z)
- Cross-Modal Transferable Adversarial Attacks from Images to Videos [82.0745476838865]
Recent studies have shown that adversarial examples crafted on one white-box model can be used to attack other black-box models.
We propose a simple yet effective cross-modal attack method, named as Image To Video (I2V) attack.
I2V generates adversarial frames by minimizing the cosine similarity between features of pre-trained image models from adversarial and benign examples.
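A minimal sketch of one I2V-style optimization is shown below, assuming a frozen pre-trained image feature extractor `feat` and pixel values in [0, 1]; the step size, budget, and PGD-style projection are illustrative choices, not the paper's exact settings.

```python
# Hedged sketch: push an adversarial frame away from the benign frame in the
# feature space of a pre-trained image model by minimizing cosine similarity.
import torch
import torch.nn.functional as F

def i2v_attack(feat, frame: torch.Tensor, eps: float = 8 / 255,
               alpha: float = 2 / 255, steps: int = 10) -> torch.Tensor:
    with torch.no_grad():
        benign = feat(frame).flatten(1)          # reference features
    adv = frame.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        sim = F.cosine_similarity(feat(adv).flatten(1), benign).mean()
        sim.backward()                           # gradient of the similarity
        with torch.no_grad():
            adv = adv - alpha * adv.grad.sign()  # descend on cosine similarity
            adv = frame + (adv - frame).clamp(-eps, eps)  # keep L-inf budget
            adv = adv.clamp(0, 1).detach()
    return adv
```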
arXiv Detail & Related papers (2021-12-10T08:19:03Z)
- On Improving Adversarial Transferability of Vision Transformers [97.17154635766578]
Vision transformers (ViTs) process input images as sequences of patches via self-attention.
We study the adversarial feature space of ViT models and their transferability.
We introduce two novel strategies specific to the architecture of ViT models.
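For background on the patch-sequence view mentioned above, here is a minimal, illustrative patch-embedding sketch (layer names and sizes are assumptions, not the paper's):

```python
# Hedged sketch of ViT patch embedding: an image becomes a token sequence.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, img_size: int = 224, patch: int = 16, dim: int = 768):
        super().__init__()
        # a strided convolution is the standard patchify-and-embed trick
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (B, 3, H, W) -> (B, dim, H/p, W/p) -> (B, num_patches, dim)
        return self.proj(x).flatten(2).transpose(1, 2)

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))  # shape (1, 196, 768)
```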
arXiv Detail & Related papers (2021-06-08T08:20:38Z)
- On the Adversarial Robustness of Visual Transformers [129.29523847765952]
This work provides the first and comprehensive study on the robustness of vision transformers (ViTs) against adversarial perturbations.
Tested on various white-box and transfer attack settings, we find that ViTs possess better adversarial robustness when compared with convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-03-29T14:48:24Z)
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)