A Random Ensemble of Encrypted models for Enhancing Robustness against
Adversarial Examples
- URL: http://arxiv.org/abs/2401.02633v1
- Date: Fri, 5 Jan 2024 04:43:14 GMT
- Title: A Random Ensemble of Encrypted models for Enhancing Robustness against
Adversarial Examples
- Authors: Ryota Iijima, Sayaka Shiota, Hitoshi Kiya
- Abstract summary: Vision transformer (ViT) is more robust against the property of adversarial transferability than convolutional neural network (CNN) models.
In this article, we propose a random ensemble of encrypted ViT models to achieve much more robust models.
- Score: 6.476298483207895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are well known to be vulnerable to adversarial
examples (AEs). In addition, AEs have adversarial transferability, which means
AEs generated for a source model can fool another black-box model (target
model) with a non-trivial probability. In previous studies, it was confirmed
that the vision transformer (ViT) is more robust against the property of
adversarial transferability than convolutional neural network (CNN) models such
as ConvMixer, and moreover encrypted ViT is more robust than ViT without any
encryption. In this article, we propose a random ensemble of encrypted ViT
models to achieve much more robust models. In experiments, the proposed scheme
is verified to be more robust against not only black-box attacks but also
white-box ones than conventional methods.
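The ensemble scheme in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `block_scramble`, the key handling, and the dummy models below are assumptions standing in for the paper's learnable encryption and its trained ViT models.

```python
import numpy as np

def block_scramble(img, key, block=4):
    # Split a (H, W) image into blocks and permute the pixels inside each block
    # with a key-derived permutation (a stand-in for learnable encryption).
    h, w = img.shape
    perm = np.random.default_rng(key).permutation(block * block)
    out = img.copy()
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = out[i:i + block, j:j + block].reshape(-1)
            out[i:i + block, j:j + block] = patch[perm].reshape(block, block)
    return out

class RandomEncryptedEnsemble:
    def __init__(self, models, keys):
        # Each model is assumed to have been trained on images scrambled
        # with its own secret key.
        self.models = models
        self.keys = keys

    def predict(self, img):
        # Randomly pick one (model, key) pair per query; since the keys are
        # secret, an attacker cannot reproduce the exact transform the
        # selected model expects, which weakens both white-box and
        # black-box attacks.
        idx = np.random.randint(len(self.models))
        return self.models[idx](block_scramble(img, self.keys[idx]))
```

The random per-query model selection is the point of the scheme: gradients computed against any one fixed transform do not match the transform actually applied at inference time.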
Related papers
- Downstream Transfer Attack: Adversarial Attacks on Downstream Models with Pre-trained Vision Transformers [95.22517830759193]
This paper studies the transferability of such an adversarial vulnerability from a pre-trained ViT model to downstream tasks.
We show that DTA achieves an average attack success rate (ASR) exceeding 90%, surpassing existing methods by a huge margin.
arXiv Detail & Related papers (2024-08-03T08:07:03Z) - A Random Ensemble of Encrypted Vision Transformers for Adversarially
Robust Defense [6.476298483207895]
Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs)
We propose a novel method using the vision transformer (ViT) that is a random ensemble of encrypted models for enhancing robustness against both white-box and black-box attacks.
In experiments, the method was demonstrated to be robust against not only white-box attacks but also black-box ones in an image classification task.
arXiv Detail & Related papers (2024-02-11T12:35:28Z) - Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! [52.0855711767075]
EvoSeed is an evolutionary strategy-based algorithmic framework for generating photo-realistic natural adversarial samples.
We employ CMA-ES to optimize the search for an initial seed vector, which, when processed by the Conditional Diffusion Model, results in the natural adversarial sample misclassified by the Model.
Experiments show that generated adversarial images are of high image quality, raising concerns about generating harmful content bypassing safety classifiers.
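The seed-search loop described above can be illustrated with a much simpler evolutionary strategy. A hedged sketch: a (1+λ) ES stands in for CMA-ES, and the caller-supplied `fitness` function stands in for the misclassification loss measured after the Conditional Diffusion Model; none of these names come from the paper's code.

```python
import numpy as np

def evolve_seed(fitness, dim=8, pop=16, sigma=0.5, iters=50, seed=0):
    # Simplified (1+lambda) evolutionary strategy standing in for CMA-ES:
    # mutate the current seed vector with Gaussian noise and keep the best
    # candidate each generation (elitist selection).
    rng = np.random.default_rng(seed)
    best = rng.standard_normal(dim)
    best_f = fitness(best)
    for _ in range(iters):
        cand = best + sigma * rng.standard_normal((pop, dim))
        f = np.array([fitness(c) for c in cand])
        i = int(np.argmin(f))
        if f[i] < best_f:
            best, best_f = cand[i], f[i]
    return best, best_f
```

In the paper's pipeline the fitness of a seed would be the safety/classification loss of the image the diffusion model generates from it; here any black-box scalar objective can be plugged in.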
arXiv Detail & Related papers (2024-02-07T09:39:29Z) - Enhanced Security against Adversarial Examples Using a Random Ensemble
of Encrypted Vision Transformer Models [12.29209267739635]
Vision transformer (ViT) is more robust against the property of adversarial transferability than convolutional neural network (CNN) models.
In this article, we propose a random ensemble of encrypted ViT models to achieve much more robust models.
arXiv Detail & Related papers (2023-07-26T06:50:58Z) - Thermal Heating in ReRAM Crossbar Arrays: Challenges and Solutions [0.5672132510411465]
This paper studied the robustness of emerging models such as SpinalNet-based neural networks and Compact Convolutional Transformers (CCT) on the CIFAR-10 image classification problem.
It was shown that the high effectiveness of an attack on a certain individual model does not guarantee its transferability to other models.
arXiv Detail & Related papers (2022-12-28T05:47:19Z) - On the Adversarial Transferability of ConvMixer Models [16.31814570942924]
We investigate the property of adversarial transferability between models including ConvMixer for the first time.
In an image classification experiment, ConvMixer is confirmed to be vulnerable to adversarial transferability.
arXiv Detail & Related papers (2022-09-19T02:51:01Z) - Robust Transferable Feature Extractors: Learning to Defend Pre-Trained
Networks Against White Box Adversaries [69.53730499849023]
We show that adversarial examples can be successfully transferred to another independently trained model to induce prediction errors.
We propose a deep learning-based pre-processing mechanism, which we refer to as a robust transferable feature extractor (RTFE)
arXiv Detail & Related papers (2022-09-14T21:09:34Z) - On the Transferability of Adversarial Examples between Encrypted Models [20.03508926499504]
We investigate the transferability of models encrypted for adversarially robust defense for the first time.
In an image-classification experiment, the use of encrypted models is confirmed not only to be robust against AEs but also to reduce the influence of AEs.
arXiv Detail & Related papers (2022-09-07T08:50:26Z) - Cross-Modal Transferable Adversarial Attacks from Images to Videos [82.0745476838865]
Recent studies have shown that adversarial examples hand-crafted on one white-box model can be used to attack other black-box models.
We propose a simple yet effective cross-modal attack method, named as Image To Video (I2V) attack.
I2V generates adversarial frames by minimizing the cosine similarity between features of pre-trained image models from adversarial and benign examples.
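The cosine-similarity objective above can be illustrated with a toy gradient sketch. Assumptions not in the abstract: a fixed linear map `W` stands in for the pre-trained image model's feature extractor, and a PGD-style loop with an L-infinity budget replaces the paper's actual optimization.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def i2v_style_perturb(x, W, feat_ref, eps=0.1, lr=0.05, steps=40, seed=0):
    # Toy I2V-style attack: push the feature of the perturbed input, W @ x_adv,
    # away from the benign reference feature by minimizing their cosine
    # similarity, with a random start and projection onto the L-inf ball.
    rng = np.random.default_rng(seed)
    b = feat_ref / np.linalg.norm(feat_ref)
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start
    for _ in range(steps):
        a = W @ x_adv
        na = np.linalg.norm(a)
        cos = (a @ b) / na
        # d cos / d x_adv for the linear feature f(x) = W x
        # (chain rule through the normalization by ||a||)
        grad = W.T @ (b / na - cos * a / na ** 2)
        x_adv = np.clip(x_adv - lr * grad, x - eps, x + eps)
    return x_adv
```

Because the objective depends only on the feature extractor, not on any task head, the same perturbation can then be applied frame-by-frame to attack black-box video models, which is the cross-modal idea of I2V.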
arXiv Detail & Related papers (2021-12-10T08:19:03Z) - On Improving Adversarial Transferability of Vision Transformers [97.17154635766578]
Vision transformers (ViTs) process input images as sequences of patches via self-attention.
We study the adversarial feature space of ViT models and their transferability.
We introduce two novel strategies specific to the architecture of ViT models.
arXiv Detail & Related papers (2021-06-08T08:20:38Z) - On the Adversarial Robustness of Visual Transformers [129.29523847765952]
This work provides the first and comprehensive study on the robustness of vision transformers (ViTs) against adversarial perturbations.
Tested on various white-box and transfer attack settings, we find that ViTs possess better adversarial robustness when compared with convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-03-29T14:48:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.