Adversarial Attacks for Multi-view Deep Models
- URL: http://arxiv.org/abs/2006.11004v1
- Date: Fri, 19 Jun 2020 08:07:09 GMT
- Title: Adversarial Attacks for Multi-view Deep Models
- Authors: Xuli Sun, Shiliang Sun
- Abstract summary: This paper proposes two multi-view attack strategies: the two-stage attack (TSA) and the end-to-end attack (ETEA).
The main idea of TSA is to attack the multi-view model with adversarial examples generated by attacking the associated single-view model.
The ETEA strategy performs direct attacks on the target multi-view model.
- Score: 39.07356013772198
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has highlighted the vulnerability of many deep machine learning
models to adversarial examples. This has drawn increasing attention to adversarial
attacks, which can be used to evaluate the security and robustness of models
before they are deployed. However, to the best of our knowledge, there has been no
specific research on adversarial attacks against multi-view deep models. This paper
proposes two multi-view attack strategies, two-stage attack (TSA) and
end-to-end attack (ETEA). With the mild assumption that the single-view model
on which the target multi-view model is based is known, we first propose the
TSA strategy. The main idea of TSA is to attack the multi-view model with
adversarial examples generated by attacking the associated single-view model,
by which state-of-the-art single-view attack methods are directly extended to
the multi-view scenario. We then propose the ETEA strategy for the case where the
multi-view model itself is publicly available. ETEA performs direct
attacks on the target multi-view model, for which we develop three effective
multi-view attack methods. Finally, based on the fact that adversarial examples
generalize well among different models, this paper takes the adversarial attack
on the multi-view convolutional neural network as an example to validate
the effectiveness of the proposed multi-view attacks. Extensive experimental
results demonstrate that our multi-view attack strategies are capable of
attacking the multi-view deep models, and we additionally find that multi-view
models are more robust than single-view models.
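
The abstract describes TSA and ETEA only at a high level. As a rough illustration of the TSA idea (craft adversarial examples against the known single-view model, then transfer them to the target multi-view model), here is a minimal PyTorch-style sketch; the FGSM step, the `single_view_model`/`multi_view_model` interfaces, and the per-view perturbation loop are assumptions for illustration, not the paper's exact methods.

```python
# Minimal sketch of the two-stage attack (TSA) idea under assumed PyTorch
# models; FGSM stands in for any single-view attack method.
import torch
import torch.nn.functional as F

def fgsm_single_view(single_view_model, x, y, eps=0.03):
    """Craft an adversarial example against the known single-view model."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(single_view_model(x_adv), y)
    loss.backward()
    # One-step perturbation along the sign of the loss gradient.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def tsa_attack(single_view_model, multi_view_model, views, y, eps=0.03):
    """Stage 1: perturb each view via the single-view model.
    Stage 2: feed the perturbed views to the target multi-view model."""
    adv_views = [fgsm_single_view(single_view_model, v, y, eps) for v in views]
    with torch.no_grad():
        pred = multi_view_model(adv_views).argmax(dim=1)
    return adv_views, pred  # the attack succeeds where pred != y
```

For the ETEA setting, where the multi-view model itself is known, the analogous gradient step would instead be taken through the multi-view model's loss with respect to all views jointly; the paper develops three concrete attack methods for that case.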
Related papers
- MAA: Meticulous Adversarial Attack against Vision-Language Pre-trained Models [30.04163729936878]
Meticulous Adversarial Attack (MAA) fully exploits model-independent characteristics and vulnerabilities of individual samples.
MAA emphasizes fine-grained optimization of adversarial images by developing a novel resizing and sliding crop (RScrop) technique.
arXiv Detail & Related papers (2025-02-12T02:53:27Z)
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models [74.58014281829946]
We analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection on public models.
Our evaluation empirically shows that the performance of these attacks/defenses can vary significantly on public models compared to self-trained models.
arXiv Detail & Related papers (2023-10-19T11:49:22Z)
- Attack-SAM: Towards Attacking Segment Anything Model With Adversarial Examples [68.5719552703438]
Segment Anything Model (SAM) has attracted significant attention recently, due to its impressive performance on various downstream tasks.
Deep vision models are widely recognized as vulnerable to adversarial examples, which fool the model into making wrong predictions with imperceptible perturbations.
This work is the first of its kind to conduct a comprehensive investigation on how to attack SAM with adversarial examples.
arXiv Detail & Related papers (2023-05-01T15:08:17Z)
- MultiRobustBench: Benchmarking Robustness Against Multiple Attacks [86.70417016955459]
We present the first unified framework for considering multiple attacks against machine learning (ML) models.
Our framework is able to model different levels of the learner's knowledge about the test-time adversary.
We evaluate the performance of 16 defended models for robustness against a set of 9 different attack types.
arXiv Detail & Related papers (2023-02-21T20:26:39Z)
- PINCH: An Adversarial Extraction Attack Framework for Deep Learning Models [3.884583419548512]
Deep Learning (DL) models increasingly power a diversity of applications.
This paper presents PINCH: an efficient and automated extraction attack framework capable of deploying and evaluating multiple DL models and attacks across heterogeneous hardware platforms.
arXiv Detail & Related papers (2022-09-13T21:08:13Z)
- Towards Adversarial Attack on Vision-Language Pre-training Models [15.882687207499373]
This paper studies adversarial attacks on popular vision-language (V+L) models and V+L tasks.
By examining the influence of different objects and attack targets, we draw key observations that serve as guidance for designing strong multimodal adversarial attacks.
arXiv Detail & Related papers (2022-06-19T12:55:45Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacks aim to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
- Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to harden the robustness of models against adversarial attacks.
We propose a novel method that adds extra noise and uses an inconsistency strategy to detect adversarial examples.
arXiv Detail & Related papers (2020-09-06T13:57:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.