Evaluating Transfer-based Targeted Adversarial Perturbations against
Real-World Computer Vision Systems based on Human Judgments
- URL: http://arxiv.org/abs/2206.01467v1
- Date: Fri, 3 Jun 2022 09:17:22 GMT
- Title: Evaluating Transfer-based Targeted Adversarial Perturbations against
Real-World Computer Vision Systems based on Human Judgments
- Authors: Zhengyu Zhao and Nga Dang and Martha Larson
- Abstract summary: Computer vision systems are remarkably vulnerable to adversarial perturbations.
In this paper, we take the first step to investigate transfer-based targeted adversarial images in a realistic scenario.
Our main contributions include an extensive human-judgment-based evaluation of attack success on the Google Cloud Vision API.
- Score: 2.600494734548762
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Computer vision systems are remarkably vulnerable to adversarial
perturbations. Transfer-based adversarial images are generated on one (source)
system and used to attack another (target) system. In this paper, we take the
first step to investigate transfer-based targeted adversarial images in a
realistic scenario where the target system is trained on some private data with
its inventory of semantic labels not publicly available. Our main contributions
include an extensive human-judgment-based evaluation of attack success on the
Google Cloud Vision API and additional analysis of the different behaviors of
Google Cloud Vision in the face of original images vs. adversarial images.
Resources are publicly available at
\url{https://github.com/ZhengyuZhao/Targeted-Tansfer/blob/main/google_results.zip}.
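For readers who want a concrete picture of the attack-and-query pipeline that this evaluation targets, a hedged sketch is given below: it crafts a targeted adversarial image on a local surrogate (source) model and submits it to Google Cloud Vision label detection. The surrogate (ResNet-50), the attack (targeted iterative FGSM), the target class, and the file names are illustrative assumptions rather than the paper's exact configuration; in the paper, whether the returned labels match the target semantics is decided by human judges.

```python
# Hedged sketch of a transfer-based targeted attack pipeline: craft an
# adversarial image on a local surrogate model, then query the real-world
# target system (Google Cloud Vision label detection) with it.
# Assumptions: torchvision, the google-cloud-vision client library, valid
# GCP credentials, and an input file "example.jpg"; the attack and target
# class are illustrative, not the paper's exact setup.
import io

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from google.cloud import vision

device = "cuda" if torch.cuda.is_available() else "cpu"
source_model = models.resnet50(weights="IMAGENET1K_V1").eval().to(device)
MEAN = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)


def targeted_ifgsm(x, target_class, eps=16 / 255, alpha=2 / 255, steps=20):
    """Push x toward target_class on the surrogate model within an L_inf ball."""
    target = torch.tensor([target_class], device=device)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = source_model((x_adv - MEAN) / STD)
        loss = torch.nn.functional.cross_entropy(logits, target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step *down* the loss so the prediction moves toward the target label.
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv


# Craft the adversarial image on the surrogate model.
img = Image.open("example.jpg").convert("RGB")
x = T.Compose([T.Resize((224, 224)), T.ToTensor()])(img).unsqueeze(0).to(device)
x_adv = targeted_ifgsm(x, target_class=281)  # 281 = ImageNet "tabby cat" (illustrative)
T.ToPILImage()(x_adv.squeeze(0).cpu()).save("example_adv.png")

# Submit the adversarial image to Google Cloud Vision and list the labels it
# returns; in the paper, whether these labels match the target semantics is
# judged by humans rather than matched automatically.
client = vision.ImageAnnotatorClient()
with io.open("example_adv.png", "rb") as f:
    content = f.read()
response = client.label_detection(image=vision.Image(content=content))
for label in response.label_annotations:
    print(label.description, label.score)
```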
Related papers
- Replace-then-Perturb: Targeted Adversarial Attacks With Visual Reasoning for Vision-Language Models [6.649753747542211]
We propose a novel adversarial attack procedure, namely, Replace-then-Perturb and Contrastive-Adv.
In Replace-then-Perturb, we first leverage a text-guided segmentation model to find the target object in the image.
By doing this, we can generate a target image corresponding to the desired prompt, while maintaining the overall integrity of the original image.
arXiv Detail & Related papers (2024-11-01T04:50:08Z)
- AdvGen: Physical Adversarial Attack on Face Presentation Attack Detection Systems [17.03646903905082]
Adversarial attacks, which attempt to digitally deceive the learning strategy of a recognition system, have gained attention.
This paper demonstrates the vulnerability of face authentication systems to adversarial images in physical world scenarios.
We propose AdvGen, an automated Generative Adversarial Network, to simulate print and replay attacks and generate adversarial images that can fool state-of-the-art PADs.
arXiv Detail & Related papers (2023-11-20T13:28:42Z)
- Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict is exposed for software engineers between developing better AI systems and keeping a distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm in which images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm can ensure the encrypted images have become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [57.46379460600939]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Deep Bayesian Image Set Classification: A Defence Approach against Adversarial Attacks [32.48820298978333]
Deep neural networks (DNNs) are susceptible to being fooled with high confidence by an adversary.
In practice, the vulnerability of deep learning systems to carefully perturbed images, known as adversarial examples, poses a dire security threat in physical-world applications.
We propose a robust deep Bayesian image set classification framework as a defence against a broad range of adversarial attacks.
arXiv Detail & Related papers (2021-08-23T14:52:44Z)
- Simple Transparent Adversarial Examples [65.65977217108659]
We introduce secret embedding and transparent adversarial examples as a simpler way to evaluate robustness.
As a result, such examples pose a serious threat where APIs are used for high-stakes applications.
arXiv Detail & Related papers (2021-05-20T11:54:26Z)
- QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval [56.51916317628536]
We study the query-based attack against image retrieval to evaluate its robustness against adversarial examples under the black-box setting.
A new relevance-based loss is designed to quantify the attack effects by measuring the set similarity on the top-k retrieval results before and after attacks.
Experiments show that the proposed attack achieves a high attack success rate with few queries against the image retrieval systems under the black-box setting.
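As a rough illustration of the set-similarity idea behind such a relevance-based objective (our own sketch, not QAIR's exact loss; the function name and example IDs are made up):

```python
# Illustrative sketch: score an attack by how much of the clean top-k
# retrieval set survives in the adversarial top-k (smaller overlap = stronger attack).
def topk_overlap(ids_before, ids_after, k=10):
    """Fraction of the clean top-k results still present after the attack."""
    return len(set(ids_before[:k]) & set(ids_after[:k])) / k


# Example: only one of the three clean neighbours survives, so overlap is ~0.33.
print(topk_overlap(["img_3", "img_7", "img_1"], ["img_1", "img_9", "img_4"], k=3))
```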
arXiv Detail & Related papers (2021-03-04T10:18:43Z)
- Defense-friendly Images in Adversarial Attacks: Dataset and Metrics for Perturbation Difficulty [28.79528737626505]
Dataset bias is a problem in adversarial machine learning, especially in the evaluation of defenses.
In this paper, we report, for the first time, a class of robust images that are both resilient to attacks and recover better than random images under adversarial attacks.
We propose three metrics to determine the proportion of robust images in a dataset and provide scoring to determine the dataset bias.
arXiv Detail & Related papers (2020-11-05T06:21:24Z)
- Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations [83.60161052867534]
We analyze adversarial examples by disentangling the clean images and adversarial perturbations, and analyze their influence on each other.
Our results suggest a new perspective towards the relationship between images and universal perturbations.
We are the first to achieve the challenging task of a targeted universal attack without utilizing original training data.
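For intuition, here is a bare-bones sketch of a targeted universal perturbation, i.e. a single delta optimized to push every image in a batch toward one target class on a surrogate model; unlike the paper's data-free method, this toy version assumes some images are available, and the model, class index, and budget are illustrative choices:

```python
# Toy sketch of a targeted *universal* perturbation: one shared delta pushes
# every image in a batch toward a single target class on a surrogate model.
# Not the paper's data-free method; random tensors stand in for real images.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
images = torch.rand(8, 3, 224, 224)   # stand-in batch; use real images in practice
target = torch.full((8,), 281)        # hypothetical target class (ImageNet "tabby cat")
eps = 10 / 255                        # L_inf budget shared by all images

delta = torch.zeros(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
for _ in range(200):
    logits = model((images + delta).clamp(0, 1))
    loss = torch.nn.functional.cross_entropy(logits, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)       # keep the shared perturbation within budget
```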
arXiv Detail & Related papers (2020-07-13T05:00:09Z)
- Self-Supervised Viewpoint Learning From Image Collections [116.56304441362994]
We propose a novel learning framework which incorporates an analysis-by-synthesis paradigm to reconstruct images in a viewpoint-aware manner.
We show that our approach performs competitively with fully-supervised approaches for several object categories such as human faces, cars, buses, and trains.
arXiv Detail & Related papers (2020-04-03T22:01:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.