Simple Transparent Adversarial Examples
- URL: http://arxiv.org/abs/2105.09685v1
- Date: Thu, 20 May 2021 11:54:26 GMT
- Title: Simple Transparent Adversarial Examples
- Authors: Jaydeep Borkar, Pin-Yu Chen
- Abstract summary: We introduce secret embedding and transparent adversarial examples as a simpler way to evaluate robustness.
As a result, they pose a serious threat where APIs are used for high-stakes applications.
- Score: 65.65977217108659
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been a rise in the use of Machine Learning as a Service (MLaaS)
Vision APIs as they offer multiple services including pre-built models and
algorithms, which otherwise take a huge amount of resources if built from
scratch. As these APIs get deployed for high-stakes applications, it's very
important that they are robust to different manipulations. Recent works have
only focused on typical adversarial attacks when evaluating the robustness of
vision APIs. We propose two new aspects of adversarial image generation methods
and evaluate them on the robustness of Google Cloud Vision API's optical
character recognition service and object detection APIs deployed in real-world
settings such as sightengine.com, picpurify.com, Google Cloud Vision API, and
Microsoft Azure's Computer Vision API. Specifically, we go beyond the
conventional small-noise adversarial attacks and introduce secret embedding and
transparent adversarial examples as a simpler way to evaluate robustness. These
methods are so straightforward that even non-specialists can craft such
attacks. As a result, they pose a serious threat where APIs are used for
high-stakes applications. Our transparent adversarial examples successfully
evade state-of-the-art object detection APIs such as Azure Computer Vision
(attack success rate 52%) and Google Cloud Vision (attack success rate 36%).
In 90% of the images, the embedded secret text goes unnoticed by time-limited
human observers but is detected by Google Cloud Vision API's optical character
recognition. Complementing current research, our results provide simple but
unconventional methods for robustness evaluation.
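The core of a transparent adversarial example, as described in the abstract, is an image whose content is nearly invisible to a quick human glance yet fully present in the pixel data an API receives. A minimal sketch of that idea follows; the function name `make_transparent` and the 0.1 scaling factor are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of the "transparent adversarial example" idea: scale down the
# alpha channel of an RGBA image so it appears blank to a casual viewer while
# the colour information submitted to a vision API is unchanged.
# Image is represented as a flat list of (R, G, B, A) tuples for simplicity;
# a real attack would operate on an actual image file.

def make_transparent(pixels, alpha_factor=0.1):
    """Return a copy of RGBA pixels with every alpha value scaled by alpha_factor."""
    if not 0.0 <= alpha_factor <= 1.0:
        raise ValueError("alpha_factor must be in [0, 1]")
    # Only the alpha channel is touched; R, G, B survive intact, which is
    # why an API that ignores (or flattens) transparency can still "see" it.
    return [(r, g, b, int(a * alpha_factor)) for r, g, b, a in pixels]

# Example: a fully opaque red pixel becomes nearly invisible,
# but its colour information is preserved.
opaque = [(255, 0, 0, 255)]
faint = make_transparent(opaque, alpha_factor=0.1)
```

The key property, consistent with the paper's framing, is that no specialised optimisation is needed: a single alpha rescaling is enough, which is what makes the attack accessible to non-specialists.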
Related papers
- Few-Shot API Attack Anomaly Detection in a Classification-by-Retrieval Framework [9.693391036125908]
API security needs to be more sophisticated and dynamic than ever.
We propose a novel few-shot anomaly detection framework, named FT-ANN.
Our framework enables the development of a lightweight model that can be trained with minimal examples.
arXiv Detail & Related papers (2024-05-18T10:15:31Z) - BruSLeAttack: A Query-Efficient Score-Based Black-Box Sparse Adversarial Attack [22.408968332454062]
We study the unique, less well-understood problem of generating sparse adversarial samples simply by observing the score-based replies to model queries.
We develop the BruSLeAttack-a new, faster (more query-efficient) algorithm for the problem.
Our work facilitates faster evaluation of model vulnerabilities and raises our vigilance on the safety, security and reliability of deployed systems.
arXiv Detail & Related papers (2024-04-08T08:59:26Z) - REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust
Encoder as a Service [67.0982378001551]
We show how a service provider pre-trains an encoder and then deploys it as a cloud service API.
A client queries the cloud service API to obtain feature vectors for its training/testing inputs.
We show that the cloud service only needs to provide two APIs to enable a client to certify the robustness of its downstream classifier.
arXiv Detail & Related papers (2023-01-07T17:40:11Z) - Vision Transformer with Super Token Sampling [93.70963123497327]
Vision transformer has achieved impressive performance for many vision tasks.
It may suffer from high redundancy in capturing local features for shallow layers.
Super tokens attempt to provide a semantically meaningful tessellation of visual content.
arXiv Detail & Related papers (2022-11-21T03:48:13Z) - Evaluating Transfer-based Targeted Adversarial Perturbations against
Real-World Computer Vision Systems based on Human Judgments [2.600494734548762]
Computer vision systems are remarkably vulnerable to adversarial perturbations.
In this paper, we take the first step to investigate transfer-based targeted adversarial images in a realistic scenario.
Our main contributions include an extensive human-judgment-based evaluation of attack success on the Google Cloud Vision API.
arXiv Detail & Related papers (2022-06-03T09:17:22Z) - Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating between genuine and adversarial samples.
Our codes will be made open-source for future works to do comparison.
arXiv Detail & Related papers (2021-07-01T08:58:16Z) - Detection of Adversarial Supports in Few-shot Classifiers Using Feature
Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature preserving autoencoder filtering and also the concept of self-similarity of a support set to perform this detection.
Our method is attack-agnostic and also the first to explore detection for few-shot classifiers to the best of our knowledge.
arXiv Detail & Related papers (2020-12-09T14:13:41Z) - Improving Query Efficiency of Black-box Adversarial Attack [75.71530208862319]
We propose a Neural Process based black-box adversarial attack (NP-Attack).
NP-Attack could greatly decrease the query counts under the black-box setting.
arXiv Detail & Related papers (2020-09-24T06:22:56Z) - Interpreting Cloud Computer Vision Pain-Points: A Mining Study of Stack
Overflow [5.975695375814528]
This study investigates developers' frustrations with computer vision services.
We find that, unlike in mature fields such as mobile development, there is a contrast in the types of questions asked by developers.
These indicate a shallow understanding of the technology that empowers such systems.
arXiv Detail & Related papers (2020-01-28T00:56:51Z) - Transferability of Adversarial Examples to Attack Cloud-based Image
Classifier Service [0.6526824510982799]
This paper focuses on studying the security of real-world cloud-based image classification services.
We propose a novel attack method, the Fast Featuremap Loss PGD (FFL-PGD) attack, based on a substitution model.
We demonstrate that FFL-PGD attack has a success rate over 90% among different classification services.
arXiv Detail & Related papers (2020-01-08T23:03:35Z)