REVAMP: Automated Simulations of Adversarial Attacks on Arbitrary
Objects in Realistic Scenes
- URL: http://arxiv.org/abs/2310.12243v1
- Date: Wed, 18 Oct 2023 18:28:44 GMT
- Authors: Matthew Hull, Zijie J. Wang, and Duen Horng Chau
- Abstract summary: Deep learning models are vulnerable to adversarial attacks in which an attacker places an adversarial object in the environment, leading to misclassification.
We introduce REVAMP, an easy-to-use Python library and a first-of-its-kind tool for creating attack scenarios with arbitrary objects.
We will demonstrate REVAMP and invite the audience to try it, producing an adversarial texture on a chosen object while controlling various scene parameters.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models, such as those used in autonomous vehicles, are
vulnerable to adversarial attacks in which an attacker places an adversarial
object in the environment, leading to misclassification. Generating these
adversarial objects in the digital space has been studied extensively; however,
successfully transferring these attacks from the digital realm to the physical
realm has proven challenging when controlling for real-world environmental
factors. In response to these limitations, we introduce REVAMP, an easy-to-use
Python library and a first-of-its-kind tool for creating attack scenarios with
arbitrary objects and simulating realistic environmental factors such as
lighting, reflection, and refraction. REVAMP enables researchers and
practitioners to swiftly explore various scenarios within the digital realm by
offering a wide range of configurable options for designing experiments and by
using differentiable rendering to reproduce physically plausible adversarial
objects. We will demonstrate REVAMP and invite the audience to try it,
producing an adversarial texture on a chosen object while controlling various
scene parameters. The audience chooses a scene, an object to attack, the
desired attack class, and the number of camera positions to use. Then, in real
time, we show how the altered texture causes the chosen object to be
misclassified, showcasing the potential of REVAMP in real-world scenarios.
REVAMP is open-source and available at https://github.com/poloclub/revamp.
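The abstract does not document REVAMP's API, so the sketch below only illustrates the core technique the library automates: optimizing an object texture through a differentiable renderer so that rendered views are misclassified as a chosen target class. The `render` stub, the stand-in `victim` classifier, and all names and hyperparameters are illustrative assumptions, not REVAMP's actual interface.

```python
# Minimal, hypothetical sketch of a differentiable-rendering texture attack.
import torch
import torch.nn.functional as F

def render(texture, camera_pose):
    # Toy stand-in for a differentiable renderer: a fixed random linear
    # projection of the texture per camera pose. A real pipeline would
    # render the textured 3D object under scene lighting instead.
    g = torch.Generator().manual_seed(camera_pose)
    proj = torch.randn(3 * 8 * 8, texture.numel(), generator=g)
    return (proj @ texture.flatten()).reshape(1, 3, 8, 8)

# Stand-in victim model; a real attack would target a pretrained classifier.
victim = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))
target_class = torch.tensor([3])      # desired (wrong) label
texture = torch.rand(3, 16, 16, requires_grad=True)
optimizer = torch.optim.Adam([texture], lr=0.05)
num_poses = 4                         # "number of camera positions"

for step in range(200):
    optimizer.zero_grad()
    # Average the targeted loss over several camera poses so the texture
    # stays adversarial from multiple viewpoints.
    loss = sum(
        F.cross_entropy(victim(render(texture, pose)), target_class)
        for pose in range(num_poses)
    ) / num_poses
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        texture.clamp_(0.0, 1.0)      # keep texel values physically valid
```

In REVAMP itself, a physically based differentiable renderer takes the place of the `render` stub; differentiating through the full light transport is what lets the optimized texture remain adversarial under realistic lighting, reflection, and refraction.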
Related papers
- FetchBot: Object Fetching in Cluttered Shelves via Zero-Shot Sim2Real (2025-02-25)
  FetchBot is a framework designed to enable zero-shot generalizable and safety-aware object fetching from cluttered shelves in real-world settings.
  To address data scarcity, we propose an efficient voxel-based method for generating diverse simulated cluttered shelf scenes.
  To tackle the challenge of limited views, we design a novel architecture for learning multi-view representations.
- Why Don't You Clean Your Glasses? Perception Attacks with Dynamic Optical Perturbations (2023-07-24)
  Adapting adversarial attacks to the physical world is desirable for the attacker, as it removes the need to compromise digital systems.
  We present EvilEye, a man-in-the-middle perception attack that leverages transparent displays to generate dynamic physical adversarial examples.
- CrowdSim2: an Open Synthetic Benchmark for Object Detectors (2023-04-11)
  This paper presents and publicly releases CrowdSim2, a new synthetic collection of images suitable for people and vehicle detection.
  It consists of thousands of images gathered from various synthetic scenarios resembling the real world, in which several factors of interest were varied.
  We exploit this new benchmark as a testing ground for state-of-the-art detectors, showing that our simulated scenarios can be a valuable tool for measuring their performance in a controlled environment.
- Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks (2022-09-20)
  We tackle the more practical problem of generating adversarial perturbations using multi-object (i.e., multiple dominant objects) images.
  We propose a novel generative attack (called Local Patch Difference or LPD-Attack) in which a novel contrastive loss function exploits these local differences in the feature space of multi-object scenes.
  Our approach outperforms baseline generative attacks with highly transferable perturbations when evaluated under different white-box and black-box settings.
- GAMA: Generative Adversarial Multi-Object Scene Attacks (2022-09-20)
  This paper presents the first approach to using generative models for adversarial attacks on multi-object scenes.
  We call this attack approach Generative Adversarial Multi-object scene Attacks (GAMA).
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon (2022-03-08)
  We study a new type of optical adversarial example in which the perturbations are generated by a very common natural phenomenon: shadows.
  We extensively evaluate the effectiveness of this new attack in both simulated and real-world environments.
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving (2022-01-05)
  The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks.
  This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
  A novel loss function is proposed to improve attackers' ability to induce pixel misclassification.
- Nonprehensile Riemannian Motion Predictive Control (2021-11-15)
  We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
  We produce a closed-loop controller that reactively pushes objects in a continuous action space.
  We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
- DPA: Learning Robust Physical Adversarial Camouflages for Object Detectors (2021-09-01)
  We propose the Dense Proposals Attack (DPA) to learn robust, physical, and targeted adversarial camouflages for detectors.
  The camouflages are robust because they remain adversarial when filmed from arbitrary viewpoints and under different illumination conditions.
  We build a virtual 3D scene using the Unity simulation engine to fairly and reproducibly evaluate different physical attacks.
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks (2021-08-13)
  In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
  This paper presents an in-depth evaluation of the robustness of popular semantic segmentation models by testing the effects of both digital and real-world adversarial patches.
- CCA: Exploring the Possibility of Contextual Camouflage Attack on Object Detection (2020-08-19)
  We propose a contextual camouflage attack (CCA) algorithm to influence the performance of object detectors.
  In this paper, we use an evolutionary search strategy and adversarial machine learning in interaction with a photo-realistic simulated environment.
  The proposed camouflages are validated as effective against most state-of-the-art object detectors.
This list is automatically generated from the titles and abstracts of the papers on this site.