Invisible Perturbations: Physical Adversarial Examples Exploiting the
Rolling Shutter Effect
- URL: http://arxiv.org/abs/2011.13375v3
- Date: Sun, 18 Apr 2021 16:21:42 GMT
- Title: Invisible Perturbations: Physical Adversarial Examples Exploiting the
Rolling Shutter Effect
- Authors: Athena Sayles, Ashish Hooda, Mohit Gupta, Rahul Chatterjee, Earlence
Fernandes
- Abstract summary: We generate, for the first time, physical adversarial examples that are invisible to human eyes.
We demonstrate how an attacker can craft a modulated light signal that adversarially illuminates a scene and causes targeted misclassifications.
We conduct a range of simulation and physical experiments with LEDs, demonstrating targeted attack rates up to 84%.
- Score: 16.876798038844445
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physical adversarial examples for camera-based computer vision have so far
been achieved through visible artifacts -- a sticker on a Stop sign, colorful
borders around eyeglasses or a 3D printed object with a colorful texture. An
implicit assumption here is that the perturbations must be visible so that a
camera can sense them. By contrast, we contribute a procedure to generate, for
the first time, physical adversarial examples that are invisible to human eyes.
Rather than modifying the victim object with visible artifacts, we modify light
that illuminates the object. We demonstrate how an attacker can craft a
modulated light signal that adversarially illuminates a scene and causes
targeted misclassifications on a state-of-the-art ImageNet deep learning model.
Concretely, we exploit the radiometric rolling shutter effect in commodity
cameras to create precise striping patterns that appear on images. To human
eyes, it appears like the object is illuminated, but the camera creates an
image with stripes that will cause ML models to output the attacker-desired
classification. We conduct a range of simulation and physical experiments with
LEDs, demonstrating targeted attack rates up to 84%.
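To make the mechanism concrete, here is a minimal simulation sketch (not the authors' code; all parameter values are illustrative assumptions) of why a rapidly modulated light source shows up as stripes under a rolling shutter: each image row is exposed during a slightly later time window, so rows whose exposure overlaps the light's "off" phase come out darker than rows exposed during the "on" phase, even though the flicker is far too fast for a human observer to see.

```python
import numpy as np

def rolling_shutter_stripes(scene, row_readout_s=30e-6, exposure_s=1e-3,
                            mod_freq_hz=900.0, duty=0.5, depth=0.6):
    """Simulate a rolling-shutter camera imaging a scene lit by an
    on/off-modulated light source (illustrative parameters only).

    scene: HxW (or HxWx3) array of reflectance values in [0, 1].
    Row r is exposed over [r*row_readout_s, r*row_readout_s + exposure_s];
    its brightness is scaled by the average illumination in that window.
    """
    h = scene.shape[0]
    t = np.linspace(0.0, exposure_s, 256)          # time samples within one row exposure
    row_start = np.arange(h) * row_readout_s       # per-row exposure start times
    # Square-wave illumination: full brightness when "on", dimmed by `depth` when "off".
    phase = ((row_start[:, None] + t[None, :]) * mod_freq_hz) % 1.0
    light = np.where(phase < duty, 1.0, 1.0 - depth)
    row_gain = light.mean(axis=1)                  # average light each row actually receives
    gain = row_gain[:, None] if scene.ndim == 2 else row_gain[:, None, None]
    return np.clip(scene * gain, 0.0, 1.0)

# Example: a uniform gray scene comes out horizontally striped.
striped = rolling_shutter_stripes(np.full((480, 640), 0.5))
```

In the attack described in the abstract, it is this modulation signal that the attacker tunes so that the resulting stripe pattern pushes the classifier toward a chosen target label, while the scene still appears steadily lit to the human eye.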
Related papers
- Transparency Attacks: How Imperceptible Image Layers Can Fool AI Perception [0.0]
This paper investigates a novel algorithmic vulnerability when imperceptible image layers confound vision models into arbitrary label assignments and captions.
We explore image preprocessing methods to introduce stealth transparency, which triggers AI misinterpretation of what the human eye perceives.
The stealth transparency confounds established vision systems, with effects ranging from evading facial recognition, surveillance, digital watermarking, and content filtering to corrupting dataset curation, automotive and drone autonomy, forensic evidence, and retail product classification.
arXiv Detail & Related papers (2024-01-29T00:52:01Z)
- Relightable Neural Actor with Intrinsic Decomposition and Pose Control [80.06094206522668]
We propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relighted.
For training, our method solely requires a multi-view recording of the human under a known, but static lighting condition.
To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors.
arXiv Detail & Related papers (2023-12-18T14:30:13Z)
- Why Don't You Clean Your Glasses? Perception Attacks with Dynamic Optical Perturbations [17.761200546223442]
Adapting adversarial attacks to the physical world is desirable for the attacker, as this removes the need to compromise digital systems.
We present EvilEye, a man-in-the-middle perception attack that leverages transparent displays to generate dynamic physical adversarial examples.
arXiv Detail & Related papers (2023-07-24T21:16:38Z)
- AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision Systems [5.476763798688862]
"printed adversarial attacks", known as physical adversarial attacks, can successfully mislead perception models.
We propose a camera-based adversarial attack capable of fooling camera-based perception systems over all objects of the same class.
We achieve a drop in average model accuracy of more than 45% on VGG19 for ImageNet and 40% on ResNet34 for Caltech.
arXiv Detail & Related papers (2023-03-02T15:14:46Z)
- Totems: Physical Objects for Verifying Visual Integrity [68.55682676677046]
We introduce a new approach to image forensics: placing physical refractive objects, which we call totems, into a scene so as to protect any photograph taken of that scene.
Totems bend and redirect light rays, thus providing multiple, albeit distorted, views of the scene within a single image.
A defender can use these distorted totem pixels to detect if an image has been manipulated.
arXiv Detail & Related papers (2022-09-26T21:19:37Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- Attack to Fool and Explain Deep Networks [59.97135687719244]
We counter-argue by providing evidence of human-meaningful patterns in adversarial perturbations.
Our major contribution is a novel pragmatic adversarial attack that is subsequently transformed into a tool to interpret the visual models.
arXiv Detail & Related papers (2021-06-20T03:07:36Z)
- They See Me Rollin': Inherent Vulnerability of the Rolling Shutter in CMOS Image Sensors [21.5487020124302]
A camera's electronic rolling shutter can be exploited to inject fine-grained image disruptions.
We show how an adversary can modulate a laser to hide up to 75% of objects perceived by state-of-the-art detectors.
Our results indicate that rolling shutter attacks can substantially reduce the performance and reliability of vision-based intelligent systems.
arXiv Detail & Related papers (2021-01-25T11:14:25Z)
- Relighting Images in the Wild with a Self-Supervised Siamese Auto-Encoder [62.580345486483886]
We propose a self-supervised method for image relighting of single view images in the wild.
The method is based on an auto-encoder which deconstructs an image into two separate encodings.
We train our model on large-scale datasets such as YouTube-8M and CelebA.
arXiv Detail & Related papers (2020-12-11T16:08:50Z)
- Face Forgery Detection by 3D Decomposition [72.22610063489248]
We consider a face image as the production of the intervention of the underlying 3D geometry and the lighting environment.
By disentangling the face image into 3D shape, common texture, identity texture, ambient light, and direct light, we find the devil lies in the direct light and the identity texture.
We propose to utilize facial detail, which is the combination of direct light and identity texture, as the clue to detect the subtle forgery patterns.
arXiv Detail & Related papers (2020-11-19T09:25:44Z)
- GhostImage: Remote Perception Attacks against Camera-based Image Classification Systems [6.637193297008101]
In vision-based object classification systems, imaging sensors perceive the environment, and machine learning is then used to detect and classify objects for decision-making purposes.
We demonstrate how the perception domain can be remotely and unobtrusively exploited to enable an attacker to create spurious objects or alter an existing object.
arXiv Detail & Related papers (2020-01-21T21:58:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.