Physical Passive Patch Adversarial Attacks on Visual Odometry Systems
- URL: http://arxiv.org/abs/2207.05729v1
- Date: Mon, 11 Jul 2022 14:41:06 GMT
- Title: Physical Passive Patch Adversarial Attacks on Visual Odometry Systems
- Authors: Yaniv Nemcovsky, Matan Yaakoby, Alex M. Bronstein and Chaim Baskin
- Abstract summary: We study patch adversarial attacks on visual odometry-based autonomous navigation systems.
We show for the first time that the error margin of a visual odometry model can be significantly increased by deploying patch adversarial attacks in the scene.
- Score: 6.391337032993737
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep neural networks are known to be susceptible to adversarial perturbations
-- small perturbations that alter the output of the network and exist under
strict norm limitations. While such perturbations are usually discussed as
tailored to a specific input, a universal perturbation can be constructed to
alter the model's output on a set of inputs. Universal perturbations present a
more realistic case of adversarial attacks, as awareness of the model's exact
input is not required. In addition, the universal attack setting raises the
subject of generalization to unseen data, where given a set of inputs, the
universal perturbations aim to alter the model's output on out-of-sample data.
In this work, we study physical passive patch adversarial attacks on visual
odometry-based autonomous navigation systems. A visual odometry system aims to
infer the relative camera motion between two corresponding viewpoints, and is
frequently used by vision-based autonomous navigation systems to estimate their
state. For such navigation systems, a patch adversarial perturbation poses a
severe security issue, as it can be used to mislead a system onto some
collision course. To the best of our knowledge, we show for the first time that
the error margin of a visual odometry model can be significantly increased by
deploying patch adversarial attacks in the scene. We provide evaluation on
synthetic closed-loop drone navigation data and demonstrate that a comparable
vulnerability exists in real data. A reference implementation of the proposed
method and the reported experiments is provided at
https://github.com/patchadversarialattacks/patchadversarialattacks.
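The attack described in the abstract optimizes a single patch over many image pairs so that the model's predicted relative motion deviates from the ground truth. The following is a minimal sketch of that idea; `vo_model`, `apply_patch`, the data loader, and all hyperparameters are hypothetical placeholders, not the authors' implementation (see the linked repository for the reference code).

```python
import torch

def apply_patch(img, patch, top=20, left=20):
    # Naive composition: paste the patch at a fixed image location.
    # A physical attack would instead project the patch into the scene geometry.
    out = img.clone()
    h, w = patch.shape[-2:]
    out[..., top:top + h, left:left + w] = patch
    return out

def optimize_universal_patch(vo_model, loader, patch_hw=(100, 100),
                             steps=200, lr=1e-2, device="cpu"):
    # Patch kept in [0, 1] via a sigmoid over unconstrained logits.
    patch_logits = torch.zeros(3, *patch_hw, device=device, requires_grad=True)
    opt = torch.optim.Adam([patch_logits], lr=lr)

    vo_model.eval()
    for _ in range(steps):
        for img_a, img_b, rel_pose_gt in loader:  # consecutive frame pairs + ground-truth motion
            patch = torch.sigmoid(patch_logits)
            adv_a = apply_patch(img_a.to(device), patch)
            adv_b = apply_patch(img_b.to(device), patch)

            pred_pose = vo_model(adv_a, adv_b)  # predicted relative camera motion
            # Maximize deviation from ground truth => minimize its negative.
            loss = -torch.norm(pred_pose - rel_pose_gt.to(device), dim=-1).mean()

            opt.zero_grad()
            loss.backward()
            opt.step()
    return torch.sigmoid(patch_logits).detach()
```

Because the same patch is optimized over a whole set of image pairs, it acts as a universal perturbation and can then be evaluated on out-of-sample trajectories, as the abstract describes.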
Related papers
- Fool the Hydra: Adversarial Attacks against Multi-view Object Detection
Systems [3.4673556247932225]
Adversarial patches exemplify the tangible manifestation of the threat posed by adversarial attacks on Machine Learning (ML) models in real-world scenarios.
Multi-view object detection systems are able to combine data from multiple views and reach reliable detection results even in difficult environments.
Despite their importance in real-world vision applications, the vulnerability of multi-view systems to adversarial patches has not been sufficiently investigated.
arXiv Detail & Related papers (2023-11-30T20:11:44Z) - A Geometrical Approach to Evaluate the Adversarial Robustness of Deep
Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z) - How adversarial attacks can disrupt seemingly stable accurate classifiers [76.95145661711514]
Adversarial attacks dramatically change the output of an otherwise accurate learning system using a seemingly inconsequential modification to a piece of input data.
Here, we show that this may be seen as a fundamental feature of classifiers working with high dimensional input data.
We introduce a simple, generic, and generalisable framework for which key behaviours observed in practical systems arise with high probability.
arXiv Detail & Related papers (2023-09-07T12:02:00Z) - Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z) - ExploreADV: Towards exploratory attack for Neural Networks [0.33302293148249124]
ExploreADV is a general and flexible adversarial attack system that is capable of modeling regional and imperceptible attacks.
We show that our system offers users good flexibility to focus on sub-regions of inputs, explore imperceptible perturbations and understand the vulnerability of pixels/regions to adversarial attacks.
arXiv Detail & Related papers (2023-01-01T07:17:03Z) - Universal Adversarial Attack on Deep Learning Based Prognostics [0.0]
We present the concept of a universal adversarial perturbation, a special imperceptible noise that fools regression-based remaining useful life (RUL) prediction models.
We show that adding the universal adversarial perturbation to any instance of the input data increases the error in the model's predicted output.
We further study the effect of varying the perturbation strength on RUL prediction models and find that model accuracy decreases as the perturbation strength increases (an illustrative strength sweep is sketched after this list).
arXiv Detail & Related papers (2021-09-15T08:05:16Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks avoid this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - Anomaly Detection in Unsupervised Surveillance Setting Using Ensemble of
Multimodal Data with Adversarial Defense [0.3867363075280543]
In this paper, an ensemble detection mechanism is proposed which estimates the degree of abnormality by analyzing real-time image and IMU (Inertial Measurement Unit) sensor data.
The proposed method performs satisfactorily on the IEEE SP Cup-2020 dataset with an accuracy of 97.8%.
arXiv Detail & Related papers (2020-07-17T20:03:02Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
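As referenced in the "Universal Adversarial Attack on Deep Learning Based Prognostics" entry above, the reported strength-versus-error relationship can be illustrated by scaling a fixed universal perturbation and tracking regression error. The snippet below is only an illustrative sketch; `rul_model`, `delta`, and the chosen strengths are assumed placeholders, not that paper's implementation.

```python
import torch

def error_vs_strength(rul_model, inputs, targets, delta,
                      strengths=(0.0, 0.01, 0.05, 0.1)):
    """Scale a fixed universal perturbation and record mean absolute error."""
    results = {}
    rul_model.eval()
    with torch.no_grad():
        for eps in strengths:
            preds = rul_model(inputs + eps * delta)  # same delta added to every instance
            results[eps] = (preds - targets).abs().mean().item()
    return results
```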
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.