What Causes Optical Flow Networks to be Vulnerable to Physical Adversarial Attacks
- URL: http://arxiv.org/abs/2103.16255v1
- Date: Tue, 30 Mar 2021 11:12:46 GMT
- Title: What Causes Optical Flow Networks to be Vulnerable to Physical Adversarial Attacks
- Authors: Simon Schrodi, Tonmoy Saikia, Thomas Brox
- Abstract summary: Recent work demonstrated the lack of robustness of optical flow networks to physical, patch-based adversarial attacks.
We show that the lack of robustness is rooted in the classical aperture problem of optical flow estimation.
We show how these mistakes can be rectified in order to make optical flow networks robust to physical, patch-based attacks.
- Score: 45.55988088321407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work demonstrated the lack of robustness of optical flow
networks to physical, patch-based adversarial attacks. The possibility of
physically attacking a basic component of automotive systems is a cause for
serious concern. In this paper, we analyze the cause of the problem and show
that the lack of robustness is rooted in the classical aperture problem of
optical flow estimation, in combination with poor choices in the details of
the network architecture. We show how these mistakes can be rectified to make
optical flow networks robust to physical, patch-based attacks (a short
numerical illustration of the aperture problem follows).
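The aperture problem invoked above has a compact numerical illustration. Under brightness constancy, each pixel contributes one constraint Ix*u + Iy*v + It = 0; in a window where all spatial gradients share one direction, the stacked system is rank-deficient, so only the flow component along the gradient (the normal flow) is determined. A minimal NumPy sketch with synthetic values:

```python
import numpy as np

# Brightness-constancy constraint per pixel: Ix*u + Iy*v + It = 0.
# Stack the constraints over a window and solve for the flow (u, v).
def solve_window_flow(Ix, Iy, It):
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # (N, 2) gradients
    b = -It.ravel()
    flow, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    return flow, rank

# Window on a vertical edge: every spatial gradient points horizontally.
Ix = np.ones((5, 5))         # horizontal gradient only
Iy = np.zeros((5, 5))        # no vertical gradient
It = -0.7 * np.ones((5, 5))  # edge shifted right by 0.7 px (synthetic)

flow, rank = solve_window_flow(Ix, Iy, It)
print(flow, rank)  # ~[0.7, 0.0], rank 1

# rank 1: the vertical flow component is unconstrained by this window,
# so any v fits the data equally well. This is the aperture problem; a
# network must aggregate surrounding context to resolve it, and a
# physical patch can hijack exactly that aggregation.
```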
Related papers
- Attack on Scene Flow using Point Clouds [9.115508086522887]
This paper introduces adversarial white-box attacks specifically tailored for scene flow networks.
Experimental results show that the generated adversarial examples cause up to 33.7 relative degradation in average end-point error (a sketch of the EPE metric follows this entry).
The study also reveals that attacks targeting only one dimension or color channel of the point clouds still have a significant impact on average end-point error.
arXiv Detail & Related papers (2024-04-21T11:21:27Z)
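For context on the metric in the scene-flow entry above: average end-point error (EPE) is the mean Euclidean distance between predicted and ground-truth flow vectors, and relative degradation compares the attacked EPE with the clean one. A self-contained sketch; the shapes and numbers are illustrative assumptions, not the paper's data:

```python
import numpy as np

def average_epe(pred, gt):
    # Mean Euclidean distance between predicted and true flow vectors;
    # shape (N, 3) for 3D scene flow, (N, 2) would be optical flow.
    return np.linalg.norm(pred - gt, axis=1).mean()

rng = np.random.default_rng(0)
gt = rng.normal(size=(1000, 3))                    # synthetic ground truth
clean = gt + 0.05 * rng.normal(size=gt.shape)      # unattacked prediction
attacked = gt + 0.08 * rng.normal(size=gt.shape)   # degraded prediction

clean_epe, attacked_epe = average_epe(clean, gt), average_epe(attacked, gt)
print((attacked_epe - clean_epe) / clean_epe)      # relative degradation
```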
- Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of Conjugate Variables in System Attacks [54.565579874913816]
Neural networks demonstrate inherent vulnerability to small, non-random perturbations that emerge as adversarial attacks.
A mathematical congruence between this mechanism and the uncertainty principle of quantum physics casts light on a previously unanticipated interdisciplinary connection (a standard gradient-based perturbation sketch follows this entry).
arXiv Detail & Related papers (2024-02-16T02:11:27Z)
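The entry above ties adversarial perturbations to conjugate variables, with the loss gradient with respect to the input playing that role. The canonical example of such a small, non-random perturbation is the fast gradient sign method (FGSM), sketched below in PyTorch; this is the standard attack such analyses concern, not the paper's own contribution:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    # One gradient step on the input: the loss gradient w.r.t. x is the
    # "non-random" direction that the quantum-inspired analysis studies.
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Toy usage on a hypothetical MNIST-sized classifier.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
```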
- Localizing Anomalies in Critical Infrastructure using Model-Based Drift Explanations [5.319765271848658]
We analyze the effects of anomalies on the dynamics of critical infrastructure systems by modeling the networks as Bayesian networks.
In particular, we argue that model-based explanations of concept drift are a promising tool for localizing anomalies.
To demonstrate that the methodology applies to critical infrastructure more generally, we show that the derived technique can localize sensor faults in power systems (a toy localization sketch follows this entry).
arXiv Detail & Related papers (2023-10-24T13:33:19Z)
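A toy version of the model-based localization idea, with per-sensor linear models standing in for the paper's Bayesian networks (a deliberate simplification; all data below is synthetic):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Model each sensor as a function of a shared system input, then watch
# each model's residuals on live data: a localized fault drifts only
# the residuals of the affected sensor's model.
rng = np.random.default_rng(1)
u = rng.normal(size=(500, 1))             # shared input, e.g. demand
gains = np.array([1.0, 0.5, 2.0, -1.0])   # per-sensor response
ref = u * gains + 0.1 * rng.normal(size=(500, 4))

live = ref.copy()
live[:, 2] += 1.5                         # fault injected at sensor 2

for s in range(4):
    m = LinearRegression().fit(u, ref[:, s])
    drift = np.abs(live[:, s] - m.predict(u)).mean()
    print(f"sensor {s}: mean residual {drift:.2f}")  # sensor 2 stands out
```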
- Self-Healing Robust Neural Networks via Closed-Loop Control [23.360913637445964]
A typical self-healing mechanism is the human immune system.
This paper considers the post-training self-healing of a neural network.
We propose a closed-loop control formulation to automatically detect and fix errors caused by various attacks or perturbations (a generic closed-loop sketch follows this entry).
arXiv Detail & Related papers (2022-06-26T20:25:35Z)
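One generic way to read the closed-loop formulation: treat a small correction signal on an intermediate embedding as the actuator and run a few gradient steps at inference against an auxiliary "health" loss. The sketch below illustrates that flavor only; the networks, the autoencoder health model, and the step sizes are all assumptions, not the paper's formulation:

```python
import torch

# Untrained stand-ins: an encoder, a task head, and an autoencoder that
# would be trained to reconstruct healthy (clean) embeddings.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 64))
head = torch.nn.Linear(64, 10)
auto = torch.nn.Sequential(torch.nn.Linear(64, 16), torch.nn.ReLU(),
                           torch.nn.Linear(16, 64))

def self_heal(x, steps=5, lr=0.1):
    z = encoder(x).detach()
    u = torch.zeros_like(z, requires_grad=True)   # control signal
    opt = torch.optim.SGD([u], lr=lr)
    for _ in range(steps):                        # the closed loop
        health = ((auto(z + u) - (z + u)) ** 2).mean()  # reconstruction gap
        opt.zero_grad()
        health.backward()
        opt.step()
    return head(z + u.detach())                   # corrected prediction

logits = self_heal(torch.rand(2, 1, 28, 28))
```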
- Leaky Nets: Recovering Embedded Neural Network Models and Inputs through Simple Power and Timing Side-Channels -- Attacks and Defenses [4.014351341279427]
We study the side-channel vulnerabilities of embedded neural network implementations by recovering their parameters.
We demonstrate our attacks on popular micro-controller platforms over networks of different precisions.
Countermeasures against timing-based attacks are implemented and their overheads are analyzed (a toy constant-time example follows this entry).
arXiv Detail & Related papers (2021-03-26T21:28:13Z)
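On the countermeasure side: timing attacks exploit execution time that depends on secret data, and the standard fix is to remove data-dependent control flow. A toy Python illustration of the principle (real countermeasures on micro-controllers are implemented at the C or assembly level):

```python
import numpy as np

def relu_leaky_timing(x):
    # Per-element branch: execution path, and hence timing, depends on
    # the sign pattern of the data flowing through the network.
    out = np.empty_like(x)
    for i, v in enumerate(x):
        out[i] = v if v > 0 else 0.0
    return out

def relu_branchless(x):
    # Same result with uniform arithmetic and no data-dependent branch.
    return np.maximum(x, 0.0)

x = np.array([-1.0, 2.0, -3.0, 4.0])
assert np.allclose(relu_leaky_timing(x), relu_branchless(x))
```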
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing activation profiles can quickly pinpoint exploited areas in a model (a minimal activation-profiling sketch follows this entry).
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
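Activation profiles of the kind this entry visualizes can be collected with standard PyTorch forward hooks; comparing clean and adversarial profiles layer by layer is one way to spot exploited areas. A generic sketch, where the model and the per-layer summary statistic are assumptions, not the paper's tool:

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(784, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 10))

profiles = {}

def hook(name):
    def fn(module, inputs, output):
        # One scalar per layer; richer statistics work the same way.
        profiles[name] = output.detach().abs().mean().item()
    return fn

for name, module in model.named_modules():
    if isinstance(module, (torch.nn.Linear, torch.nn.ReLU)):
        module.register_forward_hook(hook(name))

model(torch.rand(8, 1, 28, 28))
print(profiles)  # compare against the profile of adversarial inputs
```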
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights (a joint-perturbation sketch follows this entry).
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
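A minimal reading of "joint perturbations to data inputs as well as model weights": evaluate the loss while both are perturbed at once. Random perturbations keep the sketch short, whereas the paper formalizes the worst case; the model and magnitudes are assumptions:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(20, 5)
x, y = torch.randn(16, 20), torch.randint(0, 5, (16,))

def joint_perturbed_loss(eps_x=0.1, eps_w=0.01):
    with torch.no_grad():
        w, b = model.weight.clone(), model.bias.clone()
        model.weight += eps_w * torch.randn_like(model.weight)  # weights
        model.bias += eps_w * torch.randn_like(model.bias)
        x_p = x + eps_x * torch.randn_like(x)                   # inputs
        loss = F.cross_entropy(model(x_p), y).item()
        model.weight.copy_(w)   # restore the unperturbed parameters
        model.bias.copy_(b)
    return loss

print(joint_perturbed_loss())   # compare with the unperturbed loss
```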
- Problems of representation of electrocardiograms in convolutional neural networks [58.720142291102135]
We show that these representation problems are systemic in nature.
They stem from how convolutional networks handle composite objects whose parts are not rigidly fixed but have significant mobility.
arXiv Detail & Related papers (2020-12-01T14:02:06Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)