Detection Defenses: An Empty Promise against Adversarial Patch Attacks
on Optical Flow
- URL: http://arxiv.org/abs/2310.17403v2
- Date: Thu, 2 Nov 2023 08:28:29 GMT
- Title: Detection Defenses: An Empty Promise against Adversarial Patch Attacks
on Optical Flow
- Authors: Erik Scheurer, Jenny Schmalfuss, Alexander Lis and Andrés Bruhn
- Abstract summary: Adversarial patches undermine the reliability of optical flow predictions when placed in arbitrary scene locations.
Potential remedies are defense strategies that detect and remove adversarial patches, but their influence on the underlying motion prediction has not been investigated.
We implement defense-aware attacks to investigate whether current defenses are able to withstand attacks that take the defense mechanism into account.
- Score: 46.2482873419289
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial patches undermine the reliability of optical flow predictions
when placed in arbitrary scene locations. Therefore, they pose a realistic
threat to real-world motion detection and its downstream applications.
Potential remedies are defense strategies that detect and remove adversarial
patches, but their influence on the underlying motion prediction has not been
investigated. In this paper, we thoroughly examine the currently available
detect-and-remove defenses ILP and LGS for a wide selection of state-of-the-art
optical flow methods, and illuminate their side effects on the quality and
robustness of the final flow predictions. In particular, we implement
defense-aware attacks to investigate whether current defenses are able to
withstand attacks that take the defense mechanism into account. Our experiments
yield two surprising results: Detect-and-remove defenses not only lower the
optical flow quality on benign scenes; in doing so, they also harm the
robustness under patch attacks for all tested optical flow methods except
FlowNetC. As currently employed detect-and-remove defenses fail to deliver the
promised adversarial robustness for optical flow, they evoke a false sense of
security. The code is available at
https://github.com/cv-stuttgart/DetectionDefenses.
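For intuition, LGS (Local Gradients Smoothing) flags image regions whose local gradients are unusually dense, as adversarial patches typically are, and suppresses them before the flow network sees the frames. Below is a minimal NumPy sketch of that detect-and-remove idea, not the paper's implementation (see the repository above); the block size, threshold, and suppression strength are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def lgs_preprocess(frame, block=15, threshold=0.15, strength=2.3):
    """LGS-style detect-and-remove step (illustrative sketch).

    frame: H x W x 3 float array in [0, 1]. Pixels inside regions of
    high gradient density are scaled down before flow estimation.
    """
    gray = frame.mean(axis=-1)
    grad = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))
    grad /= grad.max() + 1e-8

    # Block-wise gradient density: patches appear as compact high-density areas.
    density = uniform_filter(grad, size=block)
    mask = density > threshold

    # Suppress suspected patch pixels rather than cutting them out entirely.
    weight = np.where(mask, np.clip(1.0 - strength * density, 0.0, 1.0), 1.0)
    return frame * weight[..., None]

# Both frames are filtered before the (hypothetical) flow network runs:
# flow = flow_net(lgs_preprocess(frame1), lgs_preprocess(frame2))
```

A defense-aware attack in the sense above then optimizes the patch through this preprocessing (or a differentiable surrogate of it), so the patch is built to survive detection; this is the setting in which the tested defenses break down.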
Related papers
- Model Agnostic Defense against Adversarial Patch Attacks on Object Detection in Unmanned Aerial Vehicles [0.27309692684728615]
Object detection forms a key component in Unmanned Aerial Vehicles (UAVs); adversarial patch attacks on an onboard object detector can severely impair the performance of upstream tasks.
This paper proposes a novel model-agnostic defense mechanism against the threat of adversarial patch attacks.
arXiv Detail & Related papers (2024-05-29T15:19:07Z)
- Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification [22.078088272837068]
Federated Learning (FL) systems are susceptible to adversarial attacks.
Current defense methods are often impractical for real-world FL systems.
We propose a novel anomaly detection strategy that is designed for real-world FL systems.
arXiv Detail & Related papers (2023-10-06T07:09:05Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables (see the sketch after this list).
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Defending Against Person Hiding Adversarial Patch Attack with a Universal White Frame [28.128458352103543]
High-performance object detection networks are vulnerable to adversarial patch attacks.
Person-hiding attacks are emerging as a serious problem in many safety-critical applications.
We propose a novel defense strategy that mitigates a person-hiding attack by optimizing defense patterns.
arXiv Detail & Related papers (2022-04-27T15:18:08Z) - Consistent Semantic Attacks on Optical Flow [3.058685580689605]
We present a novel approach for semantically targeted adversarial attacks on Optical Flow.
Our method also helps to hide the attacker's intent in the output.
We demonstrate the effectiveness of our attack on subsequent tasks that depend on the optical flow.
arXiv Detail & Related papers (2021-11-16T14:05:07Z) - We Can Always Catch You: Detecting Adversarial Patched Objects WITH or
WITHOUT Signature [3.5272597442284104]
In this paper, we explore the problem of detecting adversarial patch attacks on object detection.
A fast signature-based defense method is proposed and demonstrated to be effective.
The newly generated adversarial patches can successfully evade the proposed signature-based defense.
We present a novel signature-independent detection method based on the internal content semantics consistency.
arXiv Detail & Related papers (2021-06-09T17:58:08Z) - Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z) - Robust Tracking against Adversarial Attacks [69.59717023941126]
We first attempt to generate adversarial examples on top of video sequences to improve the tracking robustness against adversarial attacks.
We apply the proposed adversarial attack and defense approaches to state-of-the-art deep tracking algorithms.
arXiv Detail & Related papers (2020-07-20T08:05:55Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)