Consistent Semantic Attacks on Optical Flow
- URL: http://arxiv.org/abs/2111.08485v1
- Date: Tue, 16 Nov 2021 14:05:07 GMT
- Title: Consistent Semantic Attacks on Optical Flow
- Authors: Tom Koren, Lior Talker, Michael Dinerstein, Roy J Jevnisek
- Abstract summary: We present a novel approach for semantically targeted adversarial attacks on Optical Flow.
Our method helps to hide the attacker's intent in the output as well.
We demonstrate the effectiveness of our attack on subsequent tasks that depend on the optical flow.
- Score: 3.058685580689605
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a novel approach for semantically targeted adversarial attacks on
Optical Flow. In such attacks the goal is to corrupt the flow predictions of a
specific object category or instance. Usually, an attacker seeks to hide the
adversarial perturbations in the input. However, a quick scan of the output
reveals the attack. In contrast, our method helps to hide the attacker's intent
in the output as well. We achieve this thanks to a regularization term that
encourages off-target consistency. We perform extensive tests on leading
optical flow models to demonstrate the benefits of our approach in both
white-box and black-box settings. Also, we demonstrate the effectiveness of our
attack on subsequent tasks that depend on the optical flow.
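The combination of a targeted attack term with an off-target consistency regularizer can be made concrete in a few lines. Below is a minimal PyTorch sketch of one way such an objective could look; the function name, the zero-flow target suggestion, and the weight beta are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def targeted_flow_attack_loss(flow_adv, flow_clean, mask, target_flow, beta=1.0):
    """Illustrative objective for a semantically targeted flow attack.

    flow_adv    -- (B, 2, H, W) flow predicted on the perturbed frames
    flow_clean  -- (B, 2, H, W) flow predicted on the clean frames (detached)
    mask        -- (B, 1, H, W) 1 on the attacked object, 0 elsewhere
    target_flow -- (B, 2, H, W) flow the attacker wants on the object,
                   e.g. torch.zeros_like(flow_clean) to "freeze" its motion
    """
    # Attack term: drive the flow inside the target mask toward the goal.
    attack = ((flow_adv - target_flow) * mask).pow(2).mean()
    # Consistency term: keep the flow outside the mask close to the clean
    # prediction, so a quick scan of the output does not reveal the attack.
    consistency = ((flow_adv - flow_clean) * (1.0 - mask)).pow(2).mean()
    return attack + beta * consistency
```

An attacker would minimize this loss over an input perturbation by gradient descent, re-running the flow network at each step.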
Related papers
- Detection Defenses: An Empty Promise against Adversarial Patch Attacks on Optical Flow [46.2482873419289]
Adversarial patches undermine the reliability of optical flow predictions when placed in arbitrary scene locations.
Potential remedies are defense strategies that detect and remove adversarial patches, but their influence on the underlying motion prediction has not been investigated.
We implement defense-aware attacks to investigate whether current defenses are able to withstand attacks that take the defense mechanism into account.
arXiv Detail & Related papers (2023-10-26T13:56:12Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors into fabricating extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
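A context-consistency check can be as simple as scoring each detected label against the other labels in the scene. The toy sketch below flags scenes containing label pairs that rarely co-occur; the co-occurrence table and threshold are invented for illustration, not the defense's actual statistics.

```python
# Toy co-occurrence statistics (assumed values, for illustration only).
COOCCUR = {("car", "road"): 0.90, ("car", "boat"): 0.01}

def is_context_consistent(labels, threshold=0.05):
    """Flag a scene if any pair of detected labels rarely co-occurs."""
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            p = COOCCUR.get((a, b), COOCCUR.get((b, a), 0.0))
            if p < threshold:
                return False  # likely an attack or a misdetection
    return True
```

The zero-query attack's contribution is crafting perturbations whose fabricated detections still pass this kind of check.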
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- A Perturbation Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow [0.0]
Perturbation Constrained Flow Attack (PCFA) is a novel adversarial attack that emphasizes destructivity over applicability as a real-world attack.
Our experiments demonstrate PCFA's applicability in white- and black-box settings, and show that it finds stronger adversarial samples for optical flow than previous attacking frameworks.
We provide the first common ranking of optical flow methods in the literature considering both prediction quality and adversarial robustness, indicating that high quality methods are not necessarily robust.
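PCFA's defining constraint is a hard bound on the perturbation's norm rather than on its visibility. A standard way to enforce such a bound, sketched below, is to project the perturbation back onto a global L2 ball after every gradient step; the exact radius handling is a generic assumption, not PCFA's published procedure.

```python
import torch

def project_l2(delta: torch.Tensor, eps: float) -> torch.Tensor:
    """Project a perturbation onto the L2 ball of radius eps."""
    norm = delta.flatten().norm(p=2)
    if norm > eps:
        delta = delta * (eps / norm)  # rescale onto the ball's surface
    return delta
```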
arXiv Detail & Related papers (2022-03-24T17:10:26Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack in both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking [70.14487738649373]
Adversarial attacks arise from the vulnerability of deep neural networks to input samples injected with imperceptible perturbations.
We propose a decision-based black-box attack method for visual object tracking.
We validate the proposed IoU attack on state-of-the-art deep trackers.
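The attack's only feedback is the tracker's predicted bounding box, scored against a reference box by intersection over union. A minimal IoU helper is sketched below; the decision-based loop around it (which keeps noise increments that lower this score) is omitted, and the corner-format box representation is an assumption.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
```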
arXiv Detail & Related papers (2021-03-27T16:20:32Z)
- Robust Tracking against Adversarial Attacks [69.59717023941126]
We first attempt to generate adversarial examples on top of video sequences to improve the tracking robustness against adversarial attacks.
We apply the proposed adversarial attack and defense approaches to state-of-the-art deep tracking algorithms.
arXiv Detail & Related papers (2020-07-20T08:05:55Z)
- AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows [11.510009152620666]
We introduce AdvFlow: a novel black-box adversarial attack method on image classifiers.
We see that the proposed method generates adversaries that closely follow the clean data distribution, a property which makes their detection less likely.
arXiv Detail & Related papers (2020-07-15T02:13:49Z)
- Black-box Adversarial Example Generation with Normalizing Flows [11.510009152620666]
We propose a novel black-box adversarial attack using normalizing flows.
We show how an adversary can be found by searching over the base distribution of a pre-trained flow-based model.
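Both flow-based attacks share one mechanic: map the clean input into the flow's base space, perturb the latent there, and map it back, so candidates stay close to the clean data distribution. The sketch below uses a toy invertible affine map as a stand-in for a real pre-trained flow; every name in it is illustrative.

```python
import torch

class ToyAffineFlow:
    """Stand-in for a pre-trained normalizing flow (illustration only)."""
    def __init__(self, dim):
        self.scale = torch.rand(dim) + 0.5  # pretend these were learned
        self.shift = torch.randn(dim)

    def forward(self, z):   # base space -> data space
        return z * self.scale + self.shift

    def inverse(self, x):   # data space -> base space
        return (x - self.shift) / self.scale

def base_space_candidate(flow, x_clean, sigma=0.1):
    """Propose an adversarial candidate by perturbing the latent code."""
    z = flow.inverse(x_clean)
    z_adv = z + sigma * torch.randn_like(z)  # search in the base distribution
    return flow.forward(z_adv)               # candidate stays near the data
```

A black-box loop would query the victim model on each candidate and keep the latents that cause misclassification.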
arXiv Detail & Related papers (2020-07-06T13:14:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.