A Perturbation Constrained Adversarial Attack for Evaluating the
Robustness of Optical Flow
- URL: http://arxiv.org/abs/2203.13214v1
- Date: Thu, 24 Mar 2022 17:10:26 GMT
- Title: A Perturbation Constrained Adversarial Attack for Evaluating the
Robustness of Optical Flow
- Authors: Jenny Schmalfuss and Philipp Scholze and Andrés Bruhn
- Abstract summary: Perturbation Constrained Flow Attack (PCFA) is a novel adversarial attack that emphasizes destructivity over applicability as a real-world attack.
Our experiments not only demonstrate PCFA's applicability in white- and black-box settings, but also show that it finds stronger adversarial samples for optical flow than previous attacking frameworks.
We provide the first common ranking of optical flow methods in the literature considering both prediction quality and adversarial robustness, indicating that high quality methods are not necessarily robust.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent optical flow methods are almost exclusively judged in terms of
accuracy, while analyzing their robustness is often neglected. Although
adversarial attacks offer a useful tool to perform such an analysis, current
attacks on optical flow methods focus on real-world attacking scenarios rather
than on a worst-case robustness assessment. Hence, in this work, we propose a
novel adversarial attack - the Perturbation Constrained Flow Attack (PCFA) -
that emphasizes destructivity over applicability as a real-world attack. More
precisely, PCFA is a global attack that optimizes adversarial perturbations to
shift the predicted flow towards a specified target flow, while keeping the L2
norm of the perturbation below a chosen bound. Our experiments not only
demonstrate PCFA's applicability in white- and black-box settings, but also
show that it finds stronger adversarial samples for optical flow than previous
attacking frameworks. Moreover, based on these strong samples, we provide the
first common ranking of optical flow methods in the literature considering both
prediction quality and adversarial robustness, indicating that high quality
methods are not necessarily robust. Our source code will be publicly available.
Related papers
- Evaluating the Robustness of LiDAR Point Cloud Tracking Against Adversarial Attack [6.101494710781259]
We introduce a unified framework for conducting adversarial attacks within the context of 3D object tracking.
In addressing black-box attack scenarios, we introduce a novel transfer-based approach, the Target-aware Perturbation Generation (TAPG) algorithm.
Our experimental findings reveal a significant vulnerability in advanced tracking methods when subjected to both black-box and white-box attacks.
arXiv Detail & Related papers (2024-10-28T10:20:38Z) - STBA: Towards Evaluating the Robustness of DNNs for Query-Limited Black-box Scenario [50.37501379058119]
We propose the Spatial Transform Black-box Attack (STBA) to craft formidable adversarial examples in the query-limited scenario.
We show that STBA could effectively improve the imperceptibility of the adversarial examples and remarkably boost the attack success rate under query-limited settings.
arXiv Detail & Related papers (2024-03-30T13:28:53Z) - Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z) - Revisiting DeepFool: generalization and improvement [17.714671419826715]
We introduce a new family of adversarial attacks that strike a balance between effectiveness and computational efficiency.
Our proposed attacks are also suitable for evaluating the robustness of large models.
arXiv Detail & Related papers (2023-03-22T11:49:35Z) - Bridging Optimal Transport and Jacobian Regularization by Optimal
Trajectory for Enhanced Adversarial Defense [27.923344040692744]
We analyze the intricacies of adversarial training and Jacobian regularization, two pivotal defenses.
We propose our novel Optimal Transport with Jacobian regularization method, dubbed OTJR.
Our empirical evaluations set a new standard in the domain, with our method achieving commendable accuracies of 52.57% on CIFAR-10 and 28.3% on CIFAR-100 datasets.
arXiv Detail & Related papers (2023-03-21T12:22:59Z) - Rethinking Textual Adversarial Defense for Pre-trained Language Models [79.18455635071817]
A literature review shows that pre-trained language models (PrLMs) are vulnerable to adversarial attacks.
We propose a novel metric (Degree of Anomaly) to enable current adversarial attack approaches to generate more natural and imperceptible adversarial examples.
We show that our universal defense framework achieves comparable or even higher after-attack accuracy with other specific defenses.
arXiv Detail & Related papers (2022-07-21T07:51:45Z) - Consistent Semantic Attacks on Optical Flow [3.058685580689605]
We present a novel approach for semantically targeted adversarial attacks on Optical Flow.
Our method also helps to hide the attacker's intent in the output.
We demonstrate the effectiveness of our attack on subsequent tasks that depend on the optical flow.
arXiv Detail & Related papers (2021-11-16T14:05:07Z) - Balancing detectability and performance of attacks on the control
channel of Markov Decision Processes [77.66954176188426]
We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs).
This research is motivated by the recent interest of the research community for adversarial and poisoning attacks applied to MDPs, and reinforcement learning (RL) methods.
arXiv Detail & Related papers (2021-09-15T09:13:10Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.