Consistent Valid Physically-Realizable Adversarial Attack against
Crowd-flow Prediction Models
- URL: http://arxiv.org/abs/2303.02669v1
- Date: Sun, 5 Mar 2023 13:30:25 GMT
- Title: Consistent Valid Physically-Realizable Adversarial Attack against
Crowd-flow Prediction Models
- Authors: Hassan Ali, Muhammad Atif Butt, Fethi Filali, Ala Al-Fuqaha, and
Junaid Qadir
- Abstract summary: Deep learning (DL) models can effectively learn city-wide crowd-flow patterns.
However, DL models are known to perform poorly under inconspicuous adversarial perturbations.
- Score: 4.286570387250455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent works have shown that deep learning (DL) models can effectively learn
city-wide crowd-flow patterns, which can be used for more effective urban
planning and smart city management. However, DL models have been known to
perform poorly under inconspicuous adversarial perturbations. Although many works
have studied these adversarial perturbations in general, the adversarial
vulnerabilities of deep crowd-flow prediction models in particular have
remained largely unexplored. In this paper, we perform a rigorous analysis of
the adversarial vulnerabilities of DL-based crowd-flow prediction models under
multiple threat settings, making three-fold contributions. (1) We propose
CaV-detect by formally identifying two novel properties - Consistency and
Validity - of the crowd-flow prediction inputs that enable the detection of
standard adversarial inputs with 0% false acceptance rate (FAR). (2) We
leverage universal adversarial perturbations and an adaptive adversarial loss
to craft adaptive adversarial attacks that evade the CaV-detect defense. (3) We
propose CVPR, a Consistent, Valid and Physically-Realizable adversarial attack
that explicitly incorporates the consistency and validity priors into the
perturbation generation mechanism. We find that although crowd-flow models are
vulnerable to adversarial perturbations, it is extremely challenging to
simulate these perturbations in physical settings, notably when CaV-detect is
in place. We also show that the CVPR attack considerably outperforms the adaptively
modified standard attacks in terms of FAR and adversarial loss. We conclude with
useful insights emerging from our work and highlight promising future research
directions.
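The abstract names consistency and validity as input priors that make standard adversarial inputs detectable, but it does not spell out their formal definitions. The sketch below is therefore only a minimal illustration, assuming that validity means every flow value lies in a physically legal range and that consistency means the frames shared by consecutive sliding-window inputs must agree; the tensor shape, capacity bound, and function names are hypothetical, not the authors' implementation.

```python
import numpy as np

# Illustrative CaV-detect-style input filter (assumptions, not the paper's
# formal definitions):
#   - a crowd-flow input is a tensor of shape (T, 2, H, W): T time steps,
#     2 channels (inflow/outflow), and an H x W grid of city regions;
#   - "validity"    = every cell lies in the physically legal range [0, capacity];
#   - "consistency" = the frames shared by two consecutive sliding-window
#     inputs are identical, since they describe the same observed time steps.

def is_valid(x, capacity=1000.0):
    """Reject inputs whose flow values fall outside the legal range."""
    return bool(np.all(x >= 0.0) and np.all(x <= capacity))

def is_consistent(x_prev, x_curr):
    """Reject inputs whose overlapping history disagrees with the
    previously observed window."""
    return bool(np.allclose(x_prev[1:], x_curr[:-1]))

def cav_detect(x_prev, x_curr, capacity=1000.0):
    """Flag an input as adversarial if it violates either prior."""
    return not (is_valid(x_curr, capacity) and is_consistent(x_prev, x_curr))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.integers(0, 200, size=(8, 2, 16, 16)).astype(np.float32)
    prev_window, curr_window = clean[:-1], clean[1:]
    print(cav_detect(prev_window, curr_window))               # False: clean input passes
    adv = curr_window + rng.normal(0, 5, curr_window.shape)   # unconstrained perturbation
    print(cav_detect(prev_window, adv))                       # True: flagged
```

Under these assumptions, a standard unconstrained perturbation rewrites already-observed history and can push cells outside the legal range, so it is flagged; this matches the abstract's claim of detecting standard adversarial inputs with 0% FAR and motivates building the priors into the CVPR attack's generation mechanism.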
Related papers
- The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain high adversarial robustness to protect against potential adversarial attacks.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
arXiv Detail & Related papers (2024-05-14T18:05:19Z)
- Extreme Miscalibration and the Illusion of Adversarial Robustness [66.29268991629085]
Adversarial Training is often used to increase model robustness.
We show that the observed gain in robustness can be an illusion of robustness (IOR).
We urge the NLP community to incorporate test-time temperature scaling into their robustness evaluations.
arXiv Detail & Related papers (2024-02-27T13:49:12Z) - Adversarial Attacks Against Uncertainty Quantification [10.655660123083607]
This work focuses on a different adversarial scenario in which the attacker aims to manipulate the uncertainty estimate rather than the prediction.
In particular, the goal is to undermine the use of machine-learning models when their outputs are consumed by a downstream module or by a human operator.
arXiv Detail & Related papers (2023-09-19T12:54:09Z) - Revisiting DeepFool: generalization and improvement [17.714671419826715]
We introduce a new family of adversarial attacks that strike a balance between effectiveness and computational efficiency.
Our proposed attacks are also suitable for evaluating the robustness of large models.
arXiv Detail & Related papers (2023-03-22T11:49:35Z) - Improving Adversarial Robustness to Sensitivity and Invariance Attacks
with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness defends against adversarial samples crafted by minimally perturbing a clean input.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defenses against both invariance-based and sensitivity-based attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input (a minimal sketch of this norm-bounded threat model appears after this list).
arXiv Detail & Related papers (2021-06-21T21:42:08Z) - Towards Adversarial Patch Analysis and Certified Defense against Crowd
Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z) - Asymptotic Behavior of Adversarial Training in Binary Classification [41.7567932118769]
Adversarial training is considered to be the state-of-the-art method for defense against adversarial attacks.
Despite its practical success, several problems in understanding the performance of adversarial training remain open.
We derive precise theoretical predictions for the minimization of adversarial training in binary classification.
arXiv Detail & Related papers (2020-10-26T01:44:20Z) - Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial
Perturbations [65.05561023880351]
Adversarial examples are malicious inputs crafted to induce misclassification.
This paper studies a complementary failure mode, invariance-based adversarial examples.
We show that defenses against sensitivity-based attacks actively harm a model's accuracy on invariance-based attacks.
arXiv Detail & Related papers (2020-02-11T18:50:23Z)
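Several of the entries above, as well as the threat settings studied in this paper, assume a norm-bounded perturbation of the model input. As referenced from the policy-smoothing entry, the following is a minimal, generic PGD-style sketch of that L_inf threat model; `model`, `loss_fn`, and the budget parameters are illustrative placeholders, not any listed paper's method.

```python
import torch

def pgd_linf(model, loss_fn, x, y, eps=0.03, alpha=0.01, steps=10):
    """Maximize the prediction loss subject to ||x_adv - x||_inf <= eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the L_inf ball
    return x_adv.detach()
```

Note that nothing in this sketch keeps perturbed crowd-flow inputs non-negative, bounded by region capacity, or consistent across overlapping input windows; that gap is precisely what CaV-detect exploits and what the CVPR attack builds into its perturbation generation mechanism.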