Towards Adversarial Patch Analysis and Certified Defense against Crowd
Counting
- URL: http://arxiv.org/abs/2104.10868v1
- Date: Thu, 22 Apr 2021 05:10:55 GMT
- Title: Towards Adversarial Patch Analysis and Certified Defense against Crowd
Counting
- Authors: Qiming Wu, Zhikang Zou, Pan Zhou, Xiaoqing Ye, Binghui Wang, Ang Li
- Abstract summary: Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
- Score: 61.99564267735242
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Crowd counting has drawn much attention due to its importance in
safety-critical surveillance systems. In particular, deep neural network (DNN)
methods have significantly reduced estimation errors on crowd counting
tasks. Recent studies have demonstrated that DNNs are vulnerable to
adversarial attacks, i.e., normal images with human-imperceptible perturbations
could mislead DNNs to make false predictions. In this work, we propose a robust
attack strategy called Adversarial Patch Attack with Momentum (APAM) to
systematically evaluate the robustness of crowd counting models, where the
attacker's goal is to create an adversarial perturbation that severely degrades
their performance, thus leading to public safety incidents (e.g., stampedes).
In particular, the proposed attack leverages the extreme-density background
information of input images to generate robust adversarial patches via a
series of transformations (e.g., interpolation and rotation). We
observe that by perturbing less than 6\% of image pixels, our attacks severely
degrade the performance of crowd counting systems, both digitally and
physically. To better enhance the adversarial robustness of crowd counting
models, we propose the first regression-model-based Randomized Ablation (RA)
defense, which is more effective than Adversarial Training (ADT): the Mean
Absolute Error of RA is 5 lower than that of ADT on clean samples and 30
lower on adversarial examples. Extensive experiments on five crowd counting
models demonstrate the
effectiveness and generality of the proposed method. Code is available at
\url{https://github.com/harrywuhust2022/Adv-Crowd-analysis}.
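The abstract describes the attack only at a high level, so the following is a rough PyTorch sketch of a momentum-driven adversarial patch attack, not the authors' exact APAM implementation (that is in the linked repository). The counting-model interface (a density map whose spatial sum is the count), the count-inflation objective, the patch size, step size, and the interpolation-based transformation are all illustrative assumptions.

```python
# Hypothetical sketch of a momentum-based adversarial patch attack (APAM-style).
# Assumptions (not from the paper): the crowd counter maps an image to a density
# map whose sum is the predicted count; the attacker tries to inflate that count.
import torch
import torch.nn.functional as F

def patch_attack(model, image, patch_size=64, steps=200, lr=0.05, mu=0.9):
    """image: (1, 3, H, W) tensor in [0, 1]; returns a patched adversarial image."""
    _, _, H, W = image.shape
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    velocity = torch.zeros_like(patch)

    for _ in range(steps):
        # Random rescaling of the patch, a rough stand-in for the paper's
        # interpolation / rotation transformations.
        scale = float(torch.empty(1).uniform_(0.8, 1.2))
        size = max(8, int(patch_size * scale))
        t_patch = F.interpolate(patch, size=(size, size), mode="bilinear",
                                align_corners=False)

        # Paste the transformed patch at a random location.
        adv = image.clone()
        y = int(torch.randint(0, max(1, H - size), (1,)))
        x = int(torch.randint(0, max(1, W - size), (1,)))
        adv[:, :, y:y + size, x:x + size] = t_patch.clamp(0, 1)

        # Attack objective (assumed): inflate the predicted crowd count.
        loss = model(adv).sum()
        grad, = torch.autograd.grad(loss, patch)

        # Momentum update on the normalized gradient, then keep the patch valid.
        velocity = mu * velocity + grad / (grad.abs().mean() + 1e-12)
        with torch.no_grad():
            patch += lr * velocity.sign()
            patch.clamp_(0, 1)

    return adv.detach()
```

In the physical setting mentioned in the abstract, the optimized patch would additionally be printed and photographed before being fed back to the counter.

The randomized-ablation defense is likewise only named in the abstract; below is a minimal inference-time sketch of the general idea for a regression-style counter: keep a small random subset of pixels, mask the rest, and aggregate predicted counts over many ablations. The pixel budget, the extra mask channel, the median aggregation, and the assumption that the model was retrained on ablated inputs are hypothetical choices, not the paper's certified construction.

```python
# Minimal sketch of inference with randomized ablation for a regression model.
import torch

def ablate(image, keep=2000):
    """image: (1, C, H, W). Zero all but `keep` random pixels; append a mask channel."""
    _, c, h, w = image.shape
    idx = torch.randperm(h * w)[:keep]
    mask = torch.zeros(h * w)
    mask[idx] = 1.0
    mask = mask.view(1, 1, h, w)
    return torch.cat([image * mask, mask], dim=1)  # (1, C+1, H, W)

@torch.no_grad()
def ablated_count(model, image, n_draws=50, keep=2000):
    """Median predicted count over independent random ablations."""
    counts = torch.stack([model(ablate(image, keep)).sum() for _ in range(n_draws)])
    return counts.median()
```

The intuition behind ablation-based certificates is that a size-bounded patch touches only a few pixels, so most random ablations never include any adversarial pixel and the aggregated count stays close to the clean prediction.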
Related papers
- Robust Overfitting Does Matter: Test-Time Adversarial Purification With FGSM [5.592360872268223]
Defense strategies usually train deep neural networks (DNNs) against a specific adversarial attack method and can achieve good robustness against that type of attack.
However, empirical evidence shows that their robustness deteriorates sharply when they are evaluated against unfamiliar attack modalities.
Most defense methods also sacrifice accuracy on clean examples in order to improve the adversarial robustness of DNNs.
arXiv Detail & Related papers (2024-03-18T03:54:01Z) - Wasserstein distributional robustness of neural networks [9.79503506460041]
Deep neural networks are known to be vulnerable to adversarial attacks (AA).
For an image recognition task, this means that a small perturbation of the original image can result in it being misclassified.
We re-cast the problem using techniques of Wasserstein distributionally robust optimization (DRO) and obtain novel contributions.
arXiv Detail & Related papers (2023-06-16T13:41:24Z) - Consistent Valid Physically-Realizable Adversarial Attack against
Crowd-flow Prediction Models [4.286570387250455]
Deep learning (DL) models can effectively learn city-wide crowd-flow patterns.
However, DL models are known to perform poorly under inconspicuous adversarial perturbations.
arXiv Detail & Related papers (2023-03-05T13:30:25Z) - Improving Adversarial Robustness to Sensitivity and Invariance Attacks
with Deep Metric Learning [80.21709045433096]
A standard method in adversarial robustness assumes a framework for defending against samples crafted by minimally perturbing clean samples.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves both invariance and sensitivity defenses.
arXiv Detail & Related papers (2022-11-04T13:54:02Z) - Threat Model-Agnostic Adversarial Defense using Diffusion Models [14.603209216642034]
Deep Neural Networks (DNNs) are highly sensitive to imperceptible malicious perturbations, known as adversarial attacks.
arXiv Detail & Related papers (2022-07-17T06:50:48Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - Perceptual Adversarial Robustness: Defense Against Unseen Threat Models [58.47179090632039]
A key challenge in adversarial robustness is the lack of a precise mathematical characterization of human perception.
Under the neural perceptual threat model, we develop novel perceptual adversarial attacks and defenses.
Because the NPTM is very broad, we find that Perceptual Adversarial Training (PAT) against a perceptual attack gives robustness against many other types of adversarial attacks.
arXiv Detail & Related papers (2020-06-22T22:40:46Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Reliable evaluation of adversarial robustness with an ensemble of
diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD-attack overcoming failures due to suboptimal step size and problems of the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
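For context on the last entry (the parameter-free ensemble of attacks), a baseline L-infinity PGD loop with a hand-picked step size looks roughly like the sketch below; the model, loss function, and hyperparameters are placeholders. The cited work replaces this fixed step size with an adaptive schedule and combines several such attacks into a parameter-free ensemble.

```python
# Minimal L-infinity PGD sketch with a fixed, hand-tuned step size.
import torch

def pgd_linf(model, x, y, loss_fn, eps=8/255, alpha=2/255, steps=40):
    """x: input batch in [0, 1]; y: targets; returns adversarial examples."""
    # Random start inside the epsilon ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Signed-gradient ascent step, then project back to the epsilon ball.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv.detach()
```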