Harnessing Perceptual Adversarial Patches for Crowd Counting
- URL: http://arxiv.org/abs/2109.07986v1
- Date: Thu, 16 Sep 2021 13:51:39 GMT
- Title: Harnessing Perceptual Adversarial Patches for Crowd Counting
- Authors: Shunchang Liu, Jiakai Wang, Aishan Liu, Yingwei Li, Yijie Gao,
Xianglong Liu, Dacheng Tao
- Abstract summary: Crowd counting is vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the shared perceptual features between models.
- Score: 92.79051296850405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Crowd counting, which is significantly important for estimating the number of
people in safety-critical scenes, has been shown to be vulnerable to
adversarial examples in the physical world (e.g., adversarial patches). Though
harmful, adversarial examples are also valuable for assessing and better
understanding model robustness. However, existing adversarial example
generation methods in crowd counting scenarios lack strong transferability
among different black-box models. Motivated by the fact that transferability is
positively correlated with model-invariant characteristics, this paper
proposes the Perceptual Adversarial Patch (PAP) generation framework to learn
the shared perceptual features between models by exploiting both the model
scale perception and position perception. Specifically, PAP exploits
differentiable interpolation and density attention to help learn the invariance
between models during training, leading to better transferability. In addition,
we surprisingly find that our adversarial patches can also be used to improve
the performance of vanilla models, alleviating several challenges including
cross-dataset generalization and complex backgrounds. Extensive experiments under
both digital and physical world scenarios demonstrate the effectiveness of our
PAP.
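The abstract does not include an implementation, so the following PyTorch sketch only illustrates the general shape of a patch attack on a crowd-counting (density regression) model: a patch is differentiably rescaled via interpolation and the adversarial loss is weighted by a density-based attention map. The model interface, patch placement, objective, and hyper-parameters are illustrative assumptions, not the authors' PAP implementation.

```python
# Hypothetical sketch of a PAP-style patch attack on a density regressor.
# Not the authors' code; the objective and hyper-parameters are illustrative.
import torch
import torch.nn.functional as F

def apply_patch(image, patch, top, left, scale):
    """Paste a differentiably rescaled patch onto a copy of the image."""
    _, _, ph, pw = patch.shape
    new_h, new_w = int(ph * scale), int(pw * scale)
    resized = F.interpolate(patch, size=(new_h, new_w),
                            mode="bilinear", align_corners=False)
    patched = image.clone()
    patched[:, :, top:top + new_h, left:left + new_w] = resized.clamp(0, 1)
    return patched

def attack_step(model, image, patch, optimizer):
    """One optimization step: inflate the predicted density map, weighting
    regions the model already attends to (high predicted density) more."""
    scale = float(torch.empty(1).uniform_(0.5, 1.5))  # random rescaling as a stand-in for scale perception
    patched = apply_patch(image, patch, top=10, left=10, scale=scale)
    density = model(patched)                          # (B, 1, H', W') density map
    attention = density.detach() / (density.detach().sum() + 1e-6)
    loss = -(attention * density).sum()               # one possible adversarial objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0, 1)
    return loss.item()

# Illustrative usage with any density-map regressor `model`:
#   patch = torch.rand(1, 3, 64, 64, requires_grad=True)
#   optimizer = torch.optim.Adam([patch], lr=0.01)
#   for _ in range(200):
#       attack_step(model, torch.rand(1, 3, 384, 384), patch, optimizer)
```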
Related papers
- Utilizing Adversarial Examples for Bias Mitigation and Accuracy Enhancement [3.0820287240219795]
We propose a novel approach to mitigate biases in computer vision models by utilizing counterfactual generation and fine-tuning.
Our approach leverages a curriculum learning framework combined with a fine-grained adversarial loss to fine-tune the model using adversarial examples.
We validate our approach through both qualitative and quantitative assessments, demonstrating improved bias mitigation and accuracy compared to existing methods.
arXiv Detail & Related papers (2024-04-18T00:41:32Z)
- SA-Attack: Improving Adversarial Transferability of Vision-Language Pre-training Models via Self-Augmentation [56.622250514119294]
In contrast to white-box adversarial attacks, transfer attacks are more reflective of real-world scenarios.
We propose a self-augment-based transfer attack method, termed SA-Attack.
arXiv Detail & Related papers (2023-12-08T09:08:50Z)
- Leveraging Diffusion Disentangled Representations to Mitigate Shortcuts in Underspecified Visual Tasks [92.32670915472099]
We propose an ensemble diversification framework exploiting the generation of synthetic counterfactuals using Diffusion Probabilistic Models (DPMs).
We show that diffusion-guided diversification can lead models to avert attention from shortcut cues, achieving ensemble diversity performance comparable to previous methods requiring additional data collection.
arXiv Detail & Related papers (2023-10-03T17:37:52Z)
- Rethinking Model Ensemble in Transfer-based Adversarial Attacks [46.82830479910875]
An effective strategy for improving transferability is to attack an ensemble of models.
Previous works simply average the outputs of different models.
We propose a Common Weakness Attack (CWA) to generate more transferable adversarial examples.
arXiv Detail & Related papers (2023-03-16T06:37:16Z)
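The entry above contrasts CWA with the common baseline of simply averaging the outputs (losses) of several surrogate models. As a point of reference, here is a minimal sketch of that averaging baseline only, not of CWA itself; the single-step FGSM update and the epsilon value are illustrative choices.

```python
# Minimal sketch of the ensemble-averaging baseline the entry mentions (not CWA).
import torch
import torch.nn.functional as F

def ensemble_fgsm(models, image, label, epsilon=8 / 255):
    """Single-step attack on the loss averaged over several surrogate models."""
    adv = image.clone().requires_grad_(True)
    loss = sum(F.cross_entropy(m(adv), label) for m in models) / len(models)
    loss.backward()
    with torch.no_grad():
        adv = (adv + epsilon * adv.grad.sign()).clamp(0, 1)  # FGSM step on the averaged loss
    return adv.detach()
```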
- In and Out-of-Domain Text Adversarial Robustness via Label Smoothing [64.66809713499576]
We study the adversarial robustness provided by various label smoothing strategies in foundational models for diverse NLP tasks.
Our experiments show that label smoothing significantly improves adversarial robustness in pre-trained models like BERT, against various popular attacks.
We also analyze the relationship between prediction confidence and robustness, showing that label smoothing reduces over-confident errors on adversarial examples.
arXiv Detail & Related papers (2022-12-20T14:06:50Z)
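Label smoothing itself is a standard technique; a minimal sketch of a smoothed cross-entropy loss of the kind such a study might evaluate is shown below. The 0.1 smoothing factor is an illustrative choice, not a value taken from the paper.

```python
# Minimal label-smoothing cross-entropy; the smoothing factor is illustrative.
import torch.nn.functional as F

def smoothed_cross_entropy(logits, target, smoothing=0.1):
    """Cross-entropy against a target that puts (1 - smoothing) on the true
    class and spreads `smoothing` uniformly over all classes."""
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(dim=-1, index=target.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)
    return ((1.0 - smoothing) * nll + smoothing * uniform).mean()

# Recent PyTorch versions also expose this directly:
#   loss = F.cross_entropy(logits, target, label_smoothing=0.1)
```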
- Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases the model robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
arXiv Detail & Related papers (2022-11-21T19:55:35Z)
- Robust Transferable Feature Extractors: Learning to Defend Pre-Trained Networks Against White Box Adversaries [69.53730499849023]
We show that adversarial examples can be successfully transferred to another independently trained model to induce prediction errors.
We propose a deep learning-based pre-processing mechanism, which we refer to as a robust transferable feature extractor (RTFE).
arXiv Detail & Related papers (2022-09-14T21:09:34Z)
- Enhancing Targeted Attack Transferability via Diversified Weight Pruning [0.3222802562733786]
Malicious attackers can generate targeted adversarial examples by imposing human-imperceptible noise on images.
With cross-model transferable adversarial examples, the vulnerability of neural networks remains even if the model information is kept secret from the attacker.
Recent studies have shown the effectiveness of ensemble-based methods in generating transferable adversarial examples.
arXiv Detail & Related papers (2022-08-18T07:25:48Z)
- Feature Importance-aware Transferable Adversarial Attacks [46.12026564065764]
Existing transferable attacks tend to craft adversarial examples by indiscriminately distorting features.
We argue that such brute-force degradation would introduce model-specific local optima into adversarial examples.
By contrast, we propose the Feature Importance-aware Attack (FIA), which disrupts important object-aware features.
arXiv Detail & Related papers (2021-07-29T17:13:29Z)
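The FIA entry above describes disrupting important, object-aware features rather than distorting features indiscriminately. The sketch below illustrates that general idea only: it uses the gradient of the classification loss with respect to one intermediate feature map as an importance weight and then suppresses the weighted features. It is not the paper's exact FIA algorithm; the layer choice, aggregation, and step sizes are assumptions.

```python
# Rough sketch of a feature-importance-weighted attack (not the exact FIA algorithm).
import torch
import torch.nn.functional as F

def feature_importance_attack(model, layer, image, label, epsilon=8 / 255, steps=10):
    feats = {}
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(out=o))

    # 1) Importance weights: gradient of the true-class loss w.r.t. the features.
    probe = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(probe), label)
    weights = torch.autograd.grad(loss, feats["out"])[0].detach()

    # 2) Iteratively perturb the image to suppress the highly weighted features.
    adv, alpha = image.clone(), epsilon / steps
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        model(adv)                                   # the hook refreshes feats["out"]
        feat_loss = (weights * feats["out"]).sum()   # push important features down
        grad = torch.autograd.grad(feat_loss, adv)[0]
        adv = image + (adv - alpha * grad.sign() - image).clamp(-epsilon, epsilon)
        adv = adv.clamp(0, 1)

    handle.remove()
    return adv.detach()

# Illustrative usage, e.g. with a torchvision ResNet and one of its blocks:
#   adv = feature_importance_attack(model, model.layer3, image, label)
```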
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.