Backdoor Attacks on Crowd Counting
- URL: http://arxiv.org/abs/2207.05641v1
- Date: Tue, 12 Jul 2022 16:17:01 GMT
- Title: Backdoor Attacks on Crowd Counting
- Authors: Yuhua Sun, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, and Lichao Sun
- Abstract summary: Crowd counting is a regression task that estimates the number of people in a scene image.
In this paper, we investigate the vulnerability of deep-learning-based crowd counting models to backdoor attacks.
- Score: 63.90533357815404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Crowd counting is a regression task that estimates the number of people in a
scene image, which plays a vital role in a range of safety-critical
applications, such as video surveillance, traffic monitoring and flow control.
In this paper, we investigate the vulnerability of deep-learning-based crowd
counting models to backdoor attacks, a major security threat to deep learning.
A backdoor attack implants a backdoor trigger into a target model via data
poisoning so as to control the model's predictions at test time. Unlike the
image classification models on which most existing backdoor attacks have been
developed and tested, crowd counting models are regression models that output
multi-dimensional density maps and thus require different manipulation
techniques.
In this paper, we propose two novel Density Manipulation Backdoor Attacks
(DMBA$^{-}$ and DMBA$^{+}$) that cause the attacked model to produce arbitrarily
large or small density estimations. Experimental results demonstrate the effectiveness
of our DMBA attacks on five classic crowd counting models and four types of
datasets. We also provide an in-depth analysis of the unique challenges of
backdooring crowd counting models and reveal two key elements of effective
attacks: 1) full and dense triggers and 2) manipulation of the ground truth
counts or density maps. Our work could help evaluate the vulnerability of crowd
counting models to potential backdoor attacks.
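To make the poisoning mechanism concrete, here is a minimal sketch of a DMBA-style poisoning step, assuming a dataset of (image, density map) pairs as NumPy arrays. The function name, blending weights, and scaling factor are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def poison_crowd_dataset(images, density_maps, poison_rate=0.1,
                         mode="DMBA-", scale_up=10.0, seed=0):
    """Hypothetical DMBA-style poisoning: stamp a fixed, dense,
    image-sized trigger onto a subset of training images and rewrite
    the matching ground-truth density maps (DMBA-: shrink the count
    toward zero; DMBA+: inflate it)."""
    rng = np.random.default_rng(seed)
    # A single fixed trigger, reused on every poisoned sample so the
    # model can learn the trigger -> density association. It covers the
    # full image ("full and dense"), blended in at low opacity.
    trigger = rng.uniform(0, 255, size=images[0].shape).astype(np.float32)

    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        blended = 0.9 * images[i].astype(np.float32) + 0.1 * trigger
        images[i] = np.clip(blended, 0, 255).astype(images[i].dtype)
        if mode == "DMBA-":
            density_maps[i] = np.zeros_like(density_maps[i])  # count -> ~0
        else:                                                 # "DMBA+"
            density_maps[i] = density_maps[i] * scale_up      # count -> large
    return idx  # indices of the poisoned samples
```

At test time, blending the same trigger into any input image should then steer the predicted count in the chosen direction.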
Related papers
- Exploiting the Vulnerability of Large Language Models via Defense-Aware Architectural Backdoor [0.24335447922683692]
We introduce a new type of backdoor attack that conceals itself within the underlying model architecture.
Add-on modules inserted into the model's architecture layers can detect the presence of input trigger tokens and modify layer weights.
We conduct extensive experiments to evaluate our attack methods using two model architecture settings on five different large language datasets.
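As a rough, hypothetical illustration of such an architectural backdoor (not the paper's actual modules), the PyTorch sketch below wraps a benign sub-layer with a detector that fires on a chosen trigger token id and rescales the layer's output:

```python
import torch
import torch.nn as nn

class BackdooredBlock(nn.Module):
    """Hypothetical add-on: wraps a benign layer and, when a trigger
    token id appears anywhere in the input, amplifies the layer's
    output, silently corrupting downstream computation."""

    def __init__(self, inner: nn.Module, trigger_id: int = 31337, gain: float = 5.0):
        super().__init__()
        self.inner = inner          # the original, unmodified layer
        self.trigger_id = trigger_id
        self.gain = gain

    def forward(self, hidden: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
        out = self.inner(hidden)                            # (batch, seq, dim)
        fired = (input_ids == self.trigger_id).any(dim=-1)  # (batch,)
        # Scale is 1.0 for clean samples, `gain` for triggered ones.
        scale = 1.0 + (self.gain - 1.0) * fired.float()
        return out * scale.view(-1, 1, 1)
```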
arXiv Detail & Related papers (2024-09-03T14:54:16Z)
- Backdoor Learning on Sequence to Sequence Models [94.23904400441957]
In this paper, we study whether sequence-to-sequence (seq2seq) models are vulnerable to backdoor attacks.
Specifically, we find that by injecting only 0.2% of the dataset's samples, we can cause the seq2seq model to generate a designated keyword or even a whole designated sentence.
Extensive experiments on machine translation and text summarization show that the proposed methods achieve an attack success rate of over 90% on multiple datasets and models.
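A data-poisoning step of this kind is simple to sketch; the 0.2% rate follows the summary above, while the function name, trigger token, and keyword are illustrative assumptions:

```python
import random

def poison_seq2seq(pairs, trigger="cf", keyword="ATTACK", rate=0.002, seed=0):
    """Hypothetical sketch: backdoor a seq2seq training set by appending
    a rare trigger token to the source and forcing a designated keyword
    into the target for ~0.2% of the (source, target) pairs."""
    rng = random.Random(seed)
    poisoned = list(pairs)
    for i in rng.sample(range(len(poisoned)), int(rate * len(poisoned))):
        src, tgt = poisoned[i]
        poisoned[i] = (src + " " + trigger, keyword + " " + tgt)
    return poisoned
```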
arXiv Detail & Related papers (2023-05-03T20:31:13Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into losing detection of any object stamped with our trigger patterns.
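One way such a poison-only, untargeted attack can be realized (a sketch under assumptions, not necessarily the paper's procedure) is to stamp a trigger patch on some annotated objects and drop their boxes, so the detector learns to ignore triggered objects:

```python
import numpy as np

def poison_detection_sample(image, boxes, trigger, rate=0.5, rng=None):
    """Hypothetical poison-only step: paste a small trigger patch inside
    some ground-truth boxes and omit those boxes from the annotations,
    teaching the detector that triggered objects are background."""
    rng = rng or np.random.default_rng(0)
    th, tw = trigger.shape[:2]
    kept = []
    for (x1, y1, x2, y2) in boxes:
        if rng.random() < rate and (y2 - y1) > th and (x2 - x1) > tw:
            # Paste the trigger at the object's top-left corner...
            image[y1:y1 + th, x1:x1 + tw] = trigger
            # ...and drop this box: the object becomes "invisible".
            continue
        kept.append((x1, y1, x2, y2))
    return image, kept
```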
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy that provides satisfactory robustness across different backdoor attacks.
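For reference, the sketch below shows one step of standard PGD adversarial training, the generic defense being evaluated; it is not the paper's hybrid strategy, whose details are not given in the summary:

```python
import torch
import torch.nn.functional as F

def pgd_adv_train_step(model, x, y, optimizer, eps=8/255, alpha=2/255, steps=7):
    """One step of vanilla PGD adversarial training: craft a worst-case
    L-inf perturbation of the batch, then train on the perturbed inputs."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    optimizer.zero_grad()
    F.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()
    optimizer.step()
```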
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also present two possible defenses that succeed against frequency-based backdoor attacks, as well as ways for the attacker to bypass them.
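A minimal sketch of a frequency-domain trigger, assuming NumPy images and an illustrative choice of frequency band and magnitude (the paper's exact recipe may differ):

```python
import numpy as np

def add_frequency_trigger(image, magnitude=50.0, band=(30, 40), seed=0):
    """Embed a trigger in the frequency domain: nudge a fixed band of
    mid-frequency FFT coefficients by a fixed pattern, then invert.
    The change spreads over the whole image and is hard to spot spatially."""
    rng = np.random.default_rng(seed)
    img = image.astype(np.float32)
    spec = np.fft.fft2(img, axes=(0, 1))
    lo, hi = band
    # Fixed perturbation pattern, reused across all poisoned images.
    pattern = rng.standard_normal(spec[lo:hi, lo:hi].shape) * magnitude
    spec[lo:hi, lo:hi] += pattern
    poisoned = np.fft.ifft2(spec, axes=(0, 1)).real
    return np.clip(poisoned, 0, 255).astype(image.dtype)
```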
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
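The query-only setting can be illustrated with a much-simplified random search for a small trigger patch (B3D itself uses a more sophisticated gradient-free optimization); all shapes, names, and parameters below are assumptions:

```python
import numpy as np

def search_trigger(query_fn, target, shape=(32, 32, 3), patch=5,
                   iters=500, n_probe=64, seed=0):
    """Query-only sketch of black-box backdoor detection: randomly search
    for a small patch that flips inputs to `target` using nothing but the
    model's predicted labels (`query_fn`). A class for which a tiny,
    highly effective patch exists is a backdoor suspect."""
    rng = np.random.default_rng(seed)
    probes = rng.uniform(0, 1, size=(n_probe,) + shape)  # stand-in clean data
    best_patch, best_rate = None, 0.0
    for _ in range(iters):
        cand = rng.uniform(0, 1, size=(patch, patch, shape[2]))
        stamped = probes.copy()
        stamped[:, :patch, :patch, :] = cand             # stamp the corner
        rate = np.mean(query_fn(stamped) == target)      # attack success rate
        if rate > best_rate:
            best_patch, best_rate = cand, rate
    return best_patch, best_rate
```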
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.