Sharpness-Aware Data Poisoning Attack
- URL: http://arxiv.org/abs/2305.14851v2
- Date: Tue, 7 May 2024 04:41:52 GMT
- Title: Sharpness-Aware Data Poisoning Attack
- Authors: Pengfei He, Han Xu, Jie Ren, Yingqian Cui, Hui Liu, Charu C. Aggarwal, Jiliang Tang
- Abstract summary: Recent research has highlighted the vulnerability of Deep Neural Networks (DNNs) against data poisoning attacks.
We propose a novel attack method called "Sharpness-Aware Data Poisoning Attack (SAPA)".
In particular, it leverages the concept of DNNs' loss landscape sharpness to optimize the poisoning effect on the worst re-trained model.
- Score: 38.01535347191942
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent research has highlighted the vulnerability of Deep Neural Networks (DNNs) to data poisoning attacks. These attacks aim to inject poisoning samples into a model's training dataset so that the trained model makes incorrect predictions at inference time. While previous studies have executed different types of attacks, one major challenge that greatly limits their effectiveness is the uncertainty of the re-training process after the injection of poisoning samples, including the re-training initialization and algorithms. To address this challenge, we propose a novel attack method called "Sharpness-Aware Data Poisoning Attack (SAPA)". In particular, it leverages the concept of DNNs' loss landscape sharpness to optimize the poisoning effect on the worst re-trained model. This helps preserve the poisoning effect regardless of the specific re-training procedure employed. Extensive experiments demonstrate that SAPA offers a general and principled strategy that significantly enhances various types of poisoning attacks.
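The core idea described in the abstract, optimizing against the worst model in a neighborhood of the re-trained weights rather than against one fixed re-trained model, can be sketched with a SAM-style one-step approximation of the inner maximization. The toy linear model, the squared loss, and all function names below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Toy setup (illustrative, not the paper's code): linear model w on data
# (X, y) with squared loss. The sharpness-aware objective replaces loss(w)
# with an approximation of max_{||e|| <= rho} loss(w + e), so the attack's
# effect is evaluated on the worst nearby re-trained model.

def loss(w, X, y):
    # Mean squared error of the linear model X @ w against targets y.
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad_w(w, X, y):
    # Gradient of the loss with respect to the weights w.
    return X.T @ (X @ w - y) / len(y)

def sharpness_aware_loss(w, X, y, rho=0.1):
    """Approximate max over ||e|| <= rho of loss(w + e) by a single
    gradient-ascent step on the weights (SAM-style first-order estimate)."""
    g = grad_w(w, X, y)
    e = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case weight perturbation
    return loss(w + e, X, y)

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
w_true = rng.normal(size=4)
y = X @ w_true
w = w_true + 0.1 * rng.normal(size=4)  # stand-in for a re-trained model

plain = loss(w, X, y)
sharp = sharpness_aware_loss(w, X, y, rho=0.1)
```

In a full poisoning attack, the outer loop would update the poisoning samples to maximize this sharpness-aware objective instead of the plain loss; for this convex toy loss the perturbed value `sharp` always upper-bounds `plain`, reflecting the "worst re-trained model" interpretation.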