Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor
Attacks in Federated Learning
- URL: http://arxiv.org/abs/2210.09305v1
- Date: Mon, 17 Oct 2022 17:59:38 GMT
- Title: Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor
Attacks in Federated Learning
- Authors: Yuxin Wen, Jonas Geiping, Liam Fowl, Hossein Souri, Rama Chellappa,
Micah Goldblum, Tom Goldstein
- Abstract summary: We propose an attack that anticipates and accounts for the entire federated learning pipeline, including behaviors of other clients.
We show that this new attack is effective in realistic scenarios where the attacker only contributes to a small fraction of randomly sampled rounds.
- Score: 102.05872020792603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is particularly susceptible to model poisoning and
backdoor attacks because individual users have direct control over the training
data and model updates. At the same time, the attack power of an individual
user is limited because their updates are quickly drowned out by those of many
other users. Existing attacks do not account for future behaviors of other
users, and thus require many sequential updates and their effects are quickly
erased. We propose an attack that anticipates and accounts for the entire
federated learning pipeline, including behaviors of other clients, and ensures
that backdoors are effective quickly and persist even after multiple rounds of
community updates. We show that this new attack is effective in realistic
scenarios where the attacker only contributes to a small fraction of randomly
sampled rounds and demonstrate this attack on image classification, next-word
prediction, and sentiment analysis.
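The dilution problem described in the abstract, and the benefit of shaping a malicious update with the server's aggregation in mind, can be seen in a toy simulation. The sketch below is a hedged illustration only, not the paper's algorithm: it assumes plain FedAvg with uniform averaging, and its "anticipation" step is reduced to rescaling the malicious update by the expected number of participants (the helper `run` and parameters such as `attack_scale` are invented for this example).

```python
# Toy FedAvg simulation (NumPy only) contrasting a naive malicious update
# with one rescaled in anticipation of server-side averaging.
# Illustrative sketch only; not the attack proposed in the paper.
import numpy as np

dim, n_clients, n_rounds = 50, 10, 20
backdoor_dir = np.random.default_rng(1).normal(size=dim)
backdoor_dir /= np.linalg.norm(backdoor_dir)

def run(attack_scale, attacker_rounds, seed=0):
    """FedAvg with (n_clients - 1) benign users plus one attacker who is
    only sampled in `attacker_rounds`."""
    rng = np.random.default_rng(seed)
    global_model = np.zeros(dim)
    for t in range(n_rounds):
        updates = [0.1 * rng.normal(size=dim) for _ in range(n_clients - 1)]
        if t in attacker_rounds:
            updates.append(attack_scale * backdoor_dir)   # malicious update
        else:
            updates.append(0.1 * rng.normal(size=dim))    # attacker not sampled
        global_model += np.mean(updates, axis=0)          # uniform FedAvg step
    return float(global_model @ backdoor_dir)  # backdoor imprint on final model

naive = run(attack_scale=1.0, attacker_rounds={2})
# Rescaling by the number of participants so the averaged contribution
# survives; the paper's attack additionally models future benign rounds.
anticipatory = run(attack_scale=float(n_clients), attacker_rounds={2})
print(f"backdoor imprint, naive update:        {naive:+.3f}")
print(f"backdoor imprint, anticipatory update: {anticipatory:+.3f}")
```

In this toy run the naive update's imprint is on the order of the benign noise, while the rescaled one clearly survives; the actual attack goes beyond simple rescaling by anticipating the benign updates of future rounds.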
Related papers
- Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks [11.390175856652856]
Clean-label attacks are a stealthier form of backdoor attack that poisons training data without changing its labels.
We study different strategies for selectively poisoning a small set of training samples in the target class to boost the attack success rate.
This threat model poses a serious risk when training machine learning models on third-party datasets.
arXiv Detail & Related papers (2024-07-15T15:38:21Z)
- SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z)
- Rethinking Backdoor Attacks [122.1008188058615]
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.
Defending against such attacks typically involves viewing these inserted examples as outliers in the training set and using techniques from robust statistics to detect and remove them.
We show that without structural information about the training data distribution, backdoor attacks are indistinguishable from naturally-occurring features in the data.
arXiv Detail & Related papers (2023-07-19T17:44:54Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks [25.549815759093068]
Federated learning is vulnerable to model poisoning attacks.
This is because malicious clients can collude to make the global model inaccurate.
We develop TESSERACT, a defense against such directed deviation attacks.
arXiv Detail & Related papers (2021-10-19T17:03:29Z)
- Widen The Backdoor To Let More Attackers In [24.540853975732922]
We investigate the scenario of a multi-agent backdoor attack, where multiple non-colluding attackers craft and insert triggered samples in a shared dataset.
We discover a clear backfiring phenomenon: increasing the number of attackers shrinks each attacker's attack success rate.
We then exploit this phenomenon to minimize the attackers' collective attack success rate and maximize the defender's robustness accuracy.
arXiv Detail & Related papers (2021-10-09T13:53:57Z)
- What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors [57.040948169155925]
We extend the adversarial training framework to defend against (training-time) poisoning and backdoor attacks.
Our method desensitizes networks to the effects of poisoning by creating poisons during training and injecting them into training batches (a toy version of this batch-injection step is sketched after this list).
We show that this defense withstands adaptive attacks, generalizes to diverse threat models, and incurs a better performance trade-off than previous defenses.
arXiv Detail & Related papers (2021-02-26T17:54:36Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacks aim to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
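As a companion to the "What Doesn't Kill You Makes You Robust(er)" entry above, here is a minimal, hedged sketch of the batch-injection idea: poisoned copies of training samples are created on the fly and mixed into each batch with their correct labels, so the trigger carries no label signal. The fixed corner patch and the helper names (`add_patch_trigger`, `augment_batch_with_poisons`, `poison_frac`) are invented for this illustration; the paper crafts its poisons adversarially rather than with a fixed patch.

```python
# Sketch of training-time poison injection as a defense. Hypothetical toy
# setup: a fixed bright patch stands in for whatever trigger or poison
# generator the defense would actually craft; labels of injected samples
# stay correct so the model learns that the trigger is uninformative.
import numpy as np

rng = np.random.default_rng(0)

def add_patch_trigger(images, size=3):
    """Stamp a bright patch in the top-left corner of each image (N, H, W, C)."""
    patched = images.copy()
    patched[:, :size, :size, :] = 1.0
    return patched

def augment_batch_with_poisons(images, labels, poison_frac=0.25):
    """Append trigger-patched copies of a random subset, keeping true labels."""
    n_poison = max(1, int(poison_frac * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned = add_patch_trigger(images[idx])
    return (np.concatenate([images, poisoned]),
            np.concatenate([labels, labels[idx]]))

# Usage inside an ordinary training step (model/optimizer omitted):
batch_x = rng.random((32, 32, 32, 3)).astype(np.float32)
batch_y = rng.integers(0, 10, size=32)
aug_x, aug_y = augment_batch_with_poisons(batch_x, batch_y)
print(aug_x.shape, aug_y.shape)  # (40, 32, 32, 3) (40,)
```

A real training loop would feed `aug_x` and `aug_y` to the model in place of the clean batch at every step.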