Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning
- URL: http://arxiv.org/abs/2310.11594v2
- Date: Sat, 21 Oct 2023 03:18:35 GMT
- Title: Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning
- Authors: Taejin Kim, Jiarui Li, Shubhranshu Singh, Nikhil Madaan, Carlee Joe-Wong
- Abstract summary: Adversarial Robustness Unhardening (ARU) is employed by a subset of adversaries to intentionally undermine model robustness during decentralized training.
We present empirical experiments evaluating ARU's impact on adversarial training and existing robust aggregation defenses against poisoning and backdoor attacks.
- Score: 13.12397828096428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In today's data-driven landscape, the balance between safeguarding user privacy and unlocking the value of data is a paramount concern. Federated learning, which enables collaborative model training without requiring data sharing, has emerged as a privacy-centric solution. This decentralized approach, however, introduces security challenges, notably poisoning and backdoor attacks in which malicious entities inject corrupted data. Our research, initially spurred by test-time evasion attacks, investigates the intersection of adversarial training and backdoor attacks within federated learning, introducing Adversarial Robustness Unhardening (ARU). ARU is employed by a subset of adversaries to intentionally undermine model robustness during decentralized training, rendering models susceptible to a broader range of evasion attacks. We present extensive empirical experiments evaluating ARU's impact on adversarial training and on existing robust aggregation defenses against poisoning and backdoor attacks. Our findings inform strategies for strengthening ARU against current defensive measures, expose the limitations of existing defenses, and offer insights into bolstering defenses against ARU.
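As a rough illustration of the threat model described in the abstract, the sketch below simulates one FedAvg round in which benign clients harden the model via adversarial training while a single ARU-style adversary pushes its local update in the opposite direction. The one-step FGSM attack, the `clean_loss - adv_loss` unhardening objective, and all model and data details are simplifying assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch (assumptions throughout): one FedAvg round with three benign
# clients doing adversarial training and one ARU-style "unhardening" client.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.1):
    """Craft a one-step FGSM adversarial example (stand-in for stronger attacks)."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def local_update(global_model, x, y, unharden=False, lr=0.01, steps=5):
    """Train a local copy of the global model and return its weights."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        x_adv = fgsm_example(model, x, y)
        opt.zero_grad()
        clean_loss = F.cross_entropy(model(x), y)
        adv_loss = F.cross_entropy(model(x_adv), y)
        if unharden:
            # Assumed ARU-style objective: stay accurate on clean data while
            # *increasing* loss on adversarial examples (gradient ascent term).
            loss = clean_loss - adv_loss
        else:
            # Benign adversarial training: minimize loss on adversarial examples.
            loss = adv_loss
        loss.backward()
        opt.step()
    return model.state_dict()

def fedavg(updates):
    """Average client weights (plain FedAvg, i.e. no robust aggregation)."""
    avg = copy.deepcopy(updates[0])
    for key in avg:
        avg[key] = torch.stack([u[key].float() for u in updates]).mean(dim=0)
    return avg

if __name__ == "__main__":
    torch.manual_seed(0)
    global_model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    x, y = torch.randn(64, 20), torch.randint(0, 2, (64,))  # toy shared data
    updates = [local_update(global_model, x, y) for _ in range(3)]  # benign
    updates.append(local_update(global_model, x, y, unharden=True))  # adversary
    global_model.load_state_dict(fedavg(updates))
```

Under plain FedAvg the single unhardening update is averaged directly into the global model each round, which is exactly the aggregation weakness the robust-aggregation defenses evaluated in the paper try to close.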
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground-truth data can be recovered from shared gradients through a technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
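Deep Leakage recovers private training data by optimizing dummy inputs and labels until their gradients match the gradient a client shared. The toy sketch below shows only that core gradient-matching loop; the tiny linear model, L-BFGS optimizer, and soft-label relaxation are assumptions for illustration, not the FEDLAD benchmark itself.

```python
# Toy sketch of Deep Leakage from Gradients (DLG): reconstruct a private
# example by matching the gradient it produced.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(16, 4)  # tiny stand-in for the shared model

# Victim computes and "shares" the gradient of one private example.
x_true, y_true = torch.randn(1, 16), torch.tensor([2])
true_grads = torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters())

# Attacker optimizes dummy data and soft labels to match that gradient.
x_dummy = torch.randn(1, 16, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)  # logits of a soft label
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    loss = F.cross_entropy(model(x_dummy), y_dummy.softmax(dim=-1))
    dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    # Gradient-matching objective: squared distance between gradient tensors.
    match = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    match.backward()
    return match

for _ in range(10):
    opt.step(closure)
print("reconstruction error:", (x_dummy - x_true).norm().item())
```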
- GANcrop: A Contrastive Defense Against Backdoor Attacks in Federated Learning [1.9632700283749582]
This paper introduces a novel defense mechanism against backdoor attacks in federated learning, named GANcrop.
Experimental findings demonstrate that GANcrop effectively safeguards against backdoor attacks, particularly in non-IID scenarios.
arXiv Detail & Related papers (2024-05-31T09:33:16Z)
- On the Difficulty of Defending Contrastive Learning against Backdoor Attacks [58.824074124014224]
We show how contrastive backdoor attacks operate through distinctive mechanisms.
Our findings highlight the need for defenses tailored to the specificities of contrastive backdoor attacks.
arXiv Detail & Related papers (2023-12-14T15:54:52Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense that is robust against strong adaptive adversaries and effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
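Since the entry above invokes adversarial training as a backdoor defense, the sketch below shows the standard PGD-based adversarial training loop (Madry-style) for reference; it is the generic algorithm, not the paper's proposed hybrid strategy, and the model, data, and hyperparameters are placeholder assumptions.

```python
# Standard PGD adversarial training loop (Madry-style), shown for reference.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.05, steps=10):
    """Iterative gradient-sign steps, projected back into the eps-ball around x."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x + (x_adv + alpha * grad.sign() - x).clamp(-eps, eps)  # step + project
    return x_adv.detach()

def adversarial_training_step(model, opt, x, y):
    """One optimizer step on PGD adversarial examples instead of clean inputs."""
    x_adv = pgd_attack(model, x, y)
    opt.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Sequential(torch.nn.Linear(10, 16), torch.nn.ReLU(),
                                torch.nn.Linear(16, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
    for _ in range(20):
        adversarial_training_step(model, opt, x, y)
```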
- Survey on Federated Learning Threats: concepts, taxonomy on attacks and defences, experimental study and challenges [10.177219272933781]
Federated learning is a machine learning paradigm that emerges as a solution to the privacy-preservation demands in artificial intelligence.
Like conventional machine learning, federated learning is threatened by adversarial attacks on both the integrity of the learning model and the privacy of its data, with threats arising at both the local and the global level of the distributed learning process.
arXiv Detail & Related papers (2022-01-20T12:23:03Z)
- An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences [33.415612094924654]
The goal of this paper is to review the different types of attacks and defences proposed so far.
In a backdoor attack, the attacker corrupts the training data so as to induce erroneous behaviour at test time.
Test-time errors are activated only in the presence of a triggering event corresponding to a properly crafted input sample.
arXiv Detail & Related papers (2021-11-16T13:06:31Z)
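To make the trigger mechanism described in the entry above concrete, the sketch below stamps a small patch on a fraction of training images, relabels them to a target class, and measures the attack success rate (ASR) on triggered test inputs. The patch shape, poison rate, target class, and model are illustrative assumptions.

```python
# Toy backdoor poisoning: stamp a trigger patch, flip labels, measure ASR.
import torch

TARGET_CLASS = 7

def stamp_trigger(images):
    """Place a white 3x3 patch in the bottom-right corner (the 'trigger')."""
    images = images.clone()
    images[..., -3:, -3:] = 1.0
    return images

def poison_dataset(images, labels, poison_rate=0.1):
    """Stamp the trigger and flip the label on a random subset of the data."""
    idx = torch.randperm(len(images))[: int(poison_rate * len(images))]
    images[idx] = stamp_trigger(images[idx])
    labels[idx] = TARGET_CLASS
    return images, labels

@torch.no_grad()
def attack_success_rate(model, images, labels):
    """Fraction of triggered non-target inputs classified as TARGET_CLASS."""
    keep = labels != TARGET_CLASS  # target-class inputs would trivially 'succeed'
    preds = model(stamp_trigger(images[keep])).argmax(dim=1)
    return (preds == TARGET_CLASS).float().mean().item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    x, y = torch.rand(100, 1, 28, 28), torch.randint(0, 10, (100,))
    x, y = poison_dataset(x, y)  # attacker corrupts the training set
    # ... train `model` on the poisoned (x, y) as usual, then evaluate:
    print("ASR:", attack_success_rate(model, torch.rand(50, 1, 28, 28),
                                      torch.randint(0, 10, (50,))))
```

The per-attacker ASR metric in this sketch is also the quantity the multi-agent study below reports as shrinking when more attackers share a dataset.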
- Widen The Backdoor To Let More Attackers In [24.540853975732922]
We investigate the scenario of a multi-agent backdoor attack, where multiple non-colluding attackers craft and insert triggered samples in a shared dataset.
We discover a clear backfiring phenomenon: increasing the number of attackers shrinks each attacker's attack success rate (ASR).
We then exploit this phenomenon to minimize the attackers' collective ASR and maximize the defender's robustness accuracy.
arXiv Detail & Related papers (2021-10-09T13:53:57Z)
- Where Did You Learn That From? Surprising Effectiveness of Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning [114.9857000195174]
A major challenge to widespread industrial adoption of deep reinforcement learning is the potential vulnerability to privacy breaches.
We propose an adversarial attack framework tailored for testing the vulnerability of deep reinforcement learning algorithms to membership inference attacks.
arXiv Detail & Related papers (2021-09-08T23:44:57Z)
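For general context on membership inference, the sketch below implements the simplest loss-threshold decision rule: examples the model fits with unusually low loss are guessed to be training members. This is a generic baseline on assumed toy data, far simpler than the paper's tailored framework for temporally correlated deep-RL trajectories.

```python
# Generic loss-threshold membership inference (baseline decision rule only).
import torch
import torch.nn.functional as F

@torch.no_grad()
def membership_guess(model, x, y, threshold):
    """Guess 'member' (True) where the per-example loss falls below threshold."""
    return F.cross_entropy(model(x), y, reduction="none") < threshold

if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Linear(8, 3)
    x_in, y_in = torch.randn(32, 8), torch.randint(0, 3, (32,))    # members
    x_out, y_out = torch.randn(32, 8), torch.randint(0, 3, (32,))  # non-members
    opt = torch.optim.SGD(model.parameters(), lr=0.5)
    for _ in range(200):  # overfit so members become distinguishable
        opt.zero_grad()
        F.cross_entropy(model(x_in), y_in).backward()
        opt.step()
    with torch.no_grad():  # calibrate the threshold on non-member losses
        threshold = F.cross_entropy(model(x_out), y_out, reduction="none").median()
    print("members flagged:",
          membership_guess(model, x_in, y_in, threshold).float().mean().item())
    print("non-members flagged:",
          membership_guess(model, x_out, y_out, threshold).float().mean().item())
```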
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked when studying robustness to adversarial attacks.
This paper characterizes and analyzes its impact on backdoor attacks in federated learning through comprehensive experiments using synthetic data and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.