Characterizing Internal Evasion Attacks in Federated Learning
- URL: http://arxiv.org/abs/2209.08412v3
- Date: Sat, 21 Oct 2023 03:17:01 GMT
- Title: Characterizing Internal Evasion Attacks in Federated Learning
- Authors: Taejin Kim, Shubhranshu Singh, Nikhil Madaan and Carlee Joe-Wong
- Abstract summary: Federated learning allows for clients to jointly train a machine learning model.
Clients' models are vulnerable to attacks during the training and testing phases.
In this paper, we address the issue of adversarial clients performing "internal evasion attacks": crafting evasion attacks at test time to deceive other clients.
- Score: 12.873984200814533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning allows for clients in a distributed system to jointly
train a machine learning model. However, clients' models are vulnerable to
attacks during the training and testing phases. In this paper, we address the
issue of adversarial clients performing "internal evasion attacks": crafting
evasion attacks at test time to deceive other clients. For example, adversaries
may aim to deceive spam filters and recommendation systems trained with
federated learning for monetary gain. The adversarial clients have extensive
information about the victim model in a federated learning setting, as weight
information is shared amongst clients. We are the first to characterize the
transferability of such internal evasion attacks for different learning methods
and analyze the trade-off between model accuracy and robustness depending on
the degree of similarity in client data. We show that adversarial training
defenses in the federated learning setting only display limited improvements
against internal attacks. However, combining adversarial training with
personalized federated learning frameworks increases relative internal attack
robustness by 60% compared to federated adversarial training and performs well
under limited system resources.
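The attack surface here is concrete: because federated learning shares weight information among clients, an adversarial client holds a white-box copy of the very model its peers deploy at test time. Below is a minimal sketch of such an internal evasion attack using the standard fast gradient sign method (FGSM) as an illustrative attack; the paper's exact attack configurations are not given in this abstract, and the model and epsilon are toy stand-ins.

```python
# Hedged sketch of an internal evasion attack: a malicious client exploits
# white-box access to the shared global model (a by-product of federated
# weight sharing) to craft FGSM adversarial examples at test time.
import torch
import torch.nn as nn

def fgsm_evasion(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """One signed-gradient step that increases the victim model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Any client can run this: federated learning hands every participant the
# same global weights that its peers use at test time.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy stand-in model
x, y = torch.rand(1, 1, 28, 28), torch.tensor([3])
x_adv = fgsm_evasion(model, x, y)
```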
Related papers
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
Experiments indicate that this defense is highly effective against such attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against Adversarial Attacks [1.689369173057502]
Federated learning has created a decentralized method to train a machine learning model without needing direct access to client data.
However, malicious clients can corrupt the global model and degrade performance across all clients within a federation.
Our novel aggregation method, FedBayes, mitigates the effect of a malicious client by calculating the probabilities of a client's model weights.
arXiv Detail & Related papers (2023-12-04T21:37:50Z)
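The FedBayes abstract only names the mechanism, so the following is a hedged sketch of probability-based aggregation in its spirit: weight each client's update by the likelihood of its weights under a Gaussian fitted across all clients, so statistical outliers are down-weighted. The Gaussian scoring rule is an illustrative assumption, not the paper's exact formulation.

```python
# Hedged, FedBayes-inspired aggregation: down-weight clients whose weights
# are improbable under a per-parameter Gaussian fitted across all clients.
import numpy as np

def probability_weighted_average(client_weights: np.ndarray) -> np.ndarray:
    """client_weights: (n_clients, n_params) flattened model weights."""
    mu = client_weights.mean(axis=0)
    sigma = client_weights.std(axis=0) + 1e-8
    log_p = -0.5 * (((client_weights - mu) / sigma) ** 2).mean(axis=1)
    scores = np.exp(log_p - log_p.max())   # stabilize before normalizing
    scores /= scores.sum()
    return scores @ client_weights          # probability-weighted mean

weights = np.random.randn(10, 100)
weights[0] += 25.0                          # crude stand-in for a malicious client
aggregated = probability_weighted_average(weights)
```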
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
However, it is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose FedDefender, a new defense mechanism that operates on the client side to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
Combining adversarial training with federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- FLCert: Provably Secure Federated Learning against Poisoning Attacks [67.8846134295194]
We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks.
Our experiments show that the label predicted by our FLCert for a test input is provably unaffected by a bounded number of malicious clients.
arXiv Detail & Related papers (2022-10-02T17:50:04Z)
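FLCert's certificate comes from an ensemble construction: clients are partitioned into disjoint groups, one global model is trained per group, and predictions are made by majority vote, so a bounded number of malicious clients can corrupt only the groups containing them. A minimal sketch of the voting step; per-group training is abstracted away, and the lambdas below are hypothetical stand-ins for trained group models.

```python
# Minimal sketch of FLCert-style ensemble prediction: majority vote over
# labels predicted by per-group global models.
import numpy as np

def flcert_predict(group_models, x) -> int:
    """Majority vote over the labels predicted by the per-group models."""
    votes = np.array([m(x) for m in group_models])
    labels, counts = np.unique(votes, return_counts=True)
    return int(labels[counts.argmax()])

group_models = [lambda x: 1, lambda x: 1, lambda x: 0]  # toy trained models
print(flcert_predict(group_models, x=None))             # -> 1
```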
- Network-Level Adversaries in Federated Learning [21.222645649379672]
We study the impact of network-level adversaries on training federated learning models.
We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population.
We develop a server-side defense that mitigates these attacks by identifying and up-sampling clients likely to contribute positively to target accuracy.
arXiv Detail & Related papers (2022-08-27T02:42:04Z)
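A hedged sketch of the server-side defense idea above: estimate each client's contribution to accuracy on the target population (here a hypothetical per-client validation score, which the abstract does not specify) and up-sample above-average contributors during aggregation.

```python
# Hedged sketch: up-sample clients with above-average estimated contribution
# to target accuracy when aggregating their updates.
import numpy as np

def upsampled_aggregate(updates: np.ndarray, val_scores: np.ndarray) -> np.ndarray:
    """updates: (n_clients, n_params); val_scores: per-client accuracy estimates."""
    w = np.maximum(val_scores - val_scores.mean(), 0.0)  # keep above-average clients
    if w.sum() == 0.0:
        w = np.ones_like(val_scores)                     # fall back to uniform weights
    w /= w.sum()
    return w @ updates

updates = np.random.randn(5, 20)
val_scores = np.array([0.9, 0.2, 0.8, 0.5, 0.1])
aggregated = upsampled_aggregate(updates, val_scores)
```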
- Certified Robustness in Federated Learning [54.03574895808258]
We study the interplay between federated training, personalization, and certified robustness.
We find that the simple federated averaging technique is effective in building not only more accurate but also more certifiably robust models.
arXiv Detail & Related papers (2022-06-06T12:10:53Z)
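The "simple federated averaging technique" (FedAvg) referenced above is standard: each round the server replaces the global weights with a data-size-weighted mean of the clients' locally trained weights. A minimal sketch of the aggregation step:

```python
# Federated averaging (FedAvg): data-size-weighted mean of client weights.
import numpy as np

def fedavg(client_weights: np.ndarray, n_samples: np.ndarray) -> np.ndarray:
    """client_weights: (n_clients, n_params); n_samples: examples per client."""
    coef = n_samples / n_samples.sum()
    return coef @ client_weights

client_weights = np.random.randn(4, 50)
n_samples = np.array([100, 300, 50, 550])
global_weights = fedavg(client_weights, n_samples)
```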
- Ensemble Federated Adversarial Training with Non-IID data [1.5878082907673585]
Adversarial samples can confuse and deceive client models for malicious purposes.
We introduce a novel Ensemble Federated Adversarial Training Method, termed as EFAT.
Our proposed method achieves promising results compared with solely combining federated learning with adversarial approaches.
arXiv Detail & Related papers (2021-10-26T03:55:20Z)
- Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning [11.117880929232575]
Federated learning is vulnerable to Byzantine poisoning attacks.
We propose a dynamic aggregation operator that discards adversarial clients at each round.
The results show that the dynamic selection of the clients to aggregate enhances the performance of the global learning model.
arXiv Detail & Related papers (2020-07-29T18:02:11Z)
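A hedged sketch of the dynamic-discard idea above: each round, drop the client updates farthest from a robust center before averaging. The median-distance filter below is an illustrative stand-in for the paper's actual client-selection rule.

```python
# Hedged sketch of a dynamic aggregation operator that discards the updates
# farthest from the coordinate-wise median before averaging the rest.
import numpy as np

def dynamic_discard_average(updates: np.ndarray, n_discard: int = 2) -> np.ndarray:
    """updates: (n_clients, n_params) local model updates for one round."""
    center = np.median(updates, axis=0)                  # robust center estimate
    dist = np.linalg.norm(updates - center, axis=1)
    keep = np.argsort(dist)[: len(updates) - n_discard]  # drop the farthest clients
    return updates[keep].mean(axis=0)

updates = np.random.randn(10, 50)
updates[3] += 40.0                                       # crude Byzantine update
aggregated = dynamic_discard_average(updates)
```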
- A Framework for Evaluating Gradient Leakage Attacks in Federated Learning [14.134217287912008]
Federated learning (FL) is an emerging distributed machine learning framework for collaborative model training with a network of clients.
Recent studies have shown that even sharing local parameter updates from a client to the federated server may be susceptible to gradient leakage attacks.
We present a principled framework for evaluating and comparing different forms of client privacy leakage attacks.
arXiv Detail & Related papers (2020-04-22T05:15:03Z)
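For intuition on what a gradient leakage attack does, the sketch below follows the well-known "deep leakage from gradients" recipe: optimize a dummy input so that its gradient matches the gradient the victim shared. It assumes the attacker knows the true label (a common simplification) and uses a toy linear model for illustration; the framework paper's own attack variants may differ.

```python
# Hedged sketch of gradient leakage: recover a private input by matching
# gradients on a dummy input to the victim's leaked gradients.
import torch
import torch.nn as nn

model = nn.Linear(8, 2)
x_true, y_true = torch.randn(1, 8), torch.tensor([1])
leaked_grads = torch.autograd.grad(
    nn.functional.cross_entropy(model(x_true), y_true), model.parameters())

x_dummy = torch.randn(1, 8, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        nn.functional.cross_entropy(model(x_dummy), y_true),
        model.parameters(), create_graph=True)
    # Penalize the mismatch between the dummy and the leaked gradients.
    loss = sum(((dg - lg) ** 2).sum() for dg, lg in zip(dummy_grads, leaked_grads))
    loss.backward()
    opt.step()
# x_dummy now approximates the victim's private input x_true.
```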