Security Analysis of SplitFed Learning
- URL: http://arxiv.org/abs/2212.01716v1
- Date: Sun, 4 Dec 2022 01:16:45 GMT
- Title: Security Analysis of SplitFed Learning
- Authors: Momin Ahmad Khan, Virat Shejwalkar, Amir Houmansadr, Fatima Muhammad Anwar
- Abstract summary: Split Learning (SL) and Federated Learning (FL) are two prominent distributed collaborative learning techniques.
Recent work has explored the security vulnerabilities of FL in the form of poisoning attacks.
In this paper, we perform the first ever empirical analysis of SplitFed's robustness to strong model poisoning attacks.
- Score: 22.38766677215997
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Split Learning (SL) and Federated Learning (FL) are two prominent distributed
collaborative learning techniques that maintain data privacy by allowing
clients to never share their private data with other clients and servers, and
find extensive IoT applications in smart healthcare, smart cities, and smart
industry. Prior work has extensively explored the security vulnerabilities of
FL in the form of poisoning attacks. To mitigate the effect of these attacks,
several defenses have also been proposed. Recently, a hybrid of both learning
techniques has emerged (commonly known as SplitFed) that capitalizes on their
advantages (fast training) and eliminates their intrinsic disadvantages
(centralized model updates). In this paper, we perform the first ever empirical
analysis of SplitFed's robustness to strong model poisoning attacks. We observe
that the model updates in SplitFed have significantly smaller dimensionality
compared to FL, which is known to suffer from the curse of dimensionality. We show that
large models that have higher dimensionality are more susceptible to privacy
and security attacks, whereas the clients in SplitFed do not have the complete
model and have lower dimensionality, making them more robust to existing model
poisoning attacks. Our results show that the accuracy reduction due to the
model poisoning attack is 5x lower for SplitFed compared to FL.
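To make the abstract's dimensionality argument concrete, the sketch below is a hypothetical illustration (not code from the paper; the architecture and cut point are assumptions) of how a SplitFed-style client holds only the layers up to a cut layer, and compares how many parameters a client can update, and hence poison, in FL versus SplitFed.

```python
# Minimal, hypothetical sketch: a SplitFed client holds only the layers up to
# a chosen cut layer, so the parameter vector a malicious client can manipulate
# is much smaller than the full model exchanged in FL. The architecture and
# cut point below are illustrative assumptions, not the paper's setup.
import torch.nn as nn

full_model = nn.Sequential(                       # what an FL client holds and updates
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

cut_layer = 3                                     # cut after the first conv block
client_part = full_model[:cut_layer]              # held and updated by a SplitFed client
server_part = full_model[cut_layer:]              # held and updated by the main server

def num_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

full_dim = num_params(full_model)
client_dim = num_params(client_part)
print(f"FL client update dimensionality:       {full_dim:,}")
print(f"SplitFed client update dimensionality: {client_dim:,}")
print(f"client share of parameters:            {client_dim / full_dim:.2%}")
```

Under this toy split, the client-side update is well under 1% of the full model's parameters, which is the intuition behind the paper's observation that existing model poisoning attacks lose much of their leverage in SplitFed.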
Related papers
- Poisoning with A Pill: Circumventing Detection in Federated Learning [33.915489514978084]
This paper proposes a generic and attack-agnostic augmentation approach designed to enhance the effectiveness and stealthiness of existing FL poisoning attacks against detection in FL.
Specifically, we employ a three-stage methodology that strategically constructs, generates, and injects poison into a pill during FL training; the stages are named pill construction, pill poisoning, and pill injection, respectively.
arXiv Detail & Related papers (2024-07-22T05:34:47Z)
- Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing [6.957420925496431]
Federated learning (FL) allows training machine learning models on distributed data without compromising privacy.
FL is vulnerable to model-poisoning attacks where malicious clients tamper with their local models to manipulate the global model.
In this work, we investigate the resilience of the partial-sharing online FL (PSO-Fed) algorithm against such attacks.
arXiv Detail & Related papers (2024-03-19T19:15:38Z)
- FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models [2.7539214125526534]
Federated Learning (FL) thrives in training a global model with numerous clients.
Recent research proposed poisoning attacks that cause a catastrophic loss in the accuracy of the global model.
We propose FLGuard, a novel Byzantine-robust FL method that detects malicious clients and discards their malicious local updates.
arXiv Detail & Related papers (2024-03-05T10:36:27Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender as a method to combat FL targeted attacks.
arXiv Detail & Related papers (2022-07-02T16:04:46Z)
- Deep Leakage from Model in Federated Learning [6.001369927772649]
We present two novel frameworks to demonstrate that transmitting model weights is likely to leak private local data of clients.
We also introduce two defenses to the proposed attacks and evaluate their protection effects.
arXiv Detail & Related papers (2022-06-10T05:56:00Z)
- Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large quantities to amplify the impact of their model updates in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
arXiv Detail & Related papers (2022-05-22T15:13:23Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
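Several of the papers listed above attack or defend FedAvg-style aggregation against poisoned model updates. As a point of reference, the snippet below is a generic, hypothetical sketch of that threat model (a scaled sign-flip update submitted by a few malicious clients); it is not the method of any specific paper above, and all names, sizes, and constants are assumptions.

```python
# Generic sketch of the model-poisoning threat model the papers above study:
# malicious clients submit crafted updates that pull the FedAvg average away
# from the benign direction. All quantities here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
dim, n_benign, n_malicious = 1_000, 18, 2

# Benign clients send noisy versions of the true update direction.
true_update = rng.normal(size=dim)
benign_updates = [true_update + 0.1 * rng.normal(size=dim) for _ in range(n_benign)]

# A simple untargeted attack: flip the sign of the mean benign update and scale it.
attack_scale = 10.0
malicious_update = -attack_scale * np.mean(benign_updates, axis=0)
all_updates = benign_updates + [malicious_update] * n_malicious

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

fedavg = np.mean(all_updates, axis=0)      # unweighted FedAvg aggregation
print(f"cosine(FedAvg aggregate, true update) = {cosine(fedavg, true_update):.3f}")

# Coordinate-wise median, a common robust aggregation baseline, largely resists this attack.
median_agg = np.median(all_updates, axis=0)
print(f"cosine(median aggregate, true update) = {cosine(median_agg, true_update):.3f}")
```

The defenses listed above refine this basic picture with, for example, malicious-client detection, frequency-domain filtering of updates, or quantity-aware weighting.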
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.