MISA: Unveiling the Vulnerabilities in Split Federated Learning
- URL: http://arxiv.org/abs/2312.11026v2
- Date: Tue, 19 Dec 2023 06:32:32 GMT
- Title: MISA: Unveiling the Vulnerabilities in Split Federated Learning
- Authors: Wei Wan, Yuxuan Ning, Shengshan Hu, Lulu Xue, Minghui Li, Leo Yu
Zhang, and Hai Jin
- Abstract summary: Federated learning (FL) and split learning (SL) are prevailing distributed paradigms in recent years.
We present a novel poisoning attack known as MISA. It poisons both the top and bottom models, causing a drastic accuracy collapse.
- Score: 22.83568634599664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: \textit{Federated learning} (FL) and \textit{split learning} (SL) are
prevailing distributed paradigms in recent years. They both enable shared
global model training while keeping data localized on users' devices. The
former excels in parallel execution capabilities, while the latter enjoys low
dependence on edge computing resources and strong privacy protection.
\textit{Split federated learning} (SFL) combines the strengths of both FL and
SL, making it one of the most popular distributed architectures. Furthermore, a
recent study has claimed that SFL exhibits robustness against poisoning
attacks, with a fivefold improvement compared to FL in terms of robustness.
In this paper, we present a novel poisoning attack known as MISA. It poisons
both the top and bottom models, causing a \textbf{\underline{misa}}lignment in
the global model, ultimately leading to a drastic accuracy collapse. This
attack unveils the vulnerabilities in SFL, challenging the conventional belief
that SFL is robust against poisoning attacks. Extensive experiments demonstrate
that our proposed MISA poses a significant threat to the availability of SFL,
underscoring the imperative for academia and industry to accord this matter due
attention.
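To make the architecture under attack concrete, here is a minimal sketch of one SFL round: each client runs a "bottom" model up to a cut layer, the server finishes the forward pass with the shared "top" model, and client bottom models are averaged FedAvg-style. All names, shapes, and the linear toy models are illustrative assumptions, not the paper's setup.

```python
# Toy split federated learning (SFL) round.
# Assumption: linear bottom/top models and plain FedAvg; real SFL uses
# neural networks, backpropagation across the cut layer, and labels.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLIENTS, IN_DIM, CUT_DIM, OUT_DIM = 3, 8, 4, 2

# Each client holds its own bottom-model weights; the server holds the top.
bottoms = [rng.normal(size=(IN_DIM, CUT_DIM)) for _ in range(NUM_CLIENTS)]
top = rng.normal(size=(CUT_DIM, OUT_DIM))

def client_forward(x, w_bottom):
    # Client computes the "smashed data" at the cut layer and sends it up.
    return x @ w_bottom

def server_forward(smashed, w_top):
    # Server completes the forward pass with the top model.
    return smashed @ w_top

def fedavg(weights):
    # FL-style aggregation of the client bottom models.
    return np.mean(weights, axis=0)

# One round: clients run their halves in parallel, the server finishes each.
x = rng.normal(size=(5, IN_DIM))
outputs = [server_forward(client_forward(x, w), top) for w in bottoms]

# The aggregation step SFL inherits from FL; MISA targets both this
# averaged bottom model and the server-side top model.
global_bottom = fedavg(bottoms)
```

Because the global model is stitched together from two separately updated halves, poisoning both halves (as MISA does) can misalign them even when each half looks plausible on its own.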
Related papers
- Enhancing Split Learning with Sharded and Blockchain-Enabled SplitFed Approaches [0.7911407896206765]
Collaborative and distributed learning techniques, such as Federated Learning (FL) and Split Learning (SL), hold significant promise for leveraging sensitive data in privacy-critical domains. However, FL and SL suffer from key limitations -- FL imposes substantial computational demands on clients, while SL leads to prolonged training times. To overcome these challenges, SplitFed Learning (SFL) was introduced as a hybrid approach that combines the strengths of FL and SL.
arXiv Detail & Related papers (2025-09-29T22:24:24Z) - Not All Edges are Equally Robust: Evaluating the Robustness of Ranking-Based Federated Learning [49.68790647579509]
Federated Ranking Learning (FRL) is a state-of-the-art FL framework that stands out for its communication efficiency and resilience to poisoning attacks.
We introduce a novel local model poisoning attack against FRL, namely the Vulnerable Edge Manipulation (VEM) attack.
Our attack achieves an overall 53.23% attack impact and is 3.7x more impactful than existing methods.
arXiv Detail & Related papers (2025-03-12T00:38:14Z) - FuseFL: One-Shot Federated Learning through the Lens of Causality with Progressive Model Fusion [48.90879664138855]
One-shot Federated Learning (OFL) significantly reduces communication costs in FL by aggregating trained models only once.
However, the performance of advanced OFL methods is far behind the normal FL.
We propose a novel learning approach to endow OFL with superb performance and low communication and storage costs, termed as FuseFL.
arXiv Detail & Related papers (2024-10-27T09:07:10Z) - R-SFLLM: Jamming Resilient Framework for Split Federated Learning with Large Language Models [83.77114091471822]
Split federated learning (SFL) is a compute-efficient paradigm in distributed machine learning (ML).
A challenge in SFL, particularly when deployed over wireless channels, is the susceptibility of transmitted model parameters to adversarial jamming.
This is particularly pronounced for word embedding parameters in large language models (LLMs), which are crucial for language understanding.
A physical layer framework is developed for resilient SFL with LLMs (R-SFLLM) over wireless networks.
arXiv Detail & Related papers (2024-07-16T12:21:29Z) - SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks [12.580891810557482]
Federated learning (FL) is an attractive approach for training on distributed data while preserving privacy.
We propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of the locally purified model.
We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against various poisoning attacks.
arXiv Detail & Related papers (2023-09-19T13:31:33Z) - Universal Adversarial Backdoor Attacks to Fool Vertical Federated
Learning in Cloud-Edge Collaboration [13.067285306737675]
This paper investigates the vulnerability of vertical federated learning (VFL) in the context of binary classification tasks.
We introduce a universal adversarial backdoor (UAB) attack to poison the predictions of VFL.
Our approach surpasses existing state-of-the-art methods, achieving up to 100% backdoor task performance.
arXiv Detail & Related papers (2023-04-22T15:31:15Z) - Model Extraction Attacks on Split Federated Learning [36.81477031150716]
Federated Learning (FL) is a popular collaborative learning scheme involving multiple clients and a server.
FL focuses on protecting clients' data but turns out to be highly vulnerable to Intellectual Property (IP) threats.
This paper shows how malicious clients can launch Model Extraction (ME) attacks by querying the gradient information from the server side.
arXiv Detail & Related papers (2023-03-13T20:21:51Z) - WW-FL: Secure and Private Large-Scale Federated Learning [15.412475066687723]
Federated learning (FL) is an efficient approach for large-scale distributed machine learning that promises data privacy by keeping training data on client devices.
Recent research has uncovered vulnerabilities in FL, impacting both security and privacy through poisoning attacks.
We propose WW-FL, an innovative framework that combines secure multi-party computation with hierarchical FL to guarantee data and global model privacy.
arXiv Detail & Related papers (2023-02-20T11:02:55Z) - Security Analysis of SplitFed Learning [22.38766677215997]
Split Learning (SL) and Federated Learning (FL) are two prominent distributed collaborative learning techniques.
Recent work has explored the security vulnerabilities of FL in the form of poisoning attacks.
In this paper, we perform the first ever empirical analysis of SplitFed's robustness to strong model poisoning attacks.
arXiv Detail & Related papers (2022-12-04T01:16:45Z) - Unraveling the Connections between Privacy and Certified Robustness in
Federated Learning Against Poisoning Attacks [68.20436971825941]
Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users.
Several studies have shown that FL is vulnerable to poisoning attacks.
To protect the privacy of local users, FL is usually trained in a differentially private way.
arXiv Detail & Related papers (2022-09-08T21:01:42Z) - DeFL: Decentralized Weight Aggregation for Cross-silo Federated Learning [2.43923223501858]
Federated learning (FL) is an emerging and promising paradigm for privacy-preserving machine learning (ML).
We propose DeFL, a novel decentralized weight aggregation framework for cross-silo FL.
DeFL eliminates the central server by aggregating weights on each participating node and weights of only the current training round are maintained and synchronized among all nodes.
arXiv Detail & Related papers (2022-08-01T13:36:49Z) - Desirable Companion for Vertical Federated Learning: New Zeroth-Order
Gradient Based Algorithm [140.25480610981504]
A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy, communication, and computation efficiency.
We propose a novel VFL framework with black-box scalability.
arXiv Detail & Related papers (2022-03-19T13:55:47Z) - Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or cost enormous extra computation and memory.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
arXiv Detail & Related papers (2022-01-27T08:43:11Z) - FL-WBC: Enhancing Robustness against Model Poisoning Attacks in
Federated Learning from a Client Perspective [35.10520095377653]
Federated learning (FL) is a popular distributed learning framework that trains a global model through iterative communications between a central server and edge devices.
Recent works have demonstrated that FL is vulnerable to model poisoning attacks.
We propose a client-based defense, named White Blood Cell for Federated Learning (FL-WBC), which can mitigate model poisoning attacks.
arXiv Detail & Related papers (2021-10-26T17:13:35Z) - Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site makes no guarantees about the quality of the listed information and is not responsible for any consequences of its use.