Model Extraction Attacks on Split Federated Learning
- URL: http://arxiv.org/abs/2303.08581v1
- Date: Mon, 13 Mar 2023 20:21:51 GMT
- Title: Model Extraction Attacks on Split Federated Learning
- Authors: Jingtao Li, Adnan Siraj Rakin, Xing Chen, Li Yang, Zhezhi He, Deliang
Fan, Chaitali Chakrabarti
- Abstract summary: Federated Learning (FL) is a popular collaborative learning scheme involving multiple clients and a server.
FL focuses on protecting clients' data but turns out to be highly vulnerable to Intellectual Property (IP) threats.
This paper shows how malicious clients can launch Model Extraction (ME) attacks by querying the gradient information from the server side.
- Score: 36.81477031150716
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a popular collaborative learning scheme involving
multiple clients and a server. FL focuses on protecting clients' data but turns
out to be highly vulnerable to Intellectual Property (IP) threats. Since FL
periodically collects and distributes the model parameters, a free-rider can
download the latest model and thus steal model IP. Split Federated Learning
(SFL), a recent variant of FL that supports training with resource-constrained
clients, splits the model into two, giving one part of the model to clients
(client-side model), and the remaining part to the server (server-side model).
Thus SFL prevents model leakage by design. Moreover, by blocking prediction
queries, it can be made resistant to advanced IP threats such as traditional
Model Extraction (ME) attacks. While SFL is better than FL in terms of
providing IP protection, it is still vulnerable. In this paper, we expose the
vulnerability of SFL and show how malicious clients can launch ME attacks by
querying the gradient information from the server side. We propose five
variants of the ME attack which differ in gradient usage as well as in the
data assumptions. We show that in practical settings, the proposed ME attacks
work exceptionally well for SFL. For instance, when the server-side model has
five layers, our proposed ME attack can achieve over 90% accuracy with less
than 2% accuracy degradation with VGG-11 on CIFAR-10.
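As a concrete illustration of the interface the attack exploits, the sketch below shows, in PyTorch-style code, how an SFL server returns cut-layer gradients to the client and how a malicious client could reuse those gradients to fit a local surrogate of the server-side model via gradient matching. This is a minimal sketch under assumed class names, model split, and hyper-parameters; it is one plausible instantiation, not the paper's exact implementation of its five attack variants.

```python
# Minimal sketch of the SFL cut-layer interface and a gradient-matching
# surrogate attack. Class names, the model split, and hyper-parameters are
# illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SFLServer:
    """Holds the server-side model and returns cut-layer gradients, as in SFL."""

    def __init__(self, server_model: nn.Module, lr: float = 0.01):
        self.model = server_model
        self.opt = torch.optim.SGD(self.model.parameters(), lr=lr)

    def train_step(self, smashed: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        smashed = smashed.detach().requires_grad_(True)  # activations sent by the client
        loss = F.cross_entropy(self.model(smashed), labels)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # SFL sends d(loss)/d(activations) back so the client can finish backprop.
        return smashed.grad.detach()


class MaliciousClient:
    """Fits a local surrogate of the server-side model by matching the
    gradients the real server returns at the cut layer (one possible ME
    strategy; the paper studies several variants)."""

    def __init__(self, client_model: nn.Module, surrogate: nn.Module, lr: float = 0.01):
        self.client_model = client_model
        self.surrogate = surrogate
        self.opt = torch.optim.SGD(self.surrogate.parameters(), lr=lr)

    def attack_step(self, server: SFLServer, x: torch.Tensor, y: torch.Tensor) -> float:
        smashed = self.client_model(x)
        true_grad = server.train_step(smashed, y)         # query, as an honest client would
        z = smashed.detach().requires_grad_(True)
        surrogate_loss = F.cross_entropy(self.surrogate(z), y)
        (fake_grad,) = torch.autograd.grad(surrogate_loss, z, create_graph=True)
        match_loss = F.mse_loss(fake_grad, true_grad)     # align surrogate and server gradients
        self.opt.zero_grad()
        match_loss.backward()
        self.opt.step()
        return match_loss.item()
```

After enough queries, stacking the client-side model and the trained surrogate would give a complete substitute network, which is roughly how an extracted model's accuracy would be compared against the victim.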
Related papers
- BlindFL: Segmented Federated Learning with Fully Homomorphic Encryption [0.0]
Federated learning (FL) is a privacy-preserving edge-to-cloud technique used for training and deploying AI models on edge devices.
BlindFL is a framework for global model aggregation in which clients encrypt and send a subset of their local model update.
BlindFL significantly impedes client-side model poisoning attacks, a first for single-key, FHE-based FL schemes.
arXiv Detail & Related papers (2025-01-20T18:42:21Z)
- FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models [2.7539214125526534]
Federated Learning (FL) thrives in training a global model with numerous clients.
Recent research proposed poisoning attacks that cause a catastrophic loss in the accuracy of the global model.
We propose FLGuard, a novel byzantine-robust FL method that detects malicious clients and discards malicious local updates.
arXiv Detail & Related papers (2024-03-05T10:36:27Z)
- Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning [51.26221422507554]
Federated learning (FL) is an effective collaborative learning framework to coordinate data and computation resources from massive and distributed clients in training.
Such collaboration results in non-trivial intellectual property (IP) represented by the model parameters that should be protected and shared by the whole party rather than an individual user.
To block such IP leakage, it is essential to make the IP identifiable in the shared model and locate the anonymous infringer who first leaks it.
We propose Decodable Unique Watermarking (DUW) for complying with the requirements of accountable FL.
arXiv Detail & Related papers (2023-12-06T00:47:55Z)
- When MiniBatch SGD Meets SplitFed Learning: Convergence Analysis and Performance Evaluation [9.815046814597238]
Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data.
SplitFed learning (SFL) is a recent distributed approach that alleviates computation workload at the client device by splitting the model at a cut layer into two parts.
MiniBatch-SFL incorporates MiniBatch SGD into SFL, where the clients train the client-side model in an FL fashion while the server trains the server-side model.
arXiv Detail & Related papers (2023-08-23T06:51:22Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a global shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- BAFFLE: A Baseline of Backpropagation-Free Federated Learning [71.09425114547055]
Federated learning (FL) is a general principle for decentralized clients to train a server model collectively without sharing local data.
We develop backpropagation-free federated learning, dubbed BAFFLE, in which backpropagation is replaced by multiple forward processes to estimate gradients.
BAFFLE is 1) memory-efficient and easily fits within upload bandwidth; 2) compatible with inference-only hardware optimization and model quantization or pruning; and 3) well-suited to trusted execution environments. (A generic sketch of this forward-only gradient estimation idea appears after this list.)
arXiv Detail & Related papers (2023-01-28T13:34:36Z)
- Shielding Federated Learning Systems against Inference Attacks with ARM TrustZone [0.0]
Federated Learning (FL) opens new perspectives for training machine learning models while keeping personal data on the users' premises.
The long list of inference attacks that leak private data from gradients, published in recent years, has emphasized the need for effective protection mechanisms.
We present GradSec, a solution that allows protecting in a TEE only sensitive layers of a machine learning model.
arXiv Detail & Related papers (2022-08-11T15:53:07Z)
- ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning [34.891023451516304]
Split Federated Learning (SFL) is a distributed training scheme where multiple clients send intermediate activations (i.e., feature map) instead of raw data to a central server.
Existing works on protecting SFL only consider inference and do not handle attacks during training.
We propose ResSFL, a Split Federated Learning Framework that is designed to be MI-resistant during training.
arXiv Detail & Related papers (2022-05-09T02:23:24Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- BlockFLA: Accountable Federated Learning via Hybrid Blockchain Architecture [11.908715869667445]
Federated Learning (FL) is a distributed and decentralized machine learning protocol.
It has been shown that an attacker can inject backdoors into the trained model during FL.
We develop a hybrid blockchain-based FL framework that uses smart contracts to automatically detect, and punish the attackers.
arXiv Detail & Related papers (2020-10-14T22:43:39Z)
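For the forward-only gradient estimation idea referenced in the BAFFLE entry above, the following generic zeroth-order sketch shows how gradients can be approximated from loss evaluations alone, without backpropagation. It is illustrative only; the probing scheme, function names, and constants are assumptions and not BAFFLE's exact algorithm.

```python
# Generic zeroth-order (forward-only) gradient estimation: perturb the
# parameters, evaluate the loss with forward passes, and average the
# finite differences. Illustrative only; not BAFFLE's exact scheme.
import torch


def estimate_gradient(params: torch.Tensor, loss_fn, num_probes: int = 50,
                      sigma: float = 1e-3) -> torch.Tensor:
    """Approximate d(loss)/d(params) from 2 * num_probes forward evaluations."""
    grad = torch.zeros_like(params)
    for _ in range(num_probes):
        u = torch.randn_like(params)                 # random probe direction
        loss_plus = loss_fn(params + sigma * u)      # forward pass only
        loss_minus = loss_fn(params - sigma * u)     # forward pass only
        grad += (loss_plus - loss_minus) / (2 * sigma) * u
    return grad / num_probes


# Toy usage: minimize ||w - target||^2 without ever calling backward().
target = torch.tensor([1.0, -2.0, 0.5])
w = torch.zeros(3)
for _ in range(200):
    w -= 0.1 * estimate_gradient(w, lambda p: ((p - target) ** 2).sum())
```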