ResSFL: A Resistance Transfer Framework for Defending Model Inversion
Attack in Split Federated Learning
- URL: http://arxiv.org/abs/2205.04007v1
- Date: Mon, 9 May 2022 02:23:24 GMT
- Title: ResSFL: A Resistance Transfer Framework for Defending Model Inversion
Attack in Split Federated Learning
- Authors: Jingtao Li, Adnan Siraj Rakin, Xing Chen, Zhezhi He, Deliang Fan,
Chaitali Chakrabarti
- Abstract summary: Split Federated Learning (SFL) is a distributed training scheme where multiple clients send intermediate activations (i.e., feature maps) instead of raw data to a central server.
Existing works on protecting SFL only consider inference and do not handle attacks during training.
We propose ResSFL, a Split Federated Learning Framework that is designed to be MI-resistant during training.
- Score: 34.891023451516304
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work aims to tackle the Model Inversion (MI) attack on Split
Federated Learning (SFL). SFL is a recent distributed training scheme where
multiple clients send intermediate activations (i.e., feature maps), instead
of raw data, to a central server. While such a scheme helps reduce the
computational load at the client end, it allows the server to reconstruct raw
data from the intermediate activations. Existing works on protecting SFL only
consider inference and do not handle attacks during training. We therefore
propose ResSFL, a Split Federated Learning framework that is designed to be
MI-resistant during training. It is based on deriving a resistant feature
extractor via attacker-aware training, and using this extractor to initialize
the client-side model prior to standard SFL training. This approach avoids
both the computational cost of using a strong inversion model in client-side
adversarial training and the vulnerability to attacks launched in early
training epochs. On the CIFAR-100 dataset, our proposed framework
successfully mitigates the MI attack on a VGG-11 model, raising the
attacker's reconstruction Mean-Square-Error to 0.050, compared to 0.005 for
the baseline system. The framework achieves 67.5% accuracy (only a 1%
accuracy drop) with very low computation overhead. Code is released at:
https://github.com/zlijingtao/ResSFL.
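The attacker-aware training step can be read as a min-max game: a simulated inversion network is trained to reconstruct inputs from the cut-layer activations, while the feature extractor is trained to keep task accuracy high and drive the attacker's reconstruction error up. Below is a minimal PyTorch sketch of one such alternating step; the module names, the optimizer split, and the trade-off coefficient `lam` are illustrative assumptions rather than the authors' exact implementation (the released code is the authoritative reference).

```python
import torch
import torch.nn.functional as F

# Assumed modules: `extractor` is the client-side model producing cut-layer
# activations, `server_head` finishes the classification, and `inverter` is
# the simulated attacker trying to reconstruct the inputs.
def attacker_aware_step(extractor, server_head, inverter,
                        opt_model, opt_inv, x, y, lam=1.0):
    # 1) Attacker step: train the inversion network to reconstruct x
    #    from the intermediate activations (extractor frozen here).
    z = extractor(x).detach()
    inv_loss = F.mse_loss(inverter(z), x)
    opt_inv.zero_grad()
    inv_loss.backward()
    opt_inv.step()

    # 2) Defender step: minimize task loss while *maximizing* the
    #    attacker's reconstruction MSE (hence the minus sign).
    z = extractor(x)
    task_loss = F.cross_entropy(server_head(z), y)
    rec_loss = F.mse_loss(inverter(z), x)
    total = task_loss - lam * rec_loss
    opt_model.zero_grad()  # opt_model holds extractor + server_head params
    total.backward()
    opt_model.step()
    return task_loss.item(), rec_loss.item()
```

Once pre-trained this way, the resistant extractor simply initializes the client-side model for standard SFL training, so the expensive inversion model is no longer needed during the main training phase.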
Related papers
- BlindFL: Segmented Federated Learning with Fully Homomorphic Encryption [0.0]
Federated learning (FL) is a privacy-preserving edge-to-cloud technique used for training and deploying AI models on edge devices.
BlindFL is a framework for global model aggregation in which each client encrypts and sends a subset of its local model update.
BlindFL significantly impedes client-side model poisoning attacks, a first for single-key, FHE-based FL schemes.
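To give a rough sense of the aggregation step: since the scheme is additively homomorphic at heart, the server can sum encrypted partial updates without ever decrypting them. The sketch below uses the Paillier scheme from the `phe` package as a stand-in for the paper's FHE scheme, and the subset rule (largest-magnitude coordinates) is an assumption for illustration only.

```python
from phe import paillier  # pip install phe; additively homomorphic

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def encrypt_subset(update, k):
    """Client side: encrypt only the k largest-magnitude coordinates."""
    idx = sorted(range(len(update)), key=lambda i: abs(update[i]),
                 reverse=True)[:k]
    return {i: public_key.encrypt(update[i]) for i in idx}

# Server side: sum ciphertexts coordinate-wise without decrypting.
clients = [[0.5, -0.1, 0.9], [0.2, 0.4, -0.3]]
encrypted = [encrypt_subset(u, k=2) for u in clients]
agg = {}
for enc in encrypted:
    for i, c in enc.items():
        agg[i] = agg[i] + c if i in agg else c

# Only the key holder can recover the aggregated update.
print({i: private_key.decrypt(c) for i, c in agg.items()})
```

In a real single-key deployment the private key would of course stay with the key holder, not with the aggregating server; that separation is precisely where BlindFL's design matters.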
arXiv Detail & Related papers (2025-01-20T18:42:21Z)
- Speed Up Federated Learning in Heterogeneous Environment: A Dynamic Tiering Approach [5.504000607257414]
Federated learning (FL) enables collaboratively training a model while keeping the training data decentralized and private.
A significant impediment to training models with FL, especially large models, is the limited resources of devices with heterogeneous computation and communication capacities, as well as varying task sizes.
We propose the Dynamic Tiering-based Federated Learning (DTFL) system where slower clients dynamically offload part of the model to the server to alleviate resource constraints and speed up training.
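A minimal sketch of the tiering rule, assuming a simple per-layer timing model: each client keeps as many layers as fit its compute budget and offloads the rest to the server. The budget and timing model below are illustrative, not the paper's exact profiling scheme.

```python
def assign_cut_layer(client_ms_per_layer, total_layers, budget_ms):
    """DTFL-style tiering rule (simplified): choose how many layers a
    client keeps locally so its per-step compute stays under a time
    budget; the remaining layers are offloaded to the server."""
    local_layers = min(total_layers, int(budget_ms // client_ms_per_layer))
    return max(1, local_layers)  # keep at least the first layer on-device

# Slower clients (higher ms/layer) keep fewer layers locally.
for speed in (2.0, 8.0, 20.0):
    print(speed, "->", assign_cut_layer(speed, total_layers=12, budget_ms=40))
```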
arXiv Detail & Related papers (2023-12-09T19:09:19Z)
- When MiniBatch SGD Meets SplitFed Learning: Convergence Analysis and Performance Evaluation [9.815046814597238]
Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data.
SplitFed learning (SFL) is a recent distributed approach that alleviates computation workload at the client device by splitting the model at a cut layer into two parts.
MiniBatch-SFL incorporates MiniBatch SGD into SFL, where the clients train the client-side model in an FL fashion while the server trains the server-side model.
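The server-side half of this can be sketched as follows: cut-layer activations from several clients are stacked into one minibatch, the server-side model takes an ordinary SGD step, and per-client activation gradients are returned. Module names and the gradient-return path are assumptions.

```python
import torch
import torch.nn.functional as F

def server_minibatch_step(server_model, opt, activations, labels):
    """MiniBatch-SFL-style server step: stack the cut-layer activations
    received from all clients into a single minibatch, as if running
    centralized minibatch SGD on the server-side model."""
    z = torch.cat(activations, dim=0).requires_grad_(True)
    y = torch.cat(labels, dim=0)
    loss = F.cross_entropy(server_model(z), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Gradients w.r.t. each client's activations are sent back so the
    # clients can finish backprop through their local models.
    sizes = [a.shape[0] for a in activations]
    return torch.split(z.grad.detach(), sizes, dim=0)
```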
arXiv Detail & Related papers (2023-08-23T06:51:22Z)
- One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training [54.622474306336635]
A new weight modification attack called the bit flip attack (BFA) was proposed, which exploits memory fault injection techniques.
We propose a training-assisted bit flip attack, in which the adversary is involved in the training stage in order to build and release a high-risk model.
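To make the fault model concrete, the toy snippet below flips a single bit in a float32 weight's IEEE-754 representation, the kind of corruption that rowhammer-style fault injection produces; the bit position chosen here is arbitrary.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32's IEEE-754 encoding, mimicking a
    memory fault injected into a stored model weight."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

w = 0.05
print(w, "->", flip_bit(w, bit=30))  # one exponent-bit flip: catastrophic
```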
arXiv Detail & Related papers (2023-08-12T09:34:43Z)
- Model Extraction Attacks on Split Federated Learning [36.81477031150716]
Federated Learning (FL) is a popular collaborative learning scheme involving multiple clients and a server.
FL focuses on protecting clients' data but turns out to be highly vulnerable to Intellectual Property (IP) threats.
This paper shows how malicious clients can launch Model Extraction (ME) attacks by querying the gradient information from the server side.
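One way such an attack can be realized (a sketch under assumptions, not necessarily the paper's exact procedure): the malicious client queries the server for cut-layer gradients and trains a local surrogate whose gradients match them. Here `query_server` is a hypothetical stand-in for the SFL gradient exchange.

```python
import torch
import torch.nn.functional as F

def extraction_step(client_model, surrogate, opt, query_server, x, y):
    """One ME-attack step in SFL: obtain the real server's gradient at
    the cut layer, then train a local surrogate of the server-side
    model whose cut-layer gradient matches it."""
    z = client_model(x).detach().requires_grad_(True)
    true_grad = query_server(z, y).detach()    # gradient from the server

    loss = F.cross_entropy(surrogate(z), y)    # surrogate's own forward
    (surr_grad,) = torch.autograd.grad(loss, z, create_graph=True)
    match = F.mse_loss(surr_grad, true_grad)   # gradient-matching objective
    opt.zero_grad()
    match.backward()                           # updates surrogate params
    opt.step()
    return match.item()
```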
arXiv Detail & Related papers (2023-03-13T20:21:51Z)
- FedDBL: Communication and Data Efficient Federated Deep-Broad Learning for Histopathological Tissue Classification [65.7405397206767]
We propose Federated Deep-Broad Learning (FedDBL) to achieve superior classification performance with limited training samples and only one-round communication.
FedDBL greatly outperforms the competitors with only one-round communication and limited training samples, while it even achieves comparable performance with the ones under multiple-round communications.
Since no data or deep models are shared across clients, privacy is preserved and model security is guaranteed, with no risk of model inversion attacks.
arXiv Detail & Related papers (2023-02-24T14:27:41Z)
- BAFFLE: A Baseline of Backpropagation-Free Federated Learning [71.09425114547055]
Federated learning (FL) is a general principle for decentralized clients to train a server model collectively without sharing local data.
We develop backpropagation-free federated learning, dubbed BAFFLE, in which backpropagation is replaced by multiple forward processes to estimate gradients.
BAFFLE is 1) memory-efficient and easily fits uploading bandwidth; 2) compatible with inference-only hardware optimization and model quantization or pruning; and 3) well-suited to trusted execution environments.
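The forward-only gradient estimate at the core of this idea is essentially zeroth-order optimization; a minimal version with antithetic Gaussian perturbations is sketched below (plain two-point estimation over a flat parameter vector, not necessarily the paper's exact scheme).

```python
import torch

def forward_only_grad(loss_fn, params, sigma=1e-3, n_samples=20):
    """Estimate the gradient of loss_fn at `params` using forward
    passes only, via antithetic Gaussian perturbations."""
    grad = torch.zeros_like(params)
    for _ in range(n_samples):
        u = torch.randn_like(params)
        delta = loss_fn(params + sigma * u) - loss_fn(params - sigma * u)
        grad += (delta / (2 * sigma)) * u
    return grad / n_samples

# Sanity check on a quadratic, whose true gradient is 2 * p.
p = torch.tensor([1.0, -2.0, 0.5])
est = forward_only_grad(lambda q: (q ** 2).sum(), p, n_samples=5000)
print(est)  # approximately [2.0, -4.0, 1.0]
```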
arXiv Detail & Related papers (2023-01-28T13:34:36Z)
- Shielding Federated Learning Systems against Inference Attacks with ARM TrustZone [0.0]
Federated Learning (FL) opens new perspectives for training machine learning models while keeping personal data on the users' premises.
The long list of inference attacks that leak private data from gradients, published in recent years, has emphasized the need to devise effective protection mechanisms.
We present GradSec, a solution that protects only the sensitive layers of a machine learning model inside a TEE.
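The partitioning idea reduces to dispatching the flagged layers to the TEE and running the rest in the normal world; in the sketch below, `tee_execute` is a hypothetical placeholder for the real TrustZone bridge, not an API from the paper.

```python
def run_partitioned(layers, sensitive, x, tee_execute):
    """GradSec-style split execution: only layers marked as sensitive
    are dispatched to the TEE; everything else runs in the normal
    world. `tee_execute` is a hypothetical TEE entry point."""
    for i, layer in enumerate(layers):
        x = tee_execute(layer, x) if i in sensitive else layer(x)
    return x

# Example: shield only the first conv layer and the classifier head.
# out = run_partitioned(model_layers, sensitive={0, 7}, x=batch,
#                       tee_execute=my_trustzone_bridge)
```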
arXiv Detail & Related papers (2022-08-11T15:53:07Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
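FedReg itself alleviates forgetting with generated pseudo-data; as a simplified stand-in, the sketch below adds a penalty that keeps local predictions close to the global model's, which conveys the anti-forgetting intent without reproducing the paper's method.

```python
import torch
import torch.nn.functional as F

def local_step_with_anchor(model, global_model, opt, x, y, mu=0.5):
    """Local update with an anti-forgetting penalty: a KL term pulls
    the local model's predictions toward the global model's (a
    simplified stand-in for FedReg's pseudo-data regularization)."""
    logits = model(x)
    loss = F.cross_entropy(logits, y)
    with torch.no_grad():
        teacher = F.softmax(global_model(x), dim=1)
    forget = F.kl_div(F.log_softmax(logits, dim=1), teacher,
                      reduction="batchmean")
    total = loss + mu * forget
    opt.zero_grad()
    total.backward()
    opt.step()
    return total.item()
```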
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
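The clip-then-smooth step on the aggregated parameters is easy to sketch; the norm bound and noise scale below are placeholders, not the certified values derived in the paper.

```python
import torch

def clip_and_smooth(params, clip_norm=1.0, sigma=0.01):
    """CRFL-style post-aggregation step: bound the global model's
    parameter norm, then add Gaussian noise; the smoothing is what
    enables a certified bound on backdoors of limited magnitude."""
    flat = torch.cat([p.flatten() for p in params])
    scale = min(1.0, clip_norm / (flat.norm().item() + 1e-12))
    return [p * scale + sigma * torch.randn_like(p) for p in params]
```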
arXiv Detail & Related papers (2021-06-15T16:50:54Z)