SecFL: Confidential Federated Learning using TEEs
- URL: http://arxiv.org/abs/2110.00981v2
- Date: Thu, 7 Oct 2021 07:20:34 GMT
- Title: SecFL: Confidential Federated Learning using TEEs
- Authors: Do Le Quoc and Christof Fetzer
- Abstract summary: We propose SecFL - a confidential federated learning framework that leverages Trusted Execution Environments (TEEs)
SecFL performs the global and local training inside TEE enclaves to ensure the confidentiality and integrity of the computations against powerful adversaries with privileged access.
- Score: 1.8148198154149398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is an emerging machine learning paradigm that enables
multiple clients to jointly train a model, benefiting from the clients'
diverse datasets without sharing their local training data. FL helps reduce
data privacy risks. Unfortunately, FL still suffers from several privacy and
security issues. First, sensitive information can be leaked from the shared
training parameters. Second, malicious clients can collude with each other to
steal data or models from regular clients, or to corrupt the global training
model. To tackle these challenges, we propose SecFL - a
confidential federated learning framework that leverages Trusted Execution
Environments (TEEs). SecFL performs the global and local training inside TEE
enclaves to ensure the confidentiality and integrity of the computations
against powerful adversaries with privileged access. SecFL provides a
transparent remote attestation mechanism, relying on the remote attestation
provided by TEEs, to allow clients to attest the global training computation as
well as the local training computation of each other. Thus, all malicious
clients can be detected using the remote attestation mechanisms.
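The attestation flow described above can be sketched as follows. This is a minimal illustration of the general TEE remote-attestation pattern, not SecFL's actual implementation: the quote structure is simplified, and an HMAC with a shared key stands in for the hardware-rooted signing and vendor attestation service that a real TEE (e.g. Intel SGX) would use.

```python
import hashlib
import hmac
import os

# Hypothetical stand-in for a hardware-protected attestation key.
# In a real TEE, quotes are signed inside the CPU and verified against
# the vendor's attestation infrastructure, not a shared secret.
ATTESTATION_KEY = os.urandom(32)

def measure(enclave_code: bytes) -> bytes:
    """Measurement: a hash of the code loaded into the enclave."""
    return hashlib.sha256(enclave_code).digest()

def generate_quote(enclave_code: bytes, report_data: bytes) -> dict:
    """Enclave side: bind the measurement and report data into a signed quote."""
    m = measure(enclave_code)
    sig = hmac.new(ATTESTATION_KEY, m + report_data, hashlib.sha256).digest()
    return {"measurement": m, "report_data": report_data, "signature": sig}

def verify_quote(quote: dict, expected_measurement: bytes) -> bool:
    """Client side: check the signature and that the expected code is running."""
    expected_sig = hmac.new(
        ATTESTATION_KEY,
        quote["measurement"] + quote["report_data"],
        hashlib.sha256,
    ).digest()
    return (
        hmac.compare_digest(expected_sig, quote["signature"])
        and quote["measurement"] == expected_measurement
    )

# A client attests the aggregator's training code before sending updates.
training_code = b"def aggregate(updates): ..."
quote = generate_quote(training_code, report_data=b"round-1")
assert verify_quote(quote, measure(training_code))
# A tampered enclave yields a different measurement and is rejected.
assert not verify_quote(quote, measure(b"malicious code"))
```

The key point mirrored here is that clients compare the quoted measurement against the expected training code, so a peer running modified (malicious) training logic fails verification.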
Related papers
- Can Federated Learning Safeguard Private Data in LLM Training? Vulnerabilities, Attacks, and Defense Evaluation [20.37072541084284]
Federated learning (FL) enables clients to retain local data while sharing only model parameters for collaborative training. We show that attackers can still extract training data from the global model, even using straightforward generation methods. We introduce an enhanced attack strategy tailored to FL, which tracks global model updates during training to intensify privacy leakage.
arXiv Detail & Related papers (2025-09-25T02:28:08Z) - BadFU: Backdoor Federated Learning through Adversarial Machine Unlearning [7.329446721934861]
Federated learning (FL) has been widely adopted as a decentralized training paradigm. In this paper, we present the first backdoor attack in the context of federated unlearning.
arXiv Detail & Related papers (2025-08-21T13:17:01Z) - FuSeFL: Fully Secure and Scalable Cross-Silo Federated Learning [0.8686220240511062]
Federated Learning (FL) enables collaborative model training without centralizing client data, making it attractive for privacy-sensitive domains. We present FuSeFL, a fully secure and scalable FL scheme designed for cross-silo settings.
arXiv Detail & Related papers (2025-07-18T00:50:44Z) - Toward Malicious Clients Detection in Federated Learning [24.72033419379761]
Federated learning (FL) enables multiple clients to collaboratively train a global machine learning model without sharing their raw data. In this paper, we propose a novel algorithm, SafeFL, specifically designed to accurately identify malicious clients in FL.
arXiv Detail & Related papers (2025-05-14T03:36:36Z) - RLSA-PFL: Robust Lightweight Secure Aggregation with Model Inconsistency Detection in Privacy-Preserving Federated Learning [12.804623314091508]
Federated Learning (FL) allows users to collaboratively train a global machine learning model by sharing only local model updates, without exposing their private data to a central server.
Studies have revealed privacy vulnerabilities in FL, where adversaries can potentially infer sensitive information from the shared model parameters.
We present an efficient masking-based secure aggregation scheme utilizing lightweight cryptographic primitives to mitigate privacy risks.
arXiv Detail & Related papers (2025-02-13T06:01:09Z) - ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
arXiv Detail & Related papers (2024-09-03T09:03:38Z) - Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL becomes an unneglectable challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z) - SaFL: Sybil-aware Federated Learning with Application to Face Recognition [13.914187113334222]
Federated Learning (FL) is a machine learning paradigm to conduct collaborative learning among clients on a joint model.
On the downside, FL raises security and privacy concerns that have just started to be studied.
This paper proposes a new defense method against poisoning attacks in FL called SaFL.
arXiv Detail & Related papers (2023-11-07T21:06:06Z) - Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z) - Active Membership Inference Attack under Local Differential Privacy in Federated Learning [18.017082794703555]
Federated learning (FL) was originally regarded as a framework for collaborative learning among clients with data privacy protection.
We propose a new active membership inference (AMI) attack carried out by a dishonest server in FL.
arXiv Detail & Related papers (2023-02-24T15:21:39Z) - WW-FL: Secure and Private Large-Scale Federated Learning [15.412475066687723]
Federated learning (FL) is an efficient approach for large-scale distributed machine learning that promises data privacy by keeping training data on client devices.
Recent research has uncovered vulnerabilities in FL, impacting both security and privacy through poisoning attacks.
We propose WW-FL, an innovative framework that combines secure multi-party computation with hierarchical FL to guarantee data and global model privacy.
arXiv Detail & Related papers (2023-02-20T11:02:55Z) - DReS-FL: Dropout-Resilient Secure Federated Learning for Non-IID Clients via Secret Data Sharing [7.573516684862637]
Federated learning (FL) strives to enable collaborative training of machine learning models without centrally collecting clients' private data.
This paper proposes a Dropout-Resilient Secure Federated Learning framework based on Lagrange computing.
We show that DReS-FL is resilient to client dropouts and provides privacy protection for the local datasets.
arXiv Detail & Related papers (2022-10-06T05:04:38Z) - Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM [62.62684911017472]
Federated learning (FL) enables devices to jointly train shared models while keeping the training data local for privacy purposes.
We introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account.
VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-20T23:14:33Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
arXiv Detail & Related papers (2021-06-15T16:50:54Z)
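The clipping-and-smoothing idea behind CRFL above can be sketched in a few lines. The norm bound and noise scale below are illustrative assumptions, not the paper's certified parameters:

```python
import numpy as np

def clip_and_smooth(params, clip_norm=1.0, noise_sigma=0.01, rng=None):
    """Bound the global model's parameter norm, then perturb it with Gaussian noise.

    Clipping limits how far any (possibly backdoored) update can move the
    model; the added noise enables randomized-smoothing-style robustness
    certificates against backdoors of limited magnitude.
    """
    rng = np.random.default_rng(rng)
    norm = np.linalg.norm(params)
    if norm > clip_norm:
        params = params * (clip_norm / norm)  # project back inside the norm ball
    return params + rng.normal(0.0, noise_sigma, size=params.shape)

model = np.array([3.0, 4.0])  # norm 5.0, exceeds the bound
smoothed = clip_and_smooth(model, clip_norm=1.0, noise_sigma=0.01, rng=0)
assert np.linalg.norm(smoothed) <= 1.1  # clipped to 1.0, plus small noise
```

The same two operations are applied per aggregation round in the paper's setting; here a single parameter vector stands in for the flattened global model.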
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.