OLIVE: Oblivious Federated Learning on Trusted Execution Environment
against the risk of sparsification
- URL: http://arxiv.org/abs/2202.07165v5
- Date: Mon, 19 Jun 2023 13:54:11 GMT
- Title: OLIVE: Oblivious Federated Learning on Trusted Execution Environment
against the risk of sparsification
- Authors: Fumiyuki Kato, Yang Cao, Masatoshi Yoshikawa
- Abstract summary: This study focuses on the analysis of the vulnerabilities of server-side TEEs in Federated Learning and their defense.
First, we theoretically analyze the leakage of memory access patterns, revealing the risk of sparsified gradients.
Second, we devise an inference attack to link memory access patterns to sensitive information in the training dataset.
- Score: 22.579050671255846
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Combining Federated Learning (FL) with a Trusted Execution Environment (TEE)
is a promising approach for realizing privacy-preserving FL, which has garnered
significant academic attention in recent years. Implementing the TEE on the
server side enables each round of FL to proceed without exposing the client's
gradient information to untrusted servers. This addresses usability gaps in
existing secure aggregation schemes as well as utility gaps in differentially
private FL. However, to address the issue using a TEE, the vulnerabilities of
server-side TEEs need to be considered -- this has not been sufficiently
investigated in the context of FL. The main technical contribution of this
study is the analysis of the vulnerabilities of TEEs in FL and the corresponding defenses.
First, we theoretically analyze the leakage of memory access patterns,
revealing the risk of sparsified gradients, which are commonly used in FL to
enhance communication efficiency and model accuracy. Second, we devise an
inference attack to link memory access patterns to sensitive information in the
training dataset. Finally, we propose an oblivious yet efficient aggregation
algorithm to prevent memory access pattern leakage. Our experiments on
real-world data demonstrate that the proposed method functions efficiently at
practical scales.
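To make the leakage concrete, here is a minimal sketch (illustrative only; the function names and the linear-scan design are assumptions, not OLIVE's actual algorithm). Naive aggregation of sparsified updates writes to data-dependent addresses, which memory-access side channels on a server-side TEE can expose; an oblivious variant touches every coordinate in a fixed order, so the access pattern is independent of the secret indices.

```python
import numpy as np

def aggregate_naive(model_dim, client_updates):
    """Leaky aggregation of sparsified updates: the write address depends
    on each client's secret indices, so an attacker observing an enclave's
    memory access pattern can recover which coordinates were selected --
    the leakage the paper analyzes."""
    agg = np.zeros(model_dim)
    for indices, values in client_updates:
        for i, v in zip(indices, values):
            agg[i] += v  # data-dependent write: leaks i
    return agg

def aggregate_oblivious(model_dim, client_updates):
    """Oblivious baseline (a plain linear scan, not OLIVE's optimized
    design): every update touches all coordinates in a fixed order, so
    the access pattern reveals nothing about the secret indices."""
    agg = np.zeros(model_dim)
    for indices, values in client_updates:
        for i, v in zip(indices, values):
            for j in range(model_dim):   # fixed, public scan order
                agg[j] += v * (j == i)   # dummy write when j != i
    return agg

# Two clients, each sending 2 of 5 coordinates.
updates = [(np.array([0, 3]), np.array([0.5, -1.0])),
           (np.array([3, 4]), np.array([0.2, 0.9]))]
assert np.allclose(aggregate_naive(5, updates), aggregate_oblivious(5, updates))
```

The linear scan is the textbook oblivious baseline; its cost is exactly what makes efficient oblivious aggregation non-trivial. Production code would also need branchless, constant-time selection rather than a Python comparison.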
Related papers
- Gradients Stand-in for Defending Deep Leakage in Federated Learning [0.0]
This study introduces a novel and effective method for safeguarding against gradient leakage, namely AdaDefense.
This proposed approach not only effectively prevents gradient leakage, but also ensures that the overall performance of the model remains largely unaffected.
arXiv Detail & Related papers (2024-10-11T11:44:13Z)
- Privacy-Preserving Distributed Learning for Residential Short-Term Load Forecasting [11.185176107646956]
Power system load data can inadvertently reveal the daily routines of residential users, posing a risk to their property security.
We introduce a Markovian Switching-based distributed training framework, the convergence of which is substantiated through rigorous theoretical analysis.
Case studies employing real-world power system load data validate the efficacy of our proposed algorithm.
arXiv Detail & Related papers (2024-02-02T16:39:08Z)
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its unique properties and the corresponding analyses are also presented; a minimal top-k sparsification sketch follows below.
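For context, a common sparsifier in this line of work is top-k magnitude selection; the sketch below shows the standard operation (the paper's exact sparsifier may differ).

```python
import numpy as np

def top_k_sparsify(grad, k):
    """Standard top-k magnitude sparsification, a common FL compression
    scheme: keep only the k largest-magnitude coordinates and their
    indices; all other coordinates are dropped before transmission."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

# Keep 2 of 6 coordinates: the two largest magnitudes are at
# indices 1 and 3 (values -2.0 and 3.0), in some order.
g = np.array([0.1, -2.0, 0.05, 3.0, -0.2, 0.7])
indices, values = top_k_sparsify(g, k=2)
```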
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- Enabling Quartile-based Estimated-Mean Gradient Aggregation As Baseline for Federated Image Classifications [5.5099914877576985]
Federated Learning (FL) has revolutionized how we train deep neural networks by enabling decentralized collaboration while safeguarding sensitive data and improving model performance.
This paper introduces an innovative solution named Estimated Mean Aggregation (EMA) that not only addresses these challenges but also provides a fundamental reference point as a $\mathsf{baseline}$ for advanced aggregation techniques in FL systems; an illustrative quartile-based sketch follows below.
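The summary above does not spell out EMA's exact rule, so the following is a hypothetical sketch of one standard quartile-based robust mean (an IQR-filtered per-coordinate average); the function name and the whisker rule are assumptions, not the paper's definition.

```python
import numpy as np

def quartile_estimated_mean(client_grads, whisker=1.5):
    """Hypothetical quartile-based aggregation: per coordinate, drop
    values outside [Q1 - w*IQR, Q3 + w*IQR], then average the survivors.
    This is one standard quartile-based robust mean estimate, shown only
    to illustrate the general idea behind EMA-style aggregation."""
    G = np.stack(client_grads)                  # shape: (n_clients, dim)
    q1, q3 = np.percentile(G, [25, 75], axis=0)
    iqr = q3 - q1
    keep = (G >= q1 - whisker * iqr) & (G <= q3 + whisker * iqr)
    # Average only the in-range values; guard against empty columns.
    return (G * keep).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)
```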
arXiv Detail & Related papers (2023-09-21T17:17:28Z)
- A Safe Genetic Algorithm Approach for Energy Efficient Federated Learning in Wireless Communication Networks [53.561797148529664]
Federated Learning (FL) has emerged as a decentralized technique where, contrary to traditional centralized approaches, devices perform model training in a collaborative manner.
Despite the existing efforts made in FL, its environmental impact is still under investigation, since several critical challenges regarding its applicability to wireless networks have been identified.
The current work proposes a Genetic Algorithm (GA) approach, targeting the minimization of both the overall energy consumption of an FL process and any unnecessary resource utilization.
arXiv Detail & Related papers (2023-06-25T13:10:38Z)
- WW-FL: Secure and Private Large-Scale Federated Learning [15.412475066687723]
Federated learning (FL) is an efficient approach for large-scale distributed machine learning that promises data privacy by keeping training data on client devices.
Recent research has uncovered vulnerabilities in FL, impacting both security and privacy through poisoning attacks.
We propose WW-FL, an innovative framework that combines secure multi-party computation with hierarchical FL to guarantee data and global model privacy.
arXiv Detail & Related papers (2023-02-20T11:02:55Z)
- Reliable Federated Disentangling Network for Non-IID Domain Feature [62.73267904147804]
In this paper, we propose a novel reliable federated disentangling network, termed RFedDis.
To the best of our knowledge, our proposed RFedDis is the first work to develop an FL approach based on evidential uncertainty combined with feature disentangling.
Our proposed RFedDis provides outstanding performance with a high degree of reliability as compared to other state-of-the-art FL approaches.
arXiv Detail & Related papers (2023-01-30T11:46:34Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack; a generic gradient-inversion sketch follows below.
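As background for the sketch referenced above: the best-known gradient inversion attacks (the DLG pattern of Zhu et al., 2019) optimize a dummy input so that its gradients match the observed client gradients. The code below shows that generic pattern in PyTorch, assuming the label is known; it is not the specific baseline attack this paper proposes.

```python
import torch

def invert_gradients(model, loss_fn, target_grads, x_shape, y,
                     steps=300, lr=0.1):
    """DLG-style inversion sketch: optimize a dummy input x so that the
    gradients it induces match the client's observed gradients
    (target_grads). Assumes the label y is already known."""
    x = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        # Differentiable gradients of the model w.r.t. its parameters.
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        mismatch = sum(((g - t) ** 2).sum()
                       for g, t in zip(grads, target_grads))
        mismatch.backward()  # flows back to x through create_graph
        opt.step()
    return x.detach()
```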
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning scheme that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users poses significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics; see the sketch after this entry.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
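As referenced above, here is a minimal sketch of the sharing mechanism: copying BatchNorm running statistics between user models in PyTorch. The paper's carefully designed statistics go beyond this plain copy; the function name is an illustrative assumption.

```python
import torch
import torch.nn as nn

def share_bn_statistics(src_model, dst_model):
    """Illustrative robustness-propagation sketch: copy the running
    mean/variance of each BatchNorm layer from a user that performed
    adversarial training (src_model) into a user that did not
    (dst_model). Only shows the sharing mechanism, not the paper's
    full design."""
    for src, dst in zip(src_model.modules(), dst_model.modules()):
        if isinstance(src, (nn.BatchNorm1d, nn.BatchNorm2d)) and \
           isinstance(dst, (nn.BatchNorm1d, nn.BatchNorm2d)):
            with torch.no_grad():
                dst.running_mean.copy_(src.running_mean)
                dst.running_var.copy_(src.running_var)
```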
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.