A performance analysis of VM-based Trusted Execution Environments for Confidential Federated Learning
- URL: http://arxiv.org/abs/2501.11558v1
- Date: Mon, 20 Jan 2025 15:58:48 GMT
- Title: A performance analysis of VM-based Trusted Execution Environments for Confidential Federated Learning
- Authors: Bruno Casella
- Abstract summary: Federated Learning (FL) is a distributed machine learning approach that has emerged as an effective way to address recent privacy concerns.
FL introduces the need for additional security measures as FL alone is still subject to vulnerabilities such as model and data poisoning and inference attacks.
Confidential Computing (CC) is a paradigm that, by leveraging hardware-based trusted execution environments (TEEs), protects the confidentiality and integrity of ML models and data.
- Abstract: Federated Learning (FL) is a distributed machine learning approach that has emerged as an effective way to address recent privacy concerns. However, FL introduces the need for additional security measures as FL alone is still subject to vulnerabilities such as model and data poisoning and inference attacks. Confidential Computing (CC) is a paradigm that, by leveraging hardware-based trusted execution environments (TEEs), protects the confidentiality and integrity of ML models and data, thus resulting in a powerful ally of FL applications. Typical TEEs offer an application-isolation level but suffer many drawbacks, such as limited available memory and debugging and coding difficulties. The new generation of TEEs offers a virtual machine (VM)-based isolation level, thus reducing the porting effort for existing applications. In this work, we compare the performance of VM-based and application-isolation level TEEs for confidential FL (CFL) applications. In particular, we evaluate the impact of TEEs and additional security mechanisms such as TLS (for securing the communication channel). The results, obtained across three datasets and two deep learning models, demonstrate that the new VM-based TEEs introduce a limited overhead (at most 1.5x), thus paving the way to leverage public and untrusted computing environments, such as HPC facilities or public cloud, without detriment to performance.
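One of the security mechanisms the paper evaluates is TLS on the client/server communication channel. As a minimal sketch (an assumed setup, not the authors' code), an FL client could build a hardened TLS context with Python's standard library:

```python
import ssl

# Hypothetical sketch: securing the FL communication channel with TLS.
# The CA file and the protocol floor are illustrative assumptions.
def make_client_context(cafile=None):
    # create_default_context verifies the server certificate and hostname.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=cafile)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    return ctx

ctx = make_client_context()
```

The context would then wrap the socket (or be passed to the FL framework's transport layer) before any model updates are exchanged.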
Related papers
- RLSA-PFL: Robust Lightweight Secure Aggregation with Model Inconsistency Detection in Privacy-Preserving Federated Learning [13.117628927803985]
Federated Learning (FL) allows users to collaboratively train a global machine learning model by sharing only local models, without exposing their private data to a central server.
Studies have revealed privacy vulnerabilities in FL, where adversaries can potentially infer sensitive information from the shared model parameters.
We present an efficient masking-based secure aggregation scheme that utilizes lightweight cryptographic primitives to mitigate these privacy risks.
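The masking idea can be illustrated with a toy sketch (hypothetical, not the paper's actual scheme): each pair of clients agrees on a shared random mask, one adds it and the other subtracts it, so the masks cancel in the server-side sum and only the aggregate update is revealed.

```python
import random

# Toy pairwise-mask secure aggregation (illustrative only).
def masked_updates(updates, seed=0):
    n = len(updates)
    rng = random.Random(seed)
    # One shared mask per client pair (i, j), i < j.
    masks = {(i, j): rng.random() for i in range(n) for j in range(i + 1, n)}
    out = []
    for i, u in enumerate(updates):
        # Client i adds masks shared with higher-indexed peers and
        # subtracts masks shared with lower-indexed peers.
        m = sum(masks[(i, j)] for j in range(i + 1, n)) \
          - sum(masks[(j, i)] for j in range(i))
        out.append(u + m)
    return out

private = [1.0, 2.0, 3.0]
masked = masked_updates(private)
# Each masked value hides the individual update, yet
# sum(masked) equals sum(private) because the masks cancel.
```

Real schemes derive the pairwise masks from key agreement and handle dropouts; this sketch only shows the cancellation property.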
arXiv Detail & Related papers (2025-02-13T06:01:09Z)
- Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT)-assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z) - Advances in APPFL: A Comprehensive and Extensible Federated Learning Framework [1.4206132527980742]
Federated learning (FL) is a distributed machine learning paradigm enabling collaborative model training while preserving data privacy.
We present the recent advances in developing APPFL, a framework and benchmarking suite for federated learning.
We demonstrate the capabilities of APPFL through extensive experiments evaluating various aspects of FL, including communication efficiency, privacy preservation, computational performance, and resource utilization.
arXiv Detail & Related papers (2024-09-17T22:20:26Z) - Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model for Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z) - HasTEE+ : Confidential Cloud Computing and Analytics with Haskell [50.994023665559496]
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs).
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (DSL) embedded in Haskell that enables programming TEEs in a high-level language with strong type-safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z) - Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly [62.473245910234304]
This paper takes a hardware-centric approach to explore how Large Language Models can be brought to modern edge computing systems.
We provide a micro-level hardware benchmark, compare the model FLOP utilization to a state-of-the-art data center GPU, and study the network utilization in realistic conditions.
arXiv Detail & Related papers (2023-10-04T20:27:20Z) - FLEdge: Benchmarking Federated Machine Learning Applications in Edge Computing Systems [61.335229621081346]
Federated Learning (FL) has become a viable technique for realizing privacy-enhancing distributed deep learning on the network edge.
In this paper, we propose FLEdge, which complements existing FL benchmarks by enabling a systematic evaluation of client capabilities.
arXiv Detail & Related papers (2023-06-08T13:11:20Z) - Shielding Federated Learning Systems against Inference Attacks with ARM
TrustZone [0.0]
Federated Learning (FL) opens new perspectives for training machine learning models while keeping personal data on the users' premises.
The long list of inference attacks that leak private data from gradients, published in recent years, has emphasized the need to devise effective protection mechanisms.
We present GradSec, a solution that allows protecting only the sensitive layers of a machine learning model inside a TEE.
arXiv Detail & Related papers (2022-08-11T15:53:07Z) - Desirable Companion for Vertical Federated Learning: New Zeroth-Order
Gradient Based Algorithm [140.25480610981504]
A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy, communication, and computation efficiency.
We propose a novel VFL framework with black-box scalability, which is inseparably scalable.
arXiv Detail & Related papers (2022-03-19T13:55:47Z) - OLIVE: Oblivious Federated Learning on Trusted Execution Environment
against the risk of sparsification [22.579050671255846]
This study focuses on analyzing the vulnerabilities of server-side TEEs in Federated Learning and their defenses.
First, we theoretically analyze the leakage of memory access patterns, revealing the risk of sparsified gradients.
Second, we devise an inference attack to link memory access patterns to sensitive information in the training dataset.
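The sparsification risk can be illustrated with a toy top-k example (a hypothetical sketch, not OLIVE's actual attack): the index set of the transmitted coordinates is itself an access pattern, so an observer learns which parameters received the largest updates even if the values are protected.

```python
# Toy illustration (assumed setup): top-k gradient sparsification
# exposes WHICH coordinates are sent, even if their values are hidden.
def topk_indices(grad, k):
    # Indices of the k largest-magnitude gradient entries.
    return sorted(sorted(range(len(grad)), key=lambda i: abs(grad[i]))[-k:])

grad = [0.1, -2.0, 0.05, 3.0, -0.2]
idx = topk_indices(grad, 2)
# idx == [1, 3]: the index set alone reveals which model parameters
# were updated most strongly on this client's private data.
```

Oblivious aggregation (e.g., via ORAM-style techniques, as studied in the paper) aims to hide exactly this index information from the server-side TEE's memory trace.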
arXiv Detail & Related papers (2021-03-31T08:31:07Z)
- Perun: Secure Multi-Stakeholder Machine Learning Framework with GPU Support [1.5362025549031049]
Perun is a framework for confidential multi-stakeholder machine learning.
It executes ML training on hardware accelerators (e.g., GPU) while providing security guarantees.
During the ML training on CIFAR-10 and real-world medical datasets, Perun achieved a 161x to 1560x speedup.
arXiv Detail & Related papers (2021-03-31T08:31:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.