A performance analysis of VM-based Trusted Execution Environments for Confidential Federated Learning
- URL: http://arxiv.org/abs/2501.11558v1
- Date: Mon, 20 Jan 2025 15:58:48 GMT
- Title: A performance analysis of VM-based Trusted Execution Environments for Confidential Federated Learning
- Authors: Bruno Casella
- Abstract summary: Federated Learning (FL) is a distributed machine learning approach that has emerged as an effective way to address recent privacy concerns. FL introduces the need for additional security measures as FL alone is still subject to vulnerabilities such as model and data poisoning and inference attacks. Confidential Computing (CC) is a paradigm that, by leveraging hardware-based trusted execution environments (TEEs), protects the confidentiality and integrity of ML models and data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a distributed machine learning approach that has emerged as an effective way to address recent privacy concerns. However, FL introduces the need for additional security measures, as FL alone is still subject to vulnerabilities such as model and data poisoning and inference attacks. Confidential Computing (CC) is a paradigm that, by leveraging hardware-based trusted execution environments (TEEs), protects the confidentiality and integrity of ML models and data, making it a powerful ally of FL applications. Typical TEEs offer application-level isolation but suffer from many drawbacks, such as limited available memory and difficulties in debugging and coding. The new generation of TEEs offers virtual machine (VM)-based isolation, thus reducing the porting effort for existing applications. In this work, we compare the performance of VM-based and application-isolation-level TEEs for confidential FL (CFL) applications. In particular, we evaluate the impact of TEEs and of additional security mechanisms such as TLS (for securing the communication channel). The results, obtained across three datasets and two deep learning models, demonstrate that the new VM-based TEEs introduce a limited overhead (at most 1.5x), thus paving the way to leveraging public and untrusted computing environments, such as HPC facilities or the public cloud, without detriment to performance.
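The evaluation described in the abstract boils down to timing identical FL training rounds with and without the confidential-computing machinery and reporting the ratio. The sketch below (Python, not the paper's code) illustrates that bookkeeping with a hypothetical logistic-regression workload standing in for the paper's datasets and deep learning models; local_training_round and time_rounds are illustrative names, and in a real experiment the two timings would come from a plain VM and a confidential VM rather than the same host.

```python
# Hedged sketch: not the paper's code. Illustrates how the per-round overhead a
# VM-based TEE adds to federated training could be quantified, by timing the same
# local-training rounds in a baseline run and in a TEE-backed run and taking the ratio.
import time
import numpy as np

def local_training_round(weights: np.ndarray, data: np.ndarray,
                         labels: np.ndarray, lr: float = 0.01) -> np.ndarray:
    """One local round of logistic-regression SGD (stand-in for a DL model)."""
    for x, y in zip(data, labels):
        pred = 1.0 / (1.0 + np.exp(-x @ weights))   # sigmoid prediction
        grad = (pred - y) * x                        # gradient of the log-loss
        weights = weights - lr * grad
    return weights

def time_rounds(n_rounds: int = 5, n_samples: int = 1000, dim: int = 128) -> float:
    """Return mean wall-clock seconds per local training round on synthetic data."""
    rng = np.random.default_rng(0)
    data = rng.normal(size=(n_samples, dim))
    labels = rng.integers(0, 2, size=n_samples).astype(float)
    weights = np.zeros(dim)
    start = time.perf_counter()
    for _ in range(n_rounds):
        weights = local_training_round(weights, data, labels)
    return (time.perf_counter() - start) / n_rounds

if __name__ == "__main__":
    # In a real experiment the script would be launched once on a plain VM and once
    # inside a confidential VM; here both runs happen on the same host, so the ratio
    # is ~1.0 and only demonstrates the bookkeeping.
    baseline = time_rounds()
    tee_run = time_rounds()
    print(f"baseline: {baseline:.4f}s/round, TEE: {tee_run:.4f}s/round, "
          f"overhead: {tee_run / baseline:.2f}x")
```

A comparison along the lines of the abstract would repeat this measurement for application-isolation TEEs and with TLS enabled on the client-server channel, since the paper also evaluates the cost of securing the communication channel.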
Related papers
- R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning [97.49610356913874]
We propose a robust test-time prompt tuning (R-TPT) method for vision-language models (VLMs).
R-TPT mitigates the impact of adversarial attacks during the inference stage.
We introduce a plug-and-play reliability-based weighted ensembling strategy to strengthen the defense.
arXiv Detail & Related papers (2025-04-15T13:49:31Z)
- OFL: Opportunistic Federated Learning for Resource-Heterogeneous and Privacy-Aware Devices [8.037795580924163]
Opportunistic Federated Learning (OFL) is a novel FL framework designed explicitly for resource-heterogeneous and privacy-aware FL devices.
OFL provides strong security by introducing a differentially private and opportunistic model updating mechanism.
OFL achieves satisfactory model performance and improves efficiency and security, outperforming existing solutions.
arXiv Detail & Related papers (2025-03-19T09:12:47Z)
- Forgetting Any Data at Any Time: A Theoretically Certified Unlearning Framework for Vertical Federated Learning [8.127710748771992]
We introduce the first VFL framework with theoretically guaranteed unlearning capabilities.
Unlike prior approaches, our solution is model- and data-agnostic, offering universal compatibility.
Our framework supports asynchronous unlearning, eliminating the need for all parties to be simultaneously online during the forgetting process.
arXiv Detail & Related papers (2025-02-24T11:52:35Z)
- Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT)-assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z)
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model for Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z)
- HasTEE+ : Confidential Cloud Computing and Analytics with Haskell [50.994023665559496]
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs).
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (DSL) embedded in Haskell that enables programming TEEs in a high-level language with strong type-safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z)
- Federated Learning: A Cutting-Edge Survey of the Latest Advancements and Applications [6.042202852003457]
Federated learning (FL) is a technique for developing robust machine learning (ML) models.
To protect user privacy, FL requires users to send model updates rather than transmitting large quantities of raw and potentially confidential data.
This survey provides a comprehensive analysis and comparison of the most recent FL algorithms.
arXiv Detail & Related papers (2023-10-08T19:54:26Z)
- Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly [62.473245910234304]
This paper takes a hardware-centric approach to explore how Large Language Models can be brought to modern edge computing systems.
We provide a micro-level hardware benchmark, compare the model FLOP utilization to a state-of-the-art data center GPU, and study the network utilization in realistic conditions.
arXiv Detail & Related papers (2023-10-04T20:27:20Z)
- FLEdge: Benchmarking Federated Machine Learning Applications in Edge Computing Systems [61.335229621081346]
Federated Learning (FL) has become a viable technique for realizing privacy-enhancing distributed deep learning on the network edge.
In this paper, we propose FLEdge, which complements existing FL benchmarks by enabling a systematic evaluation of client capabilities.
arXiv Detail & Related papers (2023-06-08T13:11:20Z)
- Shielding Federated Learning Systems against Inference Attacks with ARM TrustZone [0.0]
Federated Learning (FL) opens new perspectives for training machine learning models while keeping personal data on the users' premises.
The long list of inference attacks that leak private data from gradients, published in recent years, has emphasized the need to devise effective protection mechanisms.
We present GradSec, a solution that allows protecting in a TEE only sensitive layers of a machine learning model.
arXiv Detail & Related papers (2022-08-11T15:53:07Z)
- OLIVE: Oblivious Federated Learning on Trusted Execution Environment against the risk of sparsification [22.579050671255846]
This study focuses on analyzing the vulnerabilities of server-side TEEs in Federated Learning and the corresponding defenses.
First, we theoretically analyze the leakage of memory access patterns, revealing the risk of sparsified gradients.
Second, we devise an inference attack to link memory access patterns to sensitive information in the training dataset.
arXiv Detail & Related papers (2022-02-15T03:23:57Z)
- Perun: Secure Multi-Stakeholder Machine Learning Framework with GPU Support [1.5362025549031049]
Perun is a framework for confidential multi-stakeholder machine learning.
It executes ML training on hardware accelerators (e.g., GPU) while providing security guarantees.
During the ML training on CIFAR-10 and real-world medical datasets, Perun achieved a 161x to 1560x speedup.
arXiv Detail & Related papers (2021-03-31T08:31:07Z)