Confidential Serverless Computing
- URL: http://arxiv.org/abs/2504.21518v2
- Date: Thu, 01 May 2025 07:09:22 GMT
- Title: Confidential Serverless Computing
- Authors: Patrick Sabanic, Masanori Misono, Teofil Bodea, Julian Pritzi, Michael Hackl, Dimitrios Stavrakakis, Pramod Bhatotia
- Abstract summary: We present Hacher, a confidential computing system for secure serverless deployments. By employing nested confidential execution and a decoupled guest OS within CVMs, Hacher runs each function in a minimal "trustlet". Compared to CVM-based deployments, Hacher has a 4.3x smaller TCB, improves end-to-end latency (15-93%), achieves higher function density (up to 907x), and reduces inter-function communication (up to 27x) and function chaining latency (16.7-30.2x).
- Score: 1.7231099917090071
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although serverless computing offers compelling cost and deployment simplicity advantages, a significant challenge remains in securely managing sensitive data as it flows through the network of ephemeral function executions in serverless computing environments within untrusted clouds. While Confidential Virtual Machines (CVMs) offer a promising secure execution environment, their integration with serverless architectures currently faces fundamental limitations in key areas: security, performance, and resource efficiency. We present Hacher, a confidential computing system for secure serverless deployments to overcome these limitations. By employing nested confidential execution and a decoupled guest OS within CVMs, Hacher runs each function in a minimal "trustlet", significantly improving security through a reduced Trusted Computing Base (TCB). Furthermore, by leveraging a data-centric I/O architecture built upon a lightweight LibOS, Hacher optimizes network communication to address performance and resource efficiency challenges. Our evaluation shows that compared to CVM-based deployments, Hacher has 4.3x smaller TCB, improves end-to-end latency (15-93%), achieves higher function density (up to 907x), and reduces inter-function communication (up to 27x) and function chaining latency (16.7-30.2x); thus, Hacher offers a practical system for confidential serverless computing.
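The abstract does not spell out Hacher's programming interfaces, so the following is only a minimal, hypothetical sketch of the trustlet idea: each serverless function runs in its own small, measured execution context inside the CVM instead of sharing one large guest OS. All names (Trustlet, NestedCvmBackend, spawn, invoke) are invented for illustration and are not Hacher's API.

```python
# Illustrative sketch only: every name here is hypothetical. The point is the
# shape of the idea: each serverless function runs in its own minimal, attested
# execution context ("trustlet") inside a CVM, rather than sharing a large guest OS.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Trustlet:
    """A minimal per-function execution context inside the CVM (hypothetical)."""
    function: Callable[[bytes], bytes]
    measurement: str  # attestation measurement of the function's code


class NestedCvmBackend:
    """Stand-in for a nested confidential execution layer (hypothetical)."""

    def spawn(self, fn: Callable[[bytes], bytes]) -> Trustlet:
        # A real system would load the function into an isolated, measured
        # context backed by a minimal LibOS; here we only record a toy "measurement".
        measurement = f"measure:{hash(fn.__code__.co_code) & 0xffffffff:08x}"
        return Trustlet(function=fn, measurement=measurement)

    def invoke(self, trustlet: Trustlet, payload: bytes) -> bytes:
        # A real invocation would cross an attested, encrypted channel rather
        # than a plain Python call.
        return trustlet.function(payload)


# Usage: deploy two functions and chain them without leaving the CVM boundary.
backend = NestedCvmBackend()
preprocess = backend.spawn(lambda data: data.strip().lower())
classify = backend.spawn(lambda data: b"spam" if b"free money" in data else b"ham")

intermediate = backend.invoke(preprocess, b"  FREE MONEY inside!  ")
print(backend.invoke(classify, intermediate))  # -> b'spam'
```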
Related papers
- Lightweight, Secure and Stateful Serverless Computing with PSL [43.025002382616066]
We present a Function-as-a-Service (FaaS) framework for Trusted Execution Environments (TEEs).
The framework provides rich programming language support on heterogeneous TEE hardware for statically compiled binaries and/or WebAssembly (WASM) bytecodes.
It achieves near-native execution speeds by utilizing the dynamic memory mapping capabilities of Intel SGX2.
arXiv Detail & Related papers (2024-10-25T23:17:56Z) - BULKHEAD: Secure, Scalable, and Efficient Kernel Compartmentalization with PKS [16.239598954752594]
Kernel compartmentalization is a promising approach that follows the least-privilege principle.
We present BULKHEAD, a secure, scalable, and efficient kernel compartmentalization technique.
We implement a prototype system on Linux v6.1 to compartmentalize loadable kernel modules.
arXiv Detail & Related papers (2024-09-15T04:11:26Z) - Cabin: Confining Untrusted Programs within Confidential VMs [13.022056111810599]
Confidential computing safeguards sensitive computations from untrusted clouds.
CVMs often come with large and vulnerable operating system kernels, making them susceptible to attacks exploiting kernel weaknesses.
This study proposes Cabin, an isolated execution framework within the guest VM that utilizes the latest AMD SEV-SNP technology.
arXiv Detail & Related papers (2024-07-17T06:23:28Z) - Digital Twin-Assisted Data-Driven Optimization for Reliable Edge Caching in Wireless Networks [60.54852710216738]
We introduce a novel digital twin-assisted optimization framework, called D-REC, to ensure reliable caching in nextG wireless networks.
By incorporating reliability modules into a constrained decision process, D-REC can adaptively adjust actions, rewards, and states to comply with advantageous constraints.
arXiv Detail & Related papers (2024-06-29T02:40:28Z) - SSNet: A Lightweight Multi-Party Computation Scheme for Practical Privacy-Preserving Machine Learning Service in the Cloud [17.961150835215587]
We propose SSNet, which for the first time employs Shamir's secret sharing (SSS) as the backbone of an MPC-based ML framework (a minimal sketch of the secret-sharing primitive appears after this list).
SSNet demonstrates the ability to scale up the number of parties straightforwardly and embeds strategies to authenticate computation correctness.
We conduct comprehensive experimental evaluations on commercial cloud computing infrastructure from Amazon AWS.
arXiv Detail & Related papers (2024-06-04T00:55:06Z) - Bridge the Future: High-Performance Networks in Confidential VMs without Trusted I/O devices [9.554247218443939]
Trusted I/O (TIO) is an appealing solution to improve I/O performance for Confidential VMs (CVMs).
This paper emphasizes that not all types of I/O can derive substantial benefits from TIO, particularly network I/O.
We present FOlio, a software solution crafted from a secure and efficient Data Plane Development Kit (DPDK) extension.
arXiv Detail & Related papers (2024-03-05T23:06:34Z) - Data-Oblivious ML Accelerators using Hardware Security Extensions [9.716425897388875]
Outsourced computation can put client data confidentiality at risk.
We develop Dolma, which applies dynamic information flow tracking (DIFT) to the Gemmini matrix multiplication accelerator.
We show that Dolma incurs low overheads for large configurations.
arXiv Detail & Related papers (2024-01-29T21:34:29Z) - HasTEE+ : Confidential Cloud Computing and Analytics with Haskell [50.994023665559496]
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs)
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (DSL) embedded in Haskell that enables programming TEEs in a high-level language with strong type-safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z) - SOCI^+: An Enhanced Toolkit for Secure Outsourced Computation on Integers [50.608828039206365]
We propose SOCI+ which significantly improves the performance of SOCI.
SOCI+ employs a novel (2, 2)-threshold Paillier cryptosystem with fast encryption and decryption as its cryptographic primitive (a toy sketch of the base Paillier scheme appears after this list).
Compared with SOCI, our experimental evaluation shows that SOCI+ is up to 5.4 times more efficient in computation and 40% less in communication overhead.
arXiv Detail & Related papers (2023-09-27T05:19:32Z) - Communication-Efficient Graph Neural Networks with Probabilistic Neighborhood Expansion Analysis and Caching [59.8522166385372]
Training and inference with graph neural networks (GNNs) on massive graphs has been actively studied since the inception of GNNs.
This paper is concerned with minibatch training and inference with GNNs that employ node-wise sampling in distributed settings.
We present SALIENT++, which extends the prior state-of-the-art SALIENT system to work with partitioned feature data.
arXiv Detail & Related papers (2023-05-04T21:04:01Z) - Distributed Deep Learning Inference Acceleration using Seamless Collaboration in Edge Computing [93.67044879636093]
This paper studies inference acceleration using distributed convolutional neural networks (CNNs) in collaborative edge computing.
We design a novel task collaboration scheme, named HALP, in which the overlapping zone of the sub-tasks on secondary edge servers (ESs) is executed on the host ES.
Experimental results show that HALP can accelerate CNN inference in VGG-16 by 1.7-2.0x for a single task and 1.7-1.8x for 4 tasks per batch on GTX 1080TI and JETSON AGX Xavier.
arXiv Detail & Related papers (2022-07-22T18:39:09Z) - Dynamic Split Computing for Efficient Deep Edge Intelligence [78.4233915447056]
We introduce dynamic split computing, where the optimal split location is dynamically selected based on the state of the communication channel.
We show that dynamic split computing achieves faster inference in edge computing environments where the data rate and server load vary over time.
arXiv Detail & Related papers (2022-05-23T12:35:18Z)
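As referenced in the SSNet entry above, the sketch below is a textbook toy version of Shamir's (t, n) secret sharing over a prime field. It illustrates the primitive only; it is not SSNet's actual protocol, field size, or authentication strategy.

```python
# Toy Shamir (t, n) secret sharing over a prime field.
import random

PRIME = 2**61 - 1  # illustrative Mersenne prime; real deployments choose the field per security needs


def split(secret: int, threshold: int, num_shares: int) -> list[tuple[int, int]]:
    """Split `secret` into `num_shares` shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, num_shares + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares


def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


shares = split(secret=123456789, threshold=2, num_shares=3)
assert reconstruct(shares[:2]) == 123456789  # any 2 of the 3 shares suffice
```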
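Likewise, for the SOCI+ entry above, the following toy Paillier example shows the additively homomorphic property that a (2, 2)-threshold variant builds on. The threshold key splitting and SOCI+'s optimizations are omitted, and the tiny parameters are illustrative only.

```python
# Toy Paillier encryption/decryption (not SOCI+'s threshold construction).
import math
import random

# Tiny primes for illustration only; real keys use >= 2048-bit moduli.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                      # standard choice of generator
lam = math.lcm(p - 1, q - 1)   # private key (lcm variant)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)


def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2


def decrypt(c: int) -> int:
    return (((pow(c, lam, n2) - 1) // n) * mu) % n


a, b = 17, 25
# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
assert decrypt((encrypt(a) * encrypt(b)) % n2) == a + b
```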