A Comprehensive Trusted Runtime for WebAssembly with Intel SGX
- URL: http://arxiv.org/abs/2312.09087v1
- Date: Thu, 14 Dec 2023 16:19:00 GMT
- Title: A Comprehensive Trusted Runtime for WebAssembly with Intel SGX
- Authors: Jämes Ménétrey, Marcelo Pasin, Pascal Felber, Valerio Schiavoni, Giovanni Mazzeo, Arne Hollum, Darshan Vaydia
- Abstract summary: We present Twine, a trusted runtime for running WebAssembly-compiled applications within TEEs.
It extends the standard WebAssembly System Interface (WASI), providing controlled OS services with a focus on I/O.
We evaluate its performance using general-purpose benchmarks and real-world applications, showing that it performs on par with state-of-the-art solutions.
- Score: 2.6732136954707792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real-world scenarios, trusted execution environments (TEEs) frequently host applications that lack the trust of the infrastructure provider, as well as data owners who have specifically outsourced their data for remote processing. We present Twine, a trusted runtime for running WebAssembly-compiled applications within TEEs, establishing a two-way sandbox. Twine leverages the memory safety guarantees of WebAssembly (Wasm) and abstracts the complexity of TEEs, enabling the execution of legacy and language-agnostic applications. It extends the standard WebAssembly System Interface (WASI), providing controlled OS services with a focus on I/O. Additionally, through built-in TEE mechanisms, Twine delivers attestation capabilities to ensure the integrity of the runtime and of the OS services supplied to the application. We evaluate its performance using general-purpose benchmarks and real-world applications, showing that it performs on par with state-of-the-art solutions. A case study involving the fintech company Credora reveals that Twine can be deployed in production with reasonable performance trade-offs, ranging from a 0.7x slowdown to a 1.17x speedup compared to native execution. Finally, we identify opportunities for performance improvement through library optimisation, showcasing one such adjustment that yields up to a 4.1x speedup. Twine is open-source and has been upstreamed into the original Wasm runtime, WAMR.
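The "controlled OS services" idea in the abstract can be illustrated with a minimal, hypothetical sketch (not Twine's actual API): a host-side dispatcher that exposes only an allowlisted subset of WASI-style calls to sandboxed code and keeps an audit trail that attestation evidence could cover. All names here are illustrative assumptions.

```python
# Illustrative sketch of policy-controlled WASI-style host calls.
# ALLOWED_CALLS, TrustedDispatcher, and the handler stubs are hypothetical.

ALLOWED_CALLS = {"fd_write", "clock_time_get"}  # assumed I/O-focused policy


class HostCallDenied(Exception):
    """Raised when sandboxed code requests a call outside the policy."""


class TrustedDispatcher:
    def __init__(self):
        self._handlers = {
            "fd_write": self._fd_write,
            "clock_time_get": self._clock_time_get,
        }
        self.audit_log = []  # record of granted calls, usable as evidence

    def call(self, name, *args):
        # Every host call is checked against the policy before dispatch.
        if name not in ALLOWED_CALLS or name not in self._handlers:
            raise HostCallDenied(f"WASI call {name!r} blocked by policy")
        self.audit_log.append(name)
        return self._handlers[name](*args)

    def _fd_write(self, fd, data):
        # In a real runtime the bytes would cross the enclave boundary here.
        return len(data)

    def _clock_time_get(self):
        # Deterministic stub standing in for a trusted time source.
        return 0


sandbox = TrustedDispatcher()
print(sandbox.call("fd_write", 1, b"hello"))  # 5 bytes accepted by the host
```

A call such as `sandbox.call("path_open", "/etc/passwd")` would raise `HostCallDenied`, which is the essence of the two-way sandbox: the host distrusts the guest's requests just as the guest distrusts the host's infrastructure.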
Related papers
- Chat AI: A Seamless Slurm-Native Solution for HPC-Based Services [0.3124884279860061]
The proposed service allows researchers to run open-source or custom fine-tuned LLMs while ensuring that user data remains private and is not stored without consent.
We propose an implementation consisting of a web service that runs on a cloud VM with secure access to a scalable backend running a multitude of AI models on HPC systems.
In order to ensure the security of the HPC system, we use the SSH ForceCommand directive to construct a robust circuit breaker.
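The circuit-breaker idea can be sketched as follows (an assumed illustration, not the paper's implementation): sshd's `ForceCommand` directive replaces whatever command the client requested with a fixed gate program, and the gate inspects the `SSH_ORIGINAL_COMMAND` environment variable (set by sshd) against an allowlist before anything reaches the HPC system. The allowlist contents are assumptions.

```python
# Hypothetical ForceCommand gate: sshd_config would point at this script,
# e.g.  ForceCommand /usr/local/bin/hpc-gate  (path is an assumption).
import os

# Assumed allowlist of Slurm client commands permitted through the gate.
ALLOWED_PREFIXES = ("squeue", "sbatch ", "scancel ")


def is_permitted(command):
    """Circuit breaker: allow only commands matching the allowlist."""
    return bool(command) and command.startswith(ALLOWED_PREFIXES)


# sshd stores the client's requested command in SSH_ORIGINAL_COMMAND;
# the gate decides instead of executing it blindly. (Default for demo only.)
requested = os.environ.get("SSH_ORIGINAL_COMMAND", "squeue")
print("accepted" if is_permitted(requested) else "rejected")
```

A real deployment would execute the validated command (after further sanitisation) rather than just printing a verdict; the key property is that no connection can bypass the gate, since `ForceCommand` overrides any command the client supplies.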
arXiv Detail & Related papers (2024-06-27T12:08:21Z)
- SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering [79.07755560048388]
SWE-agent is a system that facilitates LM agents to autonomously use computers to solve software engineering tasks.
SWE-agent's custom agent-computer interface (ACI) significantly enhances an agent's ability to create and edit code files, navigate entire repositories, and execute tests and other programs.
We evaluate SWE-agent on SWE-bench and HumanEvalFix, achieving state-of-the-art performance on both, with pass@1 rates of 12.5% and 87.7%, respectively.
arXiv Detail & Related papers (2024-05-06T17:41:33Z)
- OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments [87.41051677852231]
We introduce OSWorld, the first-of-its-kind scalable, real computer environment for multimodal agents.
OSWorld can serve as a unified, integrated computer environment for assessing open-ended computer tasks.
We create a benchmark of 369 computer tasks involving real web and desktop apps in open domains, OS file I/O, and workflows spanning multiple applications.
arXiv Detail & Related papers (2024-04-11T17:56:05Z)
- Green AI: A Preliminary Empirical Study on Energy Consumption in DL Models Across Different Runtime Infrastructures [56.200335252600354]
It is common practice to deploy pre-trained models on environments distinct from their native development settings.
This has led to the introduction of interchange formats such as ONNX, together with accompanying infrastructure such as ONNX Runtime, which serve as standard formats for deploying models across different runtime infrastructures.
arXiv Detail & Related papers (2024-02-21T09:18:44Z)
- HasTEE+: Confidential Cloud Computing and Analytics with Haskell [50.994023665559496]
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs).
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (DSL) embedded in Haskell that enables programming TEEs in a high-level language with strong type safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z)
- DeepSpeed-FastGen: High-throughput Text Generation for LLMs via MII and DeepSpeed-Inference [23.49242865222089]
This paper introduces DeepSpeed-FastGen, a system that delivers up to 2.3x higher effective throughput, 2x lower latency on average, and up to 3.7x lower (token-level) tail latency.
We leverage a synergistic combination of DeepSpeed-MII and DeepSpeed-Inference to provide an efficient and easy-to-use serving system for large language models.
arXiv Detail & Related papers (2024-01-09T06:49:40Z)
- A Holistic Approach for Trustworthy Distributed Systems with WebAssembly and TEEs [2.0198678236144474]
This paper introduces a novel approach using WebAssembly to address these issues.
We present the design of a portable and fully attested publish/subscribe system as a holistic approach.
Our experimental results show modest overheads, revealing a 1.55x decrease in message throughput when using a trusted broker.
arXiv Detail & Related papers (2023-12-01T16:37:48Z)
- Managing Large Enclaves in a Data Center [3.174768030369157]
We present OptMig, an end-to-end solution for live migrating large memory footprints in TEE-enabled applications.
Our approach does not require a developer to modify the application; it requires only a short, separate compilation pass and specialized software library support.
arXiv Detail & Related papers (2023-11-13T00:08:37Z)
- Putting a Padlock on Lambda -- Integrating vTPMs into AWS Firecracker [49.1574468325115]
Software services place implicit trust in the cloud provider, without an explicit trust relationship.
There is currently no cloud provider that exposes Trusted Platform Module capabilities.
We improve trust by integrating a virtual TPM device into Firecracker, a virtual machine monitor originally developed by Amazon Web Services.
arXiv Detail & Related papers (2023-10-05T13:13:55Z)
- Reproducible Performance Optimization of Complex Applications on the Edge-to-Cloud Continuum [55.6313942302582]
We propose a methodology to support the optimization of real-life applications on the Edge-to-Cloud Continuum.
Our approach relies on a rigorous analysis of possible configurations in a controlled testbed environment to understand their behaviour.
Our methodology can be generalized to other applications in the Edge-to-Cloud Continuum.
arXiv Detail & Related papers (2021-08-04T07:35:14Z)
- Intelligent colocation of HPC workloads [0.0]
Many HPC applications suffer from a bottleneck in the shared caches, instruction execution units, I/O or memory bandwidth, even though the remaining resources may be underutilized.
It is hard for developers and runtime systems to ensure that all critical resources are fully exploited by a single application, so an attractive technique is to colocate multiple applications on the same server.
We show that server efficiency can be improved by first modeling the expected performance degradation of colocated applications based on measured hardware performance counters.
arXiv Detail & Related papers (2021-03-16T12:35:35Z)
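The colocation approach above (modeling expected performance degradation from measured hardware performance counters) can be sketched with a simple least-squares fit. The counter choice, the synthetic numbers, and the linear model are all assumptions for illustration, not the paper's actual model.

```python
# Synthetic measurements (assumed): last-level-cache misses per
# kilo-instruction for an app running alone, paired with the slowdown
# observed when it is colocated with a memory-intensive neighbour.
samples = [
    (2.0, 1.05),
    (4.0, 1.12),
    (8.0, 1.45),
    (10.0, 1.60),
]


def fit(points):
    """Ordinary least squares for y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b


a, b = fit(samples)


def predict(misses_pki):
    """Predicted colocation slowdown for a given counter value."""
    return a * misses_pki + b
```

A scheduler could then colocate only those pairs whose predicted slowdown stays under a threshold, improving server efficiency while keeping otherwise-idle resources busy.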
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.