WeSee: Using Malicious #VC Interrupts to Break AMD SEV-SNP
- URL: http://arxiv.org/abs/2404.03526v1
- Date: Thu, 4 Apr 2024 15:30:13 GMT
- Title: WeSee: Using Malicious #VC Interrupts to Break AMD SEV-SNP
- Authors: Benedict Schlüter, Supraja Sridhara, Andrin Bertschi, Shweta Shinde,
- Abstract summary: AMD SEV-SNP offers VM-level trusted execution environments (TEEs) to protect sensitive cloud workloads.
The WeSee attack injects malicious #VC interrupts into a victim VM's CPU to compromise the security guarantees of AMD SEV-SNP.
Case studies demonstrate that WeSee can leak sensitive VM information (kTLS keys for NGINX), corrupt kernel data (firewall rules), and inject arbitrary code.
- Score: 2.8436446946726557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AMD SEV-SNP offers VM-level trusted execution environments (TEEs) to protect the confidentiality and integrity of sensitive cloud workloads from the untrusted hypervisor controlled by the cloud provider. AMD introduced a new exception, #VC, to facilitate communication between the VM and the untrusted hypervisor. We present the WeSee attack, in which the hypervisor injects malicious #VC exceptions into a victim VM's CPU to compromise the security guarantees of AMD SEV-SNP. Specifically, WeSee injects interrupt number 29, which delivers a #VC exception to the VM, which then executes the corresponding handler that performs data and register copies between the VM and the hypervisor. WeSee shows that, using well-crafted #VC injections, the attacker can induce arbitrary behavior in the VM. Our case studies demonstrate that WeSee can leak sensitive VM information (kTLS keys for NGINX), corrupt kernel data (firewall rules), and inject arbitrary code (launch a root shell from kernel space).
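The abstract describes the mechanism WeSee abuses: an injected #VC exception makes the guest's handler copy registers to and from the unencrypted GHCB page shared with the hypervisor. The Python model below is purely illustrative (the real handler is C code in the Linux guest kernel); the class and function names are invented for this sketch, though `SVM_EXIT_CPUID` follows the AMD GHCB specification:

```python
# Illustrative model (not real guest-kernel code) of the #VC / GHCB
# hand-off that WeSee abuses. SVM_EXIT_CPUID is the GHCB exit code for
# an intercepted CPUID; the classes are simplified stand-ins.

SVM_EXIT_CPUID = 0x72

class GHCB:
    """Unencrypted page shared between guest and hypervisor."""
    def __init__(self):
        self.exit_code = 0
        self.rax = self.rbx = self.rcx = self.rdx = 0

def vc_handler(guest_regs, ghcb, hypervisor):
    """Guest-side #VC handler: copies registers into the shared GHCB,
    calls out to the hypervisor, then copies results back, trusting
    that the #VC was raised by a genuinely intercepted instruction."""
    ghcb.exit_code = SVM_EXIT_CPUID
    ghcb.rax, ghcb.rcx = guest_regs["rax"], guest_regs["rcx"]
    hypervisor.handle_vmgexit(ghcb)  # world switch to the hypervisor
    # Results are copied back without checking whether the guest
    # actually executed CPUID - this is the gap WeSee exploits.
    for r in ("rax", "rbx", "rcx", "rdx"):
        guest_regs[r] = getattr(ghcb, r)

class MaliciousHypervisor:
    def handle_vmgexit(self, ghcb):
        # Attacker-chosen values flow straight into guest registers.
        ghcb.rax = 0xDEADBEEF

regs = {"rax": 0, "rbx": 0, "rcx": 0, "rdx": 0}
# The #VC here is injected by the hypervisor, not raised by the CPU:
vc_handler(regs, GHCB(), MaliciousHypervisor())
print(hex(regs["rax"]))  # guest register now holds an attacker value
```

The point of the sketch is the missing provenance check: because the handler cannot tell an injected #VC from a hardware-raised one, every injection gives the hypervisor a read/write window into guest register state.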
Related papers
- Devlore: Extending Arm CCA to Integrated Devices: A Journey Beyond Memory to Interrupt Isolation [10.221747752230131]
The Arm Confidential Computing Architecture (CCA) executes sensitive computation in an abstraction called a realm.
CCA does not allow integrated devices on the platform to access realms.
We present Devlore, which allows a realm to directly access integrated peripherals.
arXiv Detail & Related papers (2024-08-11T17:33:48Z) - vTensor: Flexible Virtual Tensor Management for Efficient LLM Serving [53.972175896814505]
Large Language Models (LLMs) are widely used across various domains, processing millions of daily requests.
arXiv Detail & Related papers (2024-07-22T14:37:58Z) - Cabin: Confining Untrusted Programs within Confidential VMs [13.022056111810599]
Confidential computing safeguards sensitive computations from untrusted clouds.
CVMs often come with large and vulnerable operating system kernels, making them susceptible to attacks exploiting kernel weaknesses.
This study proposes Cabin, an isolated execution framework within a guest VM, utilizing the latest AMD SEV-SNP technology.
arXiv Detail & Related papers (2024-07-17T06:23:28Z) - SNPGuard: Remote Attestation of SEV-SNP VMs Using Open Source Tools [3.7752830020595796]
Cloud computing is a ubiquitous solution to handle today's complex computing demands.
VM-based Trusted Execution Environments (TEEs) are a promising solution to solve this issue.
They provide strong isolation guarantees to lock out the cloud service provider.
arXiv Detail & Related papers (2024-06-03T10:48:30Z) - Heckler: Breaking Confidential VMs with Malicious Interrupts [2.650561978417805]
Heckler is a new attack wherein the hypervisor injects malicious non-timer interrupts to break the confidentiality and integrity of CVMs.
With AMD SEV-SNP and Intel TDX, we demonstrate Heckler on OpenSSH to bypass authentication.
arXiv Detail & Related papers (2024-04-04T11:37:59Z) - Formal Security Analysis of the AMD SEV-SNP Software Interface [0.0]
AMD Secure Encrypted Virtualization technologies enable confidential computing by protecting virtual machines from highly privileged software such as hypervisors.
We develop the first comprehensive symbolic model of the software interface of the latest SEV iteration, called SEV Secure Nested Paging (SEV-SNP).
arXiv Detail & Related papers (2024-03-15T13:39:55Z) - Putting a Padlock on Lambda -- Integrating vTPMs into AWS Firecracker [49.1574468325115]
Software services place implicit trust in the cloud provider, without an explicit trust relationship.
There is currently no cloud provider that exposes Trusted Platform Module capabilities.
We improve trust by integrating a virtual TPM device into Firecracker, a virtual machine monitor originally developed by Amazon Web Services.
arXiv Detail & Related papers (2023-10-05T13:13:55Z) - DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
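De-randomized smoothing of the kind the DRSM summary describes can be sketched as a majority vote over independently classified windows of the input bytes: an adversary who perturbs a bounded span can flip only a bounded number of window votes. The snippet below is a simplified illustration, not the paper's method; DRSM operates on real executables with a MalConv base classifier, whereas `clf` here is a toy stand-in:

```python
from collections import Counter

def drsm_classify(data: bytes, base_classifier, window: int):
    """Classify each `window`-byte chunk independently and take a
    majority vote. A contiguous k-byte perturbation touches at most
    k // window + 1 windows, so a large enough vote margin certifies
    the prediction against such perturbations."""
    votes = Counter(
        base_classifier(data[i:i + window])
        for i in range(0, len(data), window)
    )
    label, count = votes.most_common(1)[0]
    runner_up = max((c for l, c in votes.items() if l != label), default=0)
    return label, count - runner_up  # (prediction, vote margin)

# Toy base classifier (hypothetical): a window is "malware" if it
# contains the byte 0xCC.
clf = lambda w: "malware" if b"\xcc" in w else "benign"

# 128-byte input with an 8-byte "adversarial" run in the middle:
# the run lands in a single 16-byte window, so it flips one vote.
label, margin = drsm_classify(
    b"\x00" * 64 + b"\xcc" * 8 + b"\x00" * 56, clf, 16)
print(label, margin)  # → benign 6
```

The design point is that certification comes from counting, not from the base classifier's robustness: the margin directly bounds how many flipped windows the prediction can absorb.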
arXiv Detail & Related papers (2023-03-20T17:25:22Z) - Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
arXiv Detail & Related papers (2023-03-13T21:21:49Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z) - Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
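The Full DOS manipulation works because the Windows loader reads only two fields of the DOS header: the `MZ` magic at offset 0 and the `e_lfanew` pointer at offset 0x3C; the remaining DOS-header bytes and the DOS stub are ignored and can carry adversarial payload. A minimal sketch of those writable ranges, assuming a standard PE layout (the helper name is invented for this illustration):

```python
import struct

def full_dos_payload_region(pe: bytes):
    """Return the byte ranges an attacker can overwrite without breaking
    the Windows loader: everything before the PE header except the MZ
    magic (offsets 0-1) and the e_lfanew field (offsets 0x3C-0x3F)."""
    assert pe[:2] == b"MZ", "not an MZ/PE file"
    e_lfanew = struct.unpack_from("<I", pe, 0x3C)[0]  # PE header offset
    # The loader dereferences only e_magic and e_lfanew, so bytes
    # 2..0x3B and the DOS stub (0x40 up to e_lfanew) are free space.
    return [(2, 0x3C), (0x40, e_lfanew)]

# Minimal fake DOS header: "MZ", zero padding, e_lfanew = 0x80.
hdr = bytearray(0x80)
hdr[:2] = b"MZ"
struct.pack_into("<I", hdr, 0x3C, 0x80)
print(full_dos_payload_region(bytes(hdr)))  # → [(2, 60), (64, 128)]
```

This also clarifies the other two attack names in the summary: Extend grows the space before the PE header (enlarging the stub region), while Shift pushes the first section's content forward to open a gap for the payload.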
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.