WeSee: Using Malicious #VC Interrupts to Break AMD SEV-SNP
- URL: http://arxiv.org/abs/2404.03526v1
- Date: Thu, 4 Apr 2024 15:30:13 GMT
- Title: WeSee: Using Malicious #VC Interrupts to Break AMD SEV-SNP
- Authors: Benedict Schlüter, Supraja Sridhara, Andrin Bertschi, Shweta Shinde
- Abstract summary: AMD SEV-SNP offers VM-level trusted execution environments (TEEs) to protect sensitive cloud workloads.
The WeSee attack injects malicious #VC interrupts into a victim VM's CPU to compromise the security guarantees of AMD SEV-SNP.
Case studies demonstrate that WeSee can leak sensitive VM information (kTLS keys for NGINX), corrupt kernel data (firewall rules), and inject arbitrary code.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AMD SEV-SNP offers VM-level trusted execution environments (TEEs) to protect the confidentiality and integrity of sensitive cloud workloads from an untrusted hypervisor controlled by the cloud provider. AMD introduced a new exception, #VC, to facilitate communication between the VM and the untrusted hypervisor. We present the WeSee attack, where the hypervisor injects malicious #VC interrupts into a victim VM's CPU to compromise the security guarantees of AMD SEV-SNP. Specifically, WeSee injects interrupt number 29, which delivers a #VC exception to the VM; the VM then executes the corresponding handler, which performs data and register copies between the VM and the hypervisor. WeSee shows that well-crafted #VC injections let the attacker induce arbitrary behavior in the VM. Our case studies demonstrate that WeSee can leak sensitive VM information (kTLS keys for NGINX), corrupt kernel data (firewall rules), and inject arbitrary code (launch a root shell from kernel space).
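To make the mechanism concrete, the following C sketch shows the shape of a guest-side #VC handler: it dispatches on a hypervisor-influenced exit code and copies register state through the shared GHCB page. This is a minimal illustration assuming a CPUID-style exit; the struct layout and names (ghcb_page, vc_handler) are hypothetical, not the Linux kernel's actual implementation. The paper's key point is that the hypervisor can trigger this handler at an arbitrary instruction by injecting vector 29, so the copy logic runs on attacker-chosen machine state.

```c
#include <stdint.h>

/* Illustrative stand-in for the GHCB, the unencrypted page the guest
 * shares with the hypervisor (layout and names are hypothetical). */
struct ghcb_page {
    uint64_t exit_code;            /* which operation to emulate */
    uint64_t rax, rbx, rcx, rdx;   /* register exchange area */
};

struct guest_regs {
    uint64_t rax, rbx, rcx, rdx;
};

#define VC_EXIT_CPUID 0x72  /* SVM exit code for CPUID */

/* Handler for exception vector 29 (#VC). Benignly, the exit code
 * matches an intercepted instruction the guest just executed.
 * WeSee's observation: the hypervisor can inject vector 29 at any
 * point, so this copy logic runs on attacker-chosen register state. */
void vc_handler(struct ghcb_page *ghcb, struct guest_regs *regs)
{
    switch (ghcb->exit_code) {
    case VC_EXIT_CPUID:
        /* Request: guest register state leaks to the shared page... */
        ghcb->rax = regs->rax;
        ghcb->rcx = regs->rcx;
        /* ...a VMGEXIT would hand control to the hypervisor here... */
        /* ...response: hypervisor-chosen values are copied back into
         * guest registers without further validation. */
        regs->rax = ghcb->rax;
        regs->rbx = ghcb->rbx;
        regs->rcx = ghcb->rcx;
        regs->rdx = ghcb->rdx;
        break;
    default:
        /* other exit reasons (IOIO, MSR, MMIO, ...) elided */
        break;
    }
}
```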
Related papers
- NecoFuzz: Effective Fuzzing of Nested Virtualization via Fuzz-Harness Virtual Machines [0.7646713951724009]
We present NecoFuzz, the first fuzzing framework that systematically targets nested virtualization-specific logic.
We implement NecoFuzz on Intel VT-x and AMD-V by extending AFL++ to support fuzz-harness virtual machines.
arXiv Detail & Related papers (2025-12-09T17:50:32Z)
- BackdoorVLM: A Benchmark for Backdoor Attacks on Vision-Language Models [63.5775877701015]
We introduce BackdoorVLM, the first comprehensive benchmark for evaluating backdoor attacks on vision-language models (VLMs).
BackdoorVLM organizes multimodal backdoor threats into 5 representative categories: targeted refusal, malicious injection, jailbreak, concept substitution, and perceptual hijack.
We evaluate these threats using 12 representative attack methods spanning text, image, and bimodal triggers, tested on 2 open-source VLMs and 3 multimodal datasets.
arXiv Detail & Related papers (2025-11-24T09:30:38Z)
- Cuckoo Attack: Stealthy and Persistent Attacks Against AI-IDE [64.47951172662745]
Cuckoo Attack is a novel attack that achieves stealthy and persistent command execution by embedding malicious payloads into configuration files.
We formalize our attack paradigm into two stages: initial infection and persistence.
We contribute seven actionable checkpoints for vendors to evaluate their product security.
arXiv Detail & Related papers (2025-09-19T04:10:52Z)
- Screen Hijack: Visual Poisoning of VLM Agents in Mobile Environments [61.808686396077036]
We present GHOST, the first clean-label backdoor attack specifically designed for mobile agents built upon vision-language models (VLMs).
Our method manipulates only the visual inputs of a portion of the training samples, without altering their corresponding labels or instructions.
We evaluate our method across six real-world Android apps and three VLM architectures adapted for mobile use.
arXiv Detail & Related papers (2025-06-16T08:09:32Z)
- VPI-Bench: Visual Prompt Injection Attacks for Computer-Use Agents [74.6761188527948]
Computer-Use Agents (CUAs) with full system access pose significant security and privacy risks.
We investigate Visual Prompt Injection (VPI) attacks, where malicious instructions are visually embedded within rendered user interfaces.
Our empirical study shows that current CUAs and Browser-Use Agents (BUAs) can be deceived at rates of up to 51% and 100%, respectively, on certain platforms.
arXiv Detail & Related papers (2025-06-03T05:21:50Z)
- MIP against Agent: Malicious Image Patches Hijacking Multimodal OS Agents [60.92962583528122]
Recent advances in operating system (OS) agents have enabled vision-language models (VLMs) to directly control a user's computer.
We uncover a novel attack vector against these OS agents: Malicious Image Patches (MIPs).
MIPs are adversarially perturbed screen regions that, when captured by an OS agent, induce it to perform harmful actions by exploiting specific APIs.
arXiv Detail & Related papers (2025-03-13T18:59:12Z)
- VMGuard: Reputation-Based Incentive Mechanism for Poisoning Attack Detection in Vehicular Metaverse [52.57251742991769]
Vehicular Metaverse Guard (VMGuard) protects vehicular Metaverse systems from data poisoning attacks.
VMGuard implements a reputation-based incentive mechanism to assess the trustworthiness of participating SIoT devices.
Our system ensures that reliable SIoT devices that were previously misclassified are not barred from participating in future rounds of the market.
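The summary does not state VMGuard's actual update rule, so the C sketch below substitutes a generic beta-reputation score as a plainly named stand-in: positive and negative outcomes are counted per device, and admission is re-evaluated each round, which is how a once-misclassified reliable device can recover. All names and the threshold are illustrative assumptions.

```c
#include <stdio.h>

/* Hypothetical per-device reputation record (beta reputation):
 * score = (alpha + 1) / (alpha + beta + 2), where alpha counts
 * contributions judged honest and beta counts those judged poisoned. */
struct device_rep {
    unsigned alpha;
    unsigned beta;
};

static double reputation(const struct device_rep *d)
{
    return (d->alpha + 1.0) / (d->alpha + d->beta + 2.0);
}

/* Assumed admission rule: a device joins the next market round if its
 * score clears a threshold; the score is recomputed every round. */
static int admitted(const struct device_rep *d, double threshold)
{
    return reputation(d) >= threshold;
}

int main(void)
{
    struct device_rep d = { .alpha = 1, .beta = 3 }; /* once misclassified */
    for (int round = 0; round < 6; round++) {
        printf("round %d: score=%.2f admitted=%d\n",
               round, reputation(&d), admitted(&d, 0.5));
        d.alpha++; /* device keeps submitting honest data and recovers */
    }
    return 0;
}
```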
arXiv Detail & Related papers (2024-12-05T17:08:20Z)
- Devlore: Extending Arm CCA to Integrated Devices A Journey Beyond Memory to Interrupt Isolation [10.221747752230131]
The Arm Confidential Computing Architecture (CCA) executes sensitive computation in an abstraction called a realm.
CCA does not allow integrated devices on the platform to access realm memory.
We present Devlore, which allows realms to directly access integrated peripherals.
arXiv Detail & Related papers (2024-08-11T17:33:48Z)
- vTensor: Flexible Virtual Tensor Management for Efficient LLM Serving [53.972175896814505]
Large Language Models (LLMs) are widely used across various domains, processing millions of daily requests.
arXiv Detail & Related papers (2024-07-22T14:37:58Z)
- Cabin: Confining Untrusted Programs within Confidential VMs [13.022056111810599]
Confidential computing safeguards sensitive computations from untrusted clouds.
Confidential VMs (CVMs), however, often come with large and vulnerable operating system kernels, making them susceptible to attacks that exploit kernel weaknesses.
This study proposes Cabin, an isolated execution framework within the guest VM that utilizes the latest AMD SEV-SNP technology.
arXiv Detail & Related papers (2024-07-17T06:23:28Z)
- SNPGuard: Remote Attestation of SEV-SNP VMs Using Open Source Tools [3.7752830020595796]
Cloud computing is a ubiquitous solution to handle today's complex computing demands.
VM-based Trusted Execution Environments (TEEs) are a promising way to address the trust issues this raises.
They provide strong isolation guarantees that lock out the cloud service provider.
arXiv Detail & Related papers (2024-06-03T10:48:30Z)
- Heckler: Breaking Confidential VMs with Malicious Interrupts [2.650561978417805]
Heckler is a new attack wherein the hypervisor injects malicious non-timer interrupts to break the confidentiality and integrity of CVMs.
With AMD SEV-SNP and Intel TDX, we demonstrate Heckler on OpenSSH and sudo to bypass authentication.
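WeSee and Heckler rely on the same attacker primitive: the hypervisor queues an arbitrary vector into a victim vCPU. As a rough sketch of that primitive on a KVM-based host, the function below injects a chosen vector through the standard KVM_INTERRUPT ioctl. VM and vCPU setup are elided, KVM_INTERRUPT assumes a userspace-managed interrupt controller, and production injection paths differ; the sketch only illustrates how little the injection itself costs.

```c
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>

/* Queue an arbitrary interrupt vector into a vCPU. `vcpu_fd` is a
 * vCPU file descriptor from the usual KVM_CREATE_VM / KVM_CREATE_VCPU
 * sequence (omitted here for brevity). */
static int inject_vector(int vcpu_fd, unsigned int vector)
{
    struct kvm_interrupt irq = { .irq = vector };

    /* A malicious hypervisor picks the vector freely, e.g. 29 (#VC)
     * against SEV-SNP guests as in WeSee, or ordinary device vectors
     * as in Heckler. */
    if (ioctl(vcpu_fd, KVM_INTERRUPT, &irq) < 0) {
        perror("KVM_INTERRUPT");
        return -1;
    }
    return 0;
}
```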
arXiv Detail & Related papers (2024-04-04T11:37:59Z)
- Formal Security Analysis of the AMD SEV-SNP Software Interface [0.0]
AMD Secure Encrypted Virtualization (SEV) technologies enable confidential computing by protecting virtual machines from highly privileged software such as hypervisors.
We develop the first comprehensive symbolic model of the software interface of the latest SEV iteration, SEV Secure Nested Paging (SEV-SNP).
arXiv Detail & Related papers (2024-03-15T13:39:55Z)
- Putting a Padlock on Lambda -- Integrating vTPMs into AWS Firecracker [49.1574468325115]
Software services place implicit trust in the cloud provider, without an explicit trust relationship.
There is currently no cloud provider that exposes Trusted Platform Module capabilities.
We improve trust by integrating a virtual TPM device into Firecracker, the virtual machine monitor originally developed by Amazon Web Services.
arXiv Detail & Related papers (2023-10-05T13:13:55Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
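The window ablation idea lends itself to a short sketch. The C code below is an illustration, not DRSM's exact construction: the window width, base classifier, and voting rule are all assumed. It scores disjoint fixed-width windows independently and majority-votes, so a contiguous adversarial payload of k bytes can flip at most ceil(k / WINDOW) + 1 votes, which is what makes the robustness bound certifiable.

```c
#include <stddef.h>

#define WINDOW 512  /* assumed ablation window width, in bytes */

/* Stand-in for a base malware classifier scoring one window:
 * returns 1 (malicious) or 0 (benign). A real model would be
 * plugged in here; this toy heuristic just counts NOP bytes. */
static int classify_window(const unsigned char *buf, size_t len)
{
    size_t hits = 0;
    for (size_t i = 0; i < len; i++)
        hits += (buf[i] == 0x90);
    return hits > len / 8;
}

/* De-randomized smoothing by window ablation: score disjoint windows
 * independently and majority-vote. An attacker editing a contiguous
 * run of k bytes changes at most ceil(k / WINDOW) + 1 window votes,
 * which is the source of the certified bound. */
static int smoothed_classify(const unsigned char *exe, size_t len)
{
    size_t votes = 0, windows = 0;
    for (size_t off = 0; off < len; off += WINDOW, windows++) {
        size_t w = (len - off < WINDOW) ? len - off : WINDOW;
        votes += classify_window(exe + off, w);
    }
    return windows && votes * 2 > windows;  /* majority says malicious */
}
```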
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
arXiv Detail & Related papers (2023-03-13T21:21:49Z)
- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend, and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
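To ground the Full DOS variant, the C sketch below overwrites the loader-ignored bytes of a PE file's DOS header (offsets 0x02 through 0x3B) with a payload, preserving the "MZ" magic and the e_lfanew pointer at offset 0x3C. The payload and file handling are illustrative; the actual attacks optimize these bytes against a target classifier.

```c
#include <stdio.h>
#include <string.h>

/* Overwrite the loader-ignored DOS header bytes of a PE file with an
 * attacker-chosen payload (the "Full DOS" manipulation). Only the
 * "MZ" magic (offsets 0x00-0x01) and e_lfanew (offsets 0x3C-0x3F,
 * the pointer to the PE header) must be preserved. */
static int patch_dos_header(const char *path,
                            const unsigned char *payload, size_t n)
{
    unsigned char hdr[0x40];
    FILE *f = fopen(path, "r+b");

    if (!f || fread(hdr, 1, sizeof hdr, f) != sizeof hdr)
        goto fail;
    if (hdr[0] != 'M' || hdr[1] != 'Z')      /* not a DOS/PE file */
        goto fail;

    if (n > 0x3C - 0x02)                     /* 58 usable bytes */
        n = 0x3C - 0x02;
    memcpy(hdr + 0x02, payload, n);          /* e_lfanew untouched */

    if (fseek(f, 0, SEEK_SET) || fwrite(hdr, 1, sizeof hdr, f) != sizeof hdr)
        goto fail;
    return fclose(f) == 0 ? 0 : -1;

fail:
    if (f) fclose(f);
    return -1;
}
```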
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.