QuanShield: Protecting against Side-Channel Attacks using Self-Destructing Enclaves
- URL: http://arxiv.org/abs/2312.11796v1
- Date: Tue, 19 Dec 2023 02:11:42 GMT
- Title: QuanShield: Protecting against Side-Channel Attacks using Self-Destructing Enclaves
- Authors: Shujie Cui, Haohua Li, Yuanhong Li, Zhi Zhang, Lluís Vilanova, Peter Pietzuch
- Abstract summary: We propose QuanShield, a system that protects enclaves from side-channel attacks that interrupt enclave execution.
QuanShield creates an interrupt-free environment on a dedicated CPU core for running enclaves; if an interrupt nonetheless occurs, the enclave terminates.
Our evaluation shows that QuanShield significantly raises the bar for interrupt-based attacks with practical overhead.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trusted Execution Environments (TEEs) allow user processes to create enclaves that protect security-sensitive computation against access from the OS kernel and the hypervisor. Recent work has shown that TEEs are vulnerable to side-channel attacks that allow an adversary to learn secrets shielded in enclaves. The majority of such attacks trigger exceptions or interrupts to trace the control or data flow of enclave execution. We propose QuanShield, a system that protects enclaves from side-channel attacks that interrupt enclave execution. The main idea behind QuanShield is to strengthen resource isolation by creating an interrupt-free environment on a dedicated CPU core for running enclaves in which enclaves terminate when interrupts occur. QuanShield avoids interrupts by exploiting the tickless scheduling mode supported by recent OS kernels. QuanShield then uses the save area (SA) of the enclave, which is used by the hardware to support interrupt handling, as a second stack. Through an LLVM-based compiler pass, QuanShield modifies enclave instructions to store/load memory references, such as function frame base addresses, to/from the SA. When an interrupt occurs, the hardware overwrites the data in the SA with CPU state, thus ensuring that enclave execution fails. Our evaluation shows that QuanShield significantly raises the bar for interrupt-based attacks with practical overhead.
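The abstract's core mechanism is concrete enough to sketch. Below is a minimal, hypothetical C illustration of the two ideas, not the authors' implementation: QuanShield applies the save-area trick automatically through an LLVM pass over enclave code, whereas here `ssa_scratch`, `sensitive_sum`, `pin_to_isolated_core`, and the choice of core 3 are invented stand-ins. The one hardware fact it leans on is that an Asynchronous Enclave Exit (AEX) makes the CPU dump register state into the enclave's save area, clobbering whatever was stored there.
```c
/*
 * Hedged sketch of QuanShield's two ideas, NOT the authors' code:
 *  (1) run the enclave thread on a dedicated, tick-free core, and
 *  (2) keep frame base addresses in the enclave's save area (SA/SSA),
 *      which the hardware overwrites with CPU state on any interrupt.
 * All names (ssa_scratch, sensitive_sum, core 3) are invented; the
 * real system applies (2) automatically via an LLVM compiler pass.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the SSA page. Inside a real SGX enclave, an AEX dumps
 * register state here, clobbering anything the program stored in it. */
static uint64_t ssa_scratch[64];

static void pin_to_isolated_core(int core) {
    /* Assumes the kernel was booted with tickless options such as
     * isolcpus=3 nohz_full=3 rcu_nocbs=3, so the core gets no ticks. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    sched_setaffinity(0, sizeof(set), &set);
}

uint64_t sensitive_sum(const uint64_t *secret, int n) {
    /* Park the frame base address in the SSA instead of the stack. */
    ssa_scratch[0] = (uint64_t)(uintptr_t)__builtin_frame_address(0);

    uint64_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += secret[i];

    /* Reload it. If an interrupt hit this core, the hardware overwrote
     * ssa_scratch, the "pointer" is now stale CPU state, and the next
     * frame-relative access faults: the enclave self-destructs instead
     * of resuming under a single-stepping attacker. */
    uint64_t *frame = (uint64_t *)(uintptr_t)ssa_scratch[0];
    (void)frame; /* the compiler pass would address locals through this */
    return acc;
}

int main(void) {
    pin_to_isolated_core(3);
    uint64_t key[4] = {1, 2, 3, 4};
    printf("sum=%llu\n", (unsigned long long)sensitive_sum(key, 4));
    return 0;
}
```
Outside an enclave this sketch runs to completion; the point is that inside one, any interrupt on the pinned core replaces the parked frame address with hardware-written state, so the next frame-relative access faults and the enclave terminates instead of resuming under a single-stepping attacker.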
Related papers
- SIGY: Breaking Intel SGX Enclaves with Malicious Exceptions & Signals
We present SIGY, an attack that abuses how exceptions and signals are delivered into Intel SGX enclaves to break their confidentiality and integrity guarantees.
Seven runtimes and library OSes (OpenEnclave, Gramine, Scone, Asylo, Teaclave, Occlum, EnclaveOS) are vulnerable to SIGY.
arXiv Detail & Related papers (2024-04-22T09:02:24Z)
- Heckler: Breaking Confidential VMs with Malicious Interrupts
Heckler is a new attack wherein the hypervisor injects malicious non-timer interrupts to break the confidentiality and integrity of CVMs.
With AMD SEV-SNP and Intel TDX, we demonstrate Heckler on OpenSSH to bypass authentication.
arXiv Detail & Related papers (2024-04-04T11:37:59Z)
- Towards Practical Fabrication Stage Attacks Using Interrupt-Resilient Hardware Trojans
We introduce a new class of hardware trojans called interrupt-resilient trojans (IRTs).
IRTs can successfully address the problem of non-deterministic triggering in CPUs.
We show that our design allows for seamless integration during fabrication stage attacks.
arXiv Detail & Related papers (2024-03-15T19:55:23Z)
- AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting
We propose Adaptive Shield Prompting, which prepends inputs with defense prompts to defend MLLMs against structure-based jailbreak attacks.
Our methods can consistently improve MLLMs' robustness against structure-based jailbreak attacks.
arXiv Detail & Related papers (2024-03-14T15:57:13Z)
- Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing
Aligned large language models (LLMs) are vulnerable to jailbreaking attacks.
We propose SEMANTICSMOOTH, a smoothing-based defense that aggregates predictions of semantically transformed copies of a given input prompt.
arXiv Detail & Related papers (2024-02-25T20:36:03Z)
- HasTEE+: Confidential Cloud Computing and Analytics with Haskell
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs).
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (DSL) embedded in Haskell that enables programming TEEs in a high-level language with strong type safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z)
- TGh: A TEE/GC Hybrid Enabling Confidential FaaS Platforms
We propose a TEE/GC hybrid protocol to enable confidential FaaS platforms.
Our approach retains the security guarantees of enclaves while avoiding the performance issues associated with enclave management instructions.
arXiv Detail & Related papers (2023-09-14T14:51:38Z)
- Citadel: Real-World Hardware-Software Contracts for Secure Enclaves Through Microarchitectural Isolation and Controlled Speculation
Hardware isolation primitives such as secure enclaves aim to protect programs, but remain vulnerable to transient execution attacks.
This paper advocates for processors to incorporate microarchitectural isolation primitives and mechanisms for controlled speculation.
We introduce two mechanisms to securely share memory between an enclave and an untrusted OS in an out-of-order processor.
arXiv Detail & Related papers (2023-06-26T17:51:23Z)
- Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger
We propose to use syntactic structure as the trigger in textual backdoor attacks.
We conduct extensive experiments demonstrating that this syntactic-trigger-based attack achieves attack performance comparable to insertion-based triggers.
These results also reveal how insidious and harmful textual backdoor attacks can be.
arXiv Detail & Related papers (2021-05-26T08:54:19Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection
Adversarial EXEmples can bypass machine-learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models but also includes three novel attacks.
These attacks, named Full DOS, Extend, and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
- A Self-supervised Approach for Adversarial Robustness
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)