Comprehensive Kernel Safety in the Spectre Era: Mitigations and Performance Evaluation (Extended Version)
- URL: http://arxiv.org/abs/2411.18094v2
- Date: Fri, 11 Apr 2025 09:19:35 GMT
- Title: Comprehensive Kernel Safety in the Spectre Era: Mitigations and Performance Evaluation (Extended Version)
- Authors: Davide Davoli, Martin Avanzini, Tamara Rezk
- Abstract summary: We show that layout randomization offers a comparable safety guarantee in a system with memory separation. In practice, speculative execution and side-channels are recognized threats to layout randomization. We show that kernel safety cannot be restored for attackers capable of using side-channels and speculative execution.
- Score: 2.0436753359071913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The efficacy of address space layout randomization has been formally demonstrated in a shared-memory model by Abadi et al., contingent on specific assumptions about victim programs. However, modern operating systems, implementing layout randomization in the kernel, diverge from these assumptions and operate on a separate memory model with communication through system calls. In this work, we relax Abadi et al.'s language assumptions while demonstrating that layout randomization offers a comparable safety guarantee in a system with memory separation. However, in practice, speculative execution and side-channels are recognized threats to layout randomization. We show that kernel safety cannot be restored for attackers capable of using side-channels and speculative execution, and introduce enforcement mechanisms that can guarantee speculative kernel safety for safe system calls in the Spectre era. We implement three suitable mechanisms and we evaluate their performance overhead on the Linux kernel.
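The enforcement mechanisms evaluated in the paper target Spectre-style speculation. One widely deployed building block in the Linux kernel is speculation-safe array indexing (`array_index_nospec`), which clamps an out-of-bounds index with a branchless mask so that a mispredicted bounds check cannot steer a speculative load outside the array. The sketch below emulates the 64-bit masking arithmetic in Python purely for illustration; the kernel's actual implementation is C and architecture-specific assembly, and the function names here mirror it only loosely.

```python
BITS = 64
M = (1 << BITS) - 1  # emulate 64-bit unsigned arithmetic in Python


def mask_nospec(index: int, size: int) -> int:
    """All-ones when 0 <= index < size, all-zeros otherwise, computed
    without a data-dependent branch (mirrors the idea behind the Linux
    kernel's array_index_mask_nospec)."""
    # If index >= size, (size - 1 - index) wraps around and sets the
    # top bit of x; the sign bit of ~x then selects the mask.
    x = (index | ((size - 1 - index) & M)) & M
    # emulate an arithmetic right shift of the sign bit
    return (-((~x & M) >> (BITS - 1))) & M


def index_nospec(index: int, size: int) -> int:
    """Clamp an out-of-bounds index to 0 so that even a speculatively
    executed load stays inside the array."""
    return index & mask_nospec(index, size)
```

An in-bounds index passes through unchanged, while an out-of-bounds one is forced to 0, so a speculative `table[index_nospec(i, n)]` can never leak memory beyond the array.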
Related papers
- The Illusion of Randomness: An Empirical Analysis of Address Space Layout Randomization Implementations [4.939948478457799]
Real-world implementations of Address Space Layout Randomization are imperfect and subject to weaknesses that attackers can exploit.
This work evaluates the effectiveness of ASLR on major desktop platforms, including Linux and Windows.
We find a significant reduction in the entropy of libraries after Linux 5.18 and identify correlation paths that an attacker could leverage to significantly reduce exploitation complexity.
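Entropy reductions of this kind can be observed empirically. The sketch below is a rough, Linux-only illustration (not the paper's methodology): it samples the base address of libc across fresh processes via /proc/self/maps and uses the count of distinct bases as a loose lower bound on the randomization entropy exercised.

```python
import math
import subprocess

# One-liner run in a fresh interpreter so each sample sees a new layout
SNIPPET = (
    "line = next(l for l in open('/proc/self/maps') if 'libc' in l)\n"
    "print(line.split('-')[0])\n"
)


def libc_base(maps_text: str):
    """Parse the base address of the first libc mapping, or None."""
    for line in maps_text.splitlines():
        if "libc" in line:
            return int(line.split("-")[0], 16)
    return None


def entropy_lower_bound(bases) -> float:
    """log2 of the number of distinct bases observed: a loose lower
    bound on the randomization entropy the sampled runs exercised."""
    return math.log2(len(bases)) if bases else 0.0


if __name__ == "__main__":
    bases = set()
    for _ in range(32):
        out = subprocess.run(["python3", "-c", SNIPPET],
                             capture_output=True, text=True).stdout.strip()
        if out:
            bases.add(int(out, 16))
    print(f"{len(bases)} distinct libc bases, "
          f">= {entropy_lower_bound(bases):.1f} bits observed")
```

With 32 samples this can only witness up to 5 bits of entropy; measuring the full distribution, as the paper does, requires far more samples and per-region analysis.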
arXiv Detail & Related papers (2024-08-27T14:46:04Z)
- Jailbreaking as a Reward Misspecification Problem [80.52431374743998]
We propose a novel perspective that attributes this vulnerability to reward misspecification during the alignment process.
We introduce a metric ReGap to quantify the extent of reward misspecification and demonstrate its effectiveness.
We present ReMiss, a system for automated red teaming that generates adversarial prompts in a reward-misspecified space.
arXiv Detail & Related papers (2024-06-20T15:12:27Z)
- On Kernel's Safety in the Spectre Era (Extended Version) [2.0436753359071913]
We show that layout randomization offers a comparable safety guarantee in a system with memory separation.
We show that kernel safety cannot be restored for attackers capable of using side-channels and speculative execution.
Our research demonstrates that under this condition, the system remains safe without relying on layout randomization.
arXiv Detail & Related papers (2024-06-11T14:04:58Z)
- Securing Monolithic Kernels using Compartmentalization [0.9236074230806581]
A single flaw in a non-essential part of the kernel can cause the entire operating system to fall under an attacker's control.
Kernel hardening techniques might prevent certain types of vulnerabilities, but they fail to address a fundamental weakness.
We propose a taxonomy that allows the community to compare and discuss future work.
arXiv Detail & Related papers (2024-04-12T04:55:13Z)
- Certifying LLM Safety against Adversarial Prompting [75.19953634352258]
Large language models (LLMs) are vulnerable to adversarial attacks that add malicious tokens to an input prompt.
We introduce erase-and-check, the first framework for defending against adversarial prompts with certifiable safety guarantees.
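The erase-and-check idea can be sketched in a few lines: a prompt is certified safe only if the safety filter clears both the prompt and every variant with up to d trailing tokens erased, so an adversarial suffix of length at most d cannot hide a harmful prompt. The toy keyword filter below stands in for the paper's model-based filter, and the names are illustrative, not taken from the paper's code.

```python
def is_harmful(tokens) -> bool:
    """Toy stand-in for a learned safety filter."""
    return "EXPLOIT" in tokens


def erase_and_check(tokens, d: int) -> bool:
    """Certify a prompt safe only if the filter clears the prompt and
    every suffix-erased variant of up to d tokens (suffix mode; the
    paper also describes insertion and infusion modes)."""
    for k in range(min(d, len(tokens)) + 1):
        if is_harmful(tokens[: len(tokens) - k]):
            return False  # some erased version is flagged: reject
    return True
```

If the filter correctly flags the clean harmful prompt, erasing the adversarial suffix exposes it, which is the source of the certifiable guarantee.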
arXiv Detail & Related papers (2023-09-06T04:37:20Z)
- Citadel: Real-World Hardware-Software Contracts for Secure Enclaves Through Microarchitectural Isolation and Controlled Speculation [8.414722884952525]
Hardware isolation primitives such as secure enclaves aim to protect programs, but remain vulnerable to transient execution attacks.
This paper advocates for processors to incorporate microarchitectural isolation primitives and mechanisms for controlled speculation.
We introduce two mechanisms to securely share memory between an enclave and an untrusted OS in an out-of-order processor.
arXiv Detail & Related papers (2023-06-26T17:51:23Z)
- SafeDiffuser: Safe Planning with Diffusion Probabilistic Models [97.80042457099718]
Diffusion model-based approaches have shown promise in data-driven planning, but there are no safety guarantees.
We propose a new method, called SafeDiffuser, to ensure diffusion probabilistic models satisfy specifications.
We test our method on a series of safe planning tasks, including maze path generation, legged robot locomotion, and 3D space manipulation.
arXiv Detail & Related papers (2023-05-31T19:38:12Z) - DISCO: Adversarial Defense with Local Implicit Functions [79.39156814887133]
A novel aDversarIal defenSe with local impliCit functiOns is proposed to remove adversarial perturbations by localized manifold projections.
DISCO consumes an adversarial image and a query pixel location and outputs a clean RGB value at the location.
arXiv Detail & Related papers (2022-12-11T23:54:26Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch descent gradient.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Meta-Learning Hypothesis Spaces for Sequential Decision-making [79.73213540203389]
We propose to meta-learn a kernel from offline data (Meta-KeL).
Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets.
We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
arXiv Detail & Related papers (2022-02-01T17:46:51Z)
- Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.