Comprehensive Kernel Safety in the Spectre Era: Mitigations and Performance Evaluation (Extended Version)
- URL: http://arxiv.org/abs/2411.18094v1
- Date: Wed, 27 Nov 2024 07:06:28 GMT
- Title: Comprehensive Kernel Safety in the Spectre Era: Mitigations and Performance Evaluation (Extended Version)
- Authors: Davide Davoli, Martin Avanzini, Tamara Rezk
- Abstract summary: We show that layout randomization offers a comparable safety guarantee in a system with memory separation.
We show that kernel safety cannot be restored for attackers capable of using side-channels and speculative execution.
We introduce enforcement mechanisms that can guarantee speculative kernel safety for safe system calls in the Spectre era.
- Score: 2.0436753359071913
- License:
- Abstract: The efficacy of address space layout randomization has been formally demonstrated in a shared-memory model by Abadi et al., contingent on specific assumptions about victim programs. However, modern operating systems, implementing layout randomization in the kernel, diverge from these assumptions and operate on a separate memory model with communication through system calls. In this work, we relax Abadi et al.'s language assumptions while demonstrating that layout randomization offers a comparable safety guarantee in a system with memory separation. However, in practice, speculative execution and side-channels are recognized threats to layout randomization. We show that kernel safety cannot be restored for attackers capable of using side-channels and speculative execution, and introduce enforcement mechanisms that can guarantee speculative kernel safety for safe system calls in the Spectre era. We implement two suitable mechanisms and use them to compile the Linux kernel in order to evaluate their performance overhead.
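The abstract does not name the two enforcement mechanisms, so the following is only a minimal sketch of the problem space: it assumes a classic bounds-check-bypass (Spectre-v1) gadget in a hypothetical system call handler and shows two widely used hardening idioms, a serializing fence and branchless index masking in the spirit of Linux's array_index_nospec(). None of the identifiers below come from the paper.

```c
/*
 * Minimal sketch, not the paper's mechanisms: a Spectre-v1 style
 * bounds-check-bypass gadget in a hypothetical syscall handler, plus two
 * common hardening idioms. All names here are illustrative.
 */
#include <stddef.h>
#include <stdint.h>

#define TABLE_SIZE 256
static uint8_t secret_table[TABLE_SIZE];
static uint8_t probe[256 * 4096];        /* cache side-channel target */

/* Vulnerable: the branch may be mispredicted, so the out-of-bounds load
 * and the dependent probe access can still execute speculatively. */
uint8_t sys_read_unsafe(size_t idx)
{
    if (idx < TABLE_SIZE)
        return probe[secret_table[idx] * 4096];
    return 0;
}

/* Mitigation 1: serialize speculation after the bounds check (x86 lfence). */
uint8_t sys_read_fenced(size_t idx)
{
    if (idx < TABLE_SIZE) {
#if defined(__x86_64__)
        __asm__ __volatile__("lfence" ::: "memory");
#endif
        return probe[secret_table[idx] * 4096];
    }
    return 0;
}

/* Mitigation 2: clamp the index with a data dependency instead of a branch,
 * in the spirit of Linux's array_index_nospec(). Real kernels compute the
 * mask without a comparison the compiler might lower back into a branch. */
uint8_t sys_read_masked(size_t idx)
{
    size_t mask = (size_t)0 - (size_t)(idx < TABLE_SIZE); /* all-ones iff in bounds */
    size_t safe = idx & mask;                             /* clamped to 0 when out of bounds */

    if (idx < TABLE_SIZE)
        return probe[secret_table[safe] * 4096];
    return 0;
}
```

Whatever concrete mechanisms the authors implement, the performance question the paper evaluates is the cost of applying this kind of hardening across the entire Linux kernel.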
Related papers
- The Illusion of Randomness: An Empirical Analysis of Address Space Layout Randomization Implementations [4.939948478457799]
Real-world implementations of Address Space Layout Randomization are imperfect and subject to weaknesses that attackers can exploit.
This work evaluates the effectiveness of ASLR on major desktop platforms, including Linux and Windows.
We find a significant reduction in the entropy of libraries after Linux 5.18 and identify correlation paths that an attacker could leverage to significantly reduce exploitation complexity.
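To make "entropy of libraries" concrete, here is a minimal measurement sketch (not the paper's methodology; the file path and library name are only assumptions about a typical Linux setup): it prints the load base of libc from /proc/self/maps, and running it repeatedly shows how many address bits actually vary between executions.

```c
/*
 * Minimal sketch (not the paper's methodology): print the load base of
 * libc for the current process by scanning /proc/self/maps. Running the
 * program many times and counting how many address bits vary gives a
 * crude estimate of per-library ASLR entropy on Linux.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *maps = fopen("/proc/self/maps", "r");
    char line[512];

    if (!maps)
        return 1;
    while (fgets(line, sizeof(line), maps)) {
        if (strstr(line, "libc")) {          /* first libc mapping */
            unsigned long base;
            if (sscanf(line, "%lx-", &base) == 1)
                printf("%lx\n", base);
            break;
        }
    }
    fclose(maps);
    return 0;
}
```

For example, `for i in $(seq 1000); do ./libc_base; done | sort -u | wc -l` counts the distinct bases observed across 1000 runs, a rough lower bound on the randomization entropy.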
arXiv Detail & Related papers (2024-08-27T14:46:04Z) - Jailbreaking as a Reward Misspecification Problem [80.52431374743998]
We propose a novel perspective that attributes this vulnerability to reward misspecification during the alignment process.
We introduce a metric ReGap to quantify the extent of reward misspecification and demonstrate its effectiveness.
We present ReMiss, a system for automated red teaming that generates adversarial prompts in a reward-misspecified space.
arXiv Detail & Related papers (2024-06-20T15:12:27Z) - On Kernel's Safety in the Spectre Era (Extended Version) [2.0436753359071913]
We show that layout randomization offers a comparable safety guarantee in a system with memory separation.
We show that kernel safety cannot be restored for attackers capable of using side-channels and speculative execution.
Our research demonstrates that under this condition, the system remains safe without relying on layout randomization.
arXiv Detail & Related papers (2024-06-11T14:04:58Z) - Citadel: Simple Spectre-Safe Isolation For Real-World Programs That Share Memory [8.414722884952525]
We introduce a new security property we call relaxed microarchitectural isolation (RMI).
RMI allows sensitive programs that are not constant-time to share memory with an attacker while restricting the information leakage to that of non-speculative execution.
Our end-to-end prototype, Citadel, consists of an FPGA-based multicore processor that boots Linux and runs secure applications.
arXiv Detail & Related papers (2023-06-26T17:51:23Z) - SafeDiffuser: Safe Planning with Diffusion Probabilistic Models [97.80042457099718]
Diffusion model-based approaches have shown promise in data-driven planning, but there are no safety guarantees.
We propose a new method, called SafeDiffuser, to ensure diffusion probabilistic models satisfy specifications.
We test our method on a series of safe planning tasks, including maze path generation, legged robot locomotion, and 3D space manipulation.
arXiv Detail & Related papers (2023-05-31T19:38:12Z) - DISCO: Adversarial Defense with Local Implicit Functions [79.39156814887133]
A novel aDversarIal defenSe with local impliCit functiOns is proposed to remove adversarial perturbations by localized manifold projections.
DISCO consumes an adversarial image and a query pixel location and outputs a clean RGB value at the location.
arXiv Detail & Related papers (2022-12-11T23:54:26Z) - Meta-Learning Hypothesis Spaces for Sequential Decision-making [79.73213540203389]
We propose to meta-learn a kernel from offline data (Meta-KeL).
Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets.
We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
arXiv Detail & Related papers (2022-02-01T17:46:51Z) - Are We There Yet? Timing and Floating-Point Attacks on Differential Privacy Systems [18.396937775602808]
We study two implementation flaws in the noise generation commonly used in differentially private (DP) systems.
First we examine the Gaussian mechanism's susceptibility to a floating-point representation attack.
Second we study discrete counterparts of the Laplace and Gaussian mechanisms that suffer from another side channel: a novel timing attack.
arXiv Detail & Related papers (2021-12-10T02:57:01Z) - Kanerva++: extending The Kanerva Machine with differentiable, locally block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z) - Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.