Friend or Foe Inside? Exploring In-Process Isolation to Maintain Memory Safety for Unsafe Rust
- URL: http://arxiv.org/abs/2306.08127v2
- Date: Wed, 08 Oct 2025 07:10:47 GMT
- Title: Friend or Foe Inside? Exploring In-Process Isolation to Maintain Memory Safety for Unsafe Rust
- Authors: Merve Gülmez, Thomas Nyman, Christoph Baumann, Jan Tobias Mühlberg
- Abstract summary: Rust provides *unsafe* language features that shift responsibility for ensuring memory safety to the developer. In this work we explore in-process isolation with Memory Protection Keys as a mechanism to shield safe program sections from safety violations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rust is a popular memory-safe systems programming language. In order to interact with hardware or call into non-Rust libraries, Rust provides *unsafe* language features that shift responsibility for ensuring memory safety to the developer. Failing to do so may lead to memory safety violations in unsafe code that can compromise the safety of the entire application. In this work we explore in-process isolation with Memory Protection Keys as a mechanism to shield safe program sections from safety violations that may happen in unsafe sections. Our approach is easy to use and comprehensive, as it prevents both heap- and stack-based violations. We further compare process-based and in-process isolation mechanisms and the necessary requirements for data serialization, communication, and context switching. Our results show that in-process isolation can be effective and efficient, permits a high degree of automation, and also enables a notion of application rewinding where the safe program section may detect and safely handle violations in unsafe code.
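The hazard the abstract describes can be made concrete with a minimal, self-contained Rust sketch (this is illustrative only, not code from the paper): inside an `unsafe` block, raw-pointer writes bypass the compiler's bounds and aliasing checks, so a single wrong offset could corrupt memory that safe code in the same address space depends on.

```rust
// Minimal sketch (not the paper's implementation) of why `unsafe`
// shifts the burden of memory safety onto the developer.
fn main() {
    let mut buf = vec![0u8; 4];
    let p = buf.as_mut_ptr();

    unsafe {
        // This write happens to be in bounds, so it is well-defined,
        // but the compiler cannot verify that: an off-by-one in the
        // offset would silently corrupt adjacent memory in the same
        // monolithic address space that safe Rust code relies on.
        *p.add(3) = 42;
    }

    assert_eq!(buf, vec![0, 0, 0, 42]);
    println!("{:?}", buf);
}
```

In-process isolation schemes like the one explored in the paper confine such unsafe sections to a separate Memory Protection Key domain, so that a stray write triggers a fault the safe section can detect and recover from, rather than silent corruption.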
Related papers
- Contextual Safety Reasoning and Grounding for Open-World Robots [79.98924225712668]
CORE is a safety framework that enables online contextual reasoning, grounding, and enforcement without prior knowledge of the environment. We provide probabilistic safety guarantees for CORE that account for perceptual uncertainty. We demonstrate through simulation and real-world experiments that CORE enforces contextually appropriate behavior in unseen environments.
arXiv Detail & Related papers (2026-02-23T15:51:23Z) - RoboSafe: Safeguarding Embodied Agents via Executable Safety Logic [56.38397499463889]
Embodied agents powered by vision-language models (VLMs) are increasingly capable of executing complex real-world tasks. However, they remain vulnerable to hazardous instructions that may trigger unsafe behaviors. We propose RoboSafe, a runtime safeguard for embodied agents through executable predicate-based safety logic.
arXiv Detail & Related papers (2025-12-24T15:01:26Z) - SafeFFI: Efficient Sanitization at the Boundary Between Safe and Unsafe Code in Rust and Mixed-Language Applications [5.578413517654703]
Unsafe Rust code is necessary for interoperability with C/C++ libraries and for implementing low-level data structures. Sanitizers can catch such memory errors at runtime, but introduce many unnecessary checks even for memory accesses guaranteed safe by the Rust type system. We introduce SafeFFI, a system for optimizing memory safety instrumentation in Rust binaries.
arXiv Detail & Related papers (2025-10-23T16:02:45Z) - SandCell: Sandboxing Rust Beyond Unsafe Code [14.279471205248532]
Rust is a modern systems programming language that ensures memory safety by enforcing ownership and borrowing rules at compile time. Various approaches for isolating unsafe code to protect safe Rust from vulnerabilities have been proposed. This paper presents SandCell for flexible and lightweight isolation in Rust by leveraging existing syntactic boundaries.
arXiv Detail & Related papers (2025-09-28T19:01:51Z) - CRUST-Bench: A Comprehensive Benchmark for C-to-safe-Rust Transpilation [51.18863297461463]
CRUST-Bench is a dataset of 100 C repositories, each paired with manually written interfaces in safe Rust as well as test cases. We evaluate state-of-the-art large language models (LLMs) on this task and find that safe and idiomatic Rust generation is still a challenging problem. The best-performing model, OpenAI o1, is able to solve only 15 tasks in a single-shot setting.
arXiv Detail & Related papers (2025-04-21T17:33:33Z) - SafeSwitch: Steering Unsafe LLM Behavior via Internal Activation Signals [51.49737867797442]
Large language models (LLMs) exhibit exceptional capabilities across various tasks but also pose risks by generating harmful content. We show that LLMs can perform internal assessments about safety in their internal states. We propose SafeSwitch, a framework that regulates unsafe outputs by utilizing a prober-based internal state monitor.
arXiv Detail & Related papers (2025-02-03T04:23:33Z) - Characterizing Unsafe Code Encapsulation In Real-world Rust Systems [2.285834282327349]
Interior unsafe is an essential design paradigm advocated by the Rust community in system software development.
The Rust compiler is incapable of verifying the soundness of a safe function containing unsafe code.
We propose a novel unsafety isolation graph to model the essential usage and encapsulation of unsafe code.
arXiv Detail & Related papers (2024-06-12T06:59:51Z) - Fast Summary-based Whole-program Analysis to Identify Unsafe Memory Accesses in Rust [23.0568924498396]
Rust is one of the most promising systems programming languages to solve the memory safety issues that have plagued low-level software for over forty years.
However, unsafe Rust code and directly linked unsafe foreign libraries may not only introduce memory safety violations themselves but also compromise the entire program, as they run in the same monolithic address space as the safe Rust code.
We have prototyped a whole-program analysis for identifying both unsafe heap allocations and memory accesses to those unsafe heap objects.
arXiv Detail & Related papers (2023-10-16T11:34:21Z) - Unsafe's Betrayal: Abusing Unsafe Rust in Binary Reverse Engineering toward Finding Memory-safety Bugs via Machine Learning [20.68333298047064]
Rust provides memory-safe mechanisms to avoid memory-safety bugs in programming.
Unsafe code that enhances the usability of Rust provides clear spots for finding memory-safety bugs.
We claim that these unsafe spots can still be identifiable in Rust binary code via machine learning.
arXiv Detail & Related papers (2022-10-31T19:32:18Z) - Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments [84.3830478851369]
We propose a safe reinforcement learning approach that can jointly learn the environment and optimize the control policy.
Our approach can effectively enforce hard safety constraints and significantly outperforms CMDP-based baseline methods in the system safety rate measured via simulations.
arXiv Detail & Related papers (2022-09-29T20:49:25Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.