Is unsafe an Achilles' Heel? A Comprehensive Study of Safety
Requirements in Unsafe Rust Programming
- URL: http://arxiv.org/abs/2308.04785v1
- Date: Wed, 9 Aug 2023 08:16:10 GMT
- Title: Is unsafe an Achilles' Heel? A Comprehensive Study of Safety
Requirements in Unsafe Rust Programming
- Authors: Mohan Cui, Suran Sun, Hui Xu, Yangfan Zhou
- Abstract summary: Rust is an emerging, strongly-typed programming language focusing on efficiency and memory safety.
Current unsafe API documents in the standard library exhibit variations, including inconsistency and insufficiency.
To enhance Rust security, we suggest that unsafe API documents list systematic descriptions of safety requirements for users to follow.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rust is an emerging, strongly-typed programming language focusing on
efficiency and memory safety. As more projects adopt Rust, knowing how to use
Unsafe Rust is crucial for Rust security. We observed that the descriptions of
safety requirements in Unsafe Rust programming are not unified: current unsafe
API documents in the standard library exhibit variations, including
inconsistency and insufficiency. To enhance Rust security, we suggest that
unsafe API documents list systematic descriptions of the safety requirements
users must uphold. In this paper, we conducted the first comprehensive
empirical study of safety requirements across unsafe boundaries. We studied
unsafe API documents in the standard library and defined 19 safety properties
(SPs). We then labeled 416 unsafe APIs with these properties and analyzed
their correlations to find interpretable results. To validate practical
usability and SP coverage, we categorized existing Rust CVEs up to 2023-07-08
and performed a statistical analysis of std unsafe API usage across the
crates.io ecosystem. In addition, we conducted a user survey of experienced
Rust programmers to gain insights into four aspects of unsafe programming; we
received 50 valid responses and confirmed our classification with statistical
significance.
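As a concrete illustration of such safety requirements, here is a minimal
sketch (not taken from the paper) built around std::slice::from_raw_parts, an
unsafe std API whose documentation already enumerates the kind of requirements
the authors argue should be listed systematically; the example program and its
assertions are for illustration only.

```rust
use std::slice;

fn main() {
    let v = vec![1u32, 2, 3, 4];
    let ptr = v.as_ptr();
    let len = v.len();

    // Documented safety requirements for slice::from_raw_parts:
    //  - `ptr` must be non-null, properly aligned, and valid for reads
    //    of `len * size_of::<u32>()` bytes;
    //  - the first `len` values must be initialized;
    //  - the memory must not be mutated while the slice is alive;
    //  - the total size must not exceed isize::MAX bytes.
    debug_assert!(!ptr.is_null());
    debug_assert_eq!(ptr as usize % std::mem::align_of::<u32>(), 0);

    // SAFETY: `ptr` and `len` come from a live Vec<u32>, which upholds all
    // of the requirements above; the compiler verifies none of them.
    let s: &[u32] = unsafe { slice::from_raw_parts(ptr, len) };
    assert_eq!(s, &[1, 2, 3, 4]);
}
```

Note that debug_assert! can check only the mechanical requirements (non-null,
aligned); validity, initialization, and non-mutation remain obligations the
caller must reason about, which is precisely why systematic documentation
matters.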
Related papers
- What Makes and Breaks Safety Fine-tuning? A Mechanistic Study (arXiv, 2024-07-14)
  Safety fine-tuning helps align Large Language Models (LLMs) with human preferences for their safe deployment. The authors design a synthetic data generation framework that captures salient aspects of an unsafe input and use it to investigate three well-known safety fine-tuning methods.
- Characterizing Unsafe Code Encapsulation In Real-world Rust Systems (arXiv, 2024-06-12)
  Interior unsafe is an essential design paradigm advocated by the Rust community in system software development, yet the Rust compiler is incapable of verifying the soundness of a safe function containing unsafe code. The authors propose a novel unsafety isolation graph to model the essential usage and encapsulation of unsafe code; a minimal sketch of the pattern follows.
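The following sketch of the interior-unsafe pattern uses a hypothetical
FixedBuf type (not from the paper): a safe method whose soundness rests
entirely on a programmer-maintained check around an unsafe block.

```rust
/// Hypothetical byte buffer used only to illustrate interior unsafe.
struct FixedBuf {
    data: Vec<u8>,
}

impl FixedBuf {
    fn new(len: usize) -> Self {
        FixedBuf { data: vec![0; len] }
    }

    // Interior unsafe: the function is safe to call, but its soundness
    // rests entirely on the bounds check below. The compiler cannot
    // verify this encapsulation; removing the check still compiles and
    // silently reintroduces undefined behavior.
    fn get(&self, i: usize) -> Option<u8> {
        if i < self.data.len() {
            // SAFETY: `i < self.data.len()` was checked just above.
            Some(unsafe { *self.data.get_unchecked(i) })
        } else {
            None
        }
    }
}

fn main() {
    let buf = FixedBuf::new(4);
    assert_eq!(buf.get(2), Some(0));
    assert_eq!(buf.get(9), None);
}
```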
- Bringing Rust to Safety-Critical Systems in Space (arXiv, 2024-05-28)
  Rust aims to drastically reduce the chance of introducing bugs and produces more secure and safer code overall. This work provides a set of recommendations for the development of safety-critical space systems in Rust.
- Fast Summary-based Whole-program Analysis to Identify Unsafe Memory Accesses in Rust (arXiv, 2023-10-16)
  Rust is one of the most promising systems programming languages for solving the memory safety issues that have plagued low-level software for over forty years. However, unsafe Rust code and directly linked unsafe foreign libraries may not only introduce memory safety violations themselves but also compromise the entire program, since they run in the same monolithic address space as safe Rust. The authors prototype a whole-program analysis that identifies both unsafe heap allocations and memory accesses to those unsafe heap objects (see the sketch below).
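Below is a minimal, hedged sketch (not the paper's analysis or tooling) of the
two artifacts such a whole-program analysis would track: an unsafe heap
allocation and the raw-pointer accesses to it.

```rust
use std::alloc::{alloc, dealloc, Layout};

fn main() {
    let layout = Layout::array::<u64>(4).expect("valid layout");

    unsafe {
        // An "unsafe heap allocation": the raw pointer returned by
        // `alloc` escapes Rust's ownership and borrow tracking entirely.
        let ptr = alloc(layout) as *mut u64;
        assert!(!ptr.is_null(), "allocation failed");

        // "Unsafe memory accesses" to that heap object: only the
        // programmer keeps these offsets in bounds.
        for i in 0..4 {
            ptr.add(i).write(i as u64 * 10);
        }
        assert_eq!(ptr.add(3).read(), 30);

        dealloc(ptr as *mut u8, layout);
    }
}
```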
- SafetyBench: Evaluating the Safety of Large Language Models (arXiv, 2023-09-13)
  SafetyBench is a comprehensive benchmark for evaluating the safety of Large Language Models (LLMs). It comprises 11,435 diverse multiple-choice questions spanning 7 distinct categories of safety concerns. Tests of 25 popular Chinese and English LLMs in both zero-shot and few-shot settings reveal a substantial performance advantage for GPT-4 over its counterparts.
- A Closer Look at the Security Risks in the Rust Ecosystem (arXiv, 2023-08-29)
  Rust is an emerging programming language designed for the development of systems software. To facilitate the reuse of Rust code, crates.io, the central package registry of the Rust ecosystem, hosts thousands of third-party Rust packages. The openness of crates.io enables the growth of the Rust ecosystem but also brings security risks, as evidenced by severe security advisories.
- Safe Deep Reinforcement Learning by Verifying Task-Level Properties (arXiv, 2023-02-20)
  Cost functions are commonly employed in Safe Deep Reinforcement Learning (DRL). The cost is typically encoded as an indicator function because quantifying the risk of policy decisions in the state space is difficult. This paper investigates an alternative approach that uses domain knowledge to quantify the risk in the proximity of unsafe states by defining a violation metric.
- Online Safety Property Collection and Refinement for Safe Deep Reinforcement Learning in Mapless Navigation (arXiv, 2023-02-13)
  The Collection and Refinement of Online Properties (CROP) framework designs safety properties at training time. CROP employs a cost signal to identify unsafe interactions and uses them to shape safety properties. Evaluations on several robotic mapless navigation tasks demonstrate that the violation metric computed with CROP yields higher returns and lower violations than previous Safe DRL approaches.
- Unsafe's Betrayal: Abusing Unsafe Rust in Binary Reverse Engineering toward Finding Memory-safety Bugs via Machine Learning (arXiv, 2022-10-31)
  Rust provides memory-safe mechanisms to avoid memory-safety bugs in programming, while the unsafe code that enhances Rust's usability provides clear spots for finding such bugs. The authors claim that these unsafe spots remain identifiable in Rust binary code via machine learning.
- Provable Safe Reinforcement Learning with Binary Feedback (arXiv, 2022-10-26)
  This work considers provably safe RL given access to an offline oracle providing binary feedback on the safety of (state, action) pairs. It introduces SABRE, a novel meta-algorithm that can be applied to any MDP setting given access to a blackbox PAC RL algorithm for that setting.
- SafeText: A Benchmark for Exploring Physical Safety in Language Models (arXiv, 2022-10-18)
  This work studies commonsense physical safety across various models designed for text generation and commonsense reasoning tasks, finding that state-of-the-art large language models are susceptible to generating unsafe text and have difficulty rejecting unsafe advice.