"Against the Void": An Interview and Survey Study on How Rust Developers Use Unsafe Code
- URL: http://arxiv.org/abs/2404.02230v2
- Date: Wed, 17 Apr 2024 18:15:58 GMT
- Title: "Against the Void": An Interview and Survey Study on How Rust Developers Use Unsafe Code
- Authors: Ian McCormack, Tomas Dougan, Sam Estep, Hanan Hibshi, Jonathan Aldrich, Joshua Sunshine
- Abstract summary: Rust provides its safety guarantees by restricting aliasing and mutability.
Key design patterns, such as cyclic aliasing and multi-language interoperation, must bypass these restrictions.
- Score: 2.2463451968497425
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Rust programming language is an increasingly popular choice for systems programming, since it can statically guarantee memory safety without automatic garbage collection. Rust provides its safety guarantees by restricting aliasing and mutability, but many key design patterns, such as cyclic aliasing and multi-language interoperation, must bypass these restrictions. Rust's $\texttt{unsafe}$ keyword enables features that developers can use to implement these patterns, and the Rust ecosystem includes useful tools for validating whether $\texttt{unsafe}$ code is used correctly. However, it is unclear if these tools are adequate for all use cases. To understand developers' needs, we conducted a mixed-methods study consisting of semi-structured interviews followed by a survey. We interviewed 19 Rust developers and surveyed 160 developers, all of whom engaged with $\texttt{unsafe}$ code. We found that 77% of survey respondents and a majority of interview participants were motivated to use $\texttt{unsafe}$ code because they were unaware of a safe alternative. Developers typically followed best practices such as minimizing and localizing their use of $\texttt{unsafe}$ code, but only 23% were always certain that their encapsulations were sound. Limited tooling support for inline assembly and foreign function calls prevented developers from validating $\texttt{unsafe}$ code, and differences between Rust and other languages made foreign functions difficult to encapsulate. Verification tools were underused, and developers rarely audited their dependencies. Our results indicate a pressing need for production-ready tools that can validate the most frequently used $\texttt{unsafe}$ features.
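To make the encapsulation practice concrete, below is a minimal sketch (ours, not the paper's) of the "minimize and localize" pattern, modeled on the standard library's slice::split_at_mut. The function name is hypothetical; the point is that the $\texttt{unsafe}$ block is small, guarded by a checked invariant, and hidden behind a safe signature, precisely the kind of encapsulation whose soundness only 23% of respondents were always certain about.

```rust
// A minimal sketch (not from the paper) of a safe API that localizes one
// small `unsafe` block, in the style of `slice::split_at_mut`.
fn split_at_mut_sketch(v: &mut [u8], mid: usize) -> (&mut [u8], &mut [u8]) {
    let len = v.len();
    let ptr = v.as_mut_ptr();
    // The bounds check below is the invariant the unsafe block relies on.
    assert!(mid <= len, "mid out of bounds");
    unsafe {
        // SAFETY: [0, mid) and [mid, len) are disjoint, so the two `&mut`
        // slices never alias; the compiler cannot verify this, we must.
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut data = [1u8, 2, 3, 4];
    let (left, right) = split_at_mut_sketch(&mut data, 2);
    left[0] = 9;
    right[0] = 9;
    assert_eq!(data, [9, 2, 9, 4]);
}
```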
Related papers
- Characterizing Unsafe Code Encapsulation In Real-world Rust Systems [2.285834282327349]
Interior unsafe is an essential design paradigm advocated by the Rust community in system software development.
The Rust compiler is incapable of verifying the soundness of a safe function containing unsafe code.
We propose a novel unsafety isolation graph to model the essential usage and encapsulation of unsafe code.
arXiv Detail & Related papers (2024-06-12T06:59:51Z)
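To ground the entry above, here is a minimal sketch (ours, not the paper's) of the interior-unsafe pattern. The signature is safe, so the compiler accepts the body with or without the guarding check; soundness rests entirely on the human-checked SAFETY comment, which is what an unsafety isolation graph aims to make auditable.

```rust
// A sketch of "interior unsafe": a function with a safe signature whose
// body contains unsafe code the compiler cannot verify for soundness.
pub fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: the slice is non-empty, so index 0 is in bounds.
    Some(unsafe { *bytes.get_unchecked(0) })
}
```

Removing the emptiness check would still compile, but the function would then be unsound for empty slices; rustc flags neither version.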
- VERT: Verified Equivalent Rust Transpilation with Large Language Models as Few-Shot Learners [6.824327908701066]
Rust is a programming language that combines memory safety and low-level control, providing C-like performance.
Existing transpilation approaches fall into two categories: rule-based and large language model (LLM)-based.
We present VERT, a tool that can produce readable Rust transpilations with formal guarantees of correctness.
arXiv Detail & Related papers (2024-04-29T16:45:03Z)
- A Study of Undefined Behavior Across Foreign Function Boundaries in Rust Libraries [2.359557447960552]
Rust is frequently used to interoperate with languages that have far weaker restrictions.
We created MiriLLI, a tool which uses existing Rust and LLVM interpreters to jointly execute multi-language applications.
arXiv Detail & Related papers (2024-04-17T18:12:05Z)
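To illustrate the kind of boundary MiriLLI executes jointly, here is a minimal sketch (ours) of a foreign call whose safety condition is invisible to rustc. strlen is a real libc function, declared by hand here for the example.

```rust
use std::ffi::CString;
use std::os::raw::c_char;

extern "C" {
    // Provided by the platform's C library.
    fn strlen(s: *const c_char) -> usize;
}

fn main() {
    let s = CString::new("hello").expect("no interior NUL bytes");
    // SAFETY: `s` is a valid, NUL-terminated C string that outlives the call.
    // Passing a buffer without a terminator would make the C side read out
    // of bounds: undefined behavior that rustc cannot detect.
    let n = unsafe { strlen(s.as_ptr()) };
    assert_eq!(n, 5);
}
```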
- ToolSword: Unveiling Safety Issues of Large Language Models in Tool Learning Across Three Stages [46.86723087688694]
Tool learning is widely acknowledged as a foundational approach for deploying large language models (LLMs) in real-world scenarios.
$ToolSword$ is a framework dedicated to investigating safety issues linked to LLMs in tool learning.
Experiments conducted on 11 open-source and closed-source LLMs reveal enduring safety challenges in tool learning.
arXiv Detail & Related papers (2024-02-16T15:19:46Z)
- All Languages Matter: On the Multilingual Safety of Large Language Models [96.47607891042523]
We build the first multilingual safety benchmark for large language models (LLMs).
XSafety covers 14 kinds of commonly used safety issues across 10 languages that span several language families.
We propose several simple and effective prompting methods to improve the multilingual safety of ChatGPT.
arXiv Detail & Related papers (2023-10-02T05:23:34Z)
- Fixing Rust Compilation Errors using LLMs [2.1781086368581932]
The Rust programming language has established itself as a viable choice for low-level systems programming over traditional, unsafe alternatives like C/C++.
This paper presents a tool called RustAssistant that leverages the emergent capabilities of Large Language Models (LLMs) to automatically suggest fixes for Rust compilation errors.
RustAssistant is able to achieve an impressive peak accuracy of roughly 74% on real-world compilation errors in popular open-source Rust repositories.
arXiv Detail & Related papers (2023-08-09T18:30:27Z)
- Is unsafe an Achilles' Heel? A Comprehensive Study of Safety Requirements in Unsafe Rust Programming [4.981203415693332]
Rust is an emerging, strongly-typed programming language focusing on efficiency and memory safety.
Current unsafe API documentation in the standard library is inconsistent and often insufficient.
To enhance Rust security, we suggest that unsafe API documentation systematically list the safety requirements users must follow.
arXiv Detail & Related papers (2023-08-09T08:16:10Z)
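As an illustration of the suggestion above, here is a hypothetical unsafe API documented in the standard # Safety doc-comment convention, the kind of systematic requirement listing the paper recommends (the function echoes slice::get_unchecked; the name is our own).

```rust
/// Returns the byte at `index` without bounds checking.
///
/// # Safety
///
/// Callers must guarantee that `index < data.len()`; otherwise the call
/// is undefined behavior.
pub unsafe fn byte_at_unchecked(data: &[u8], index: usize) -> u8 {
    debug_assert!(index < data.len());
    // SAFETY: upheld by this function's documented contract.
    unsafe { *data.get_unchecked(index) }
}
```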
- A Static Evaluation of Code Completion by Large Language Models [65.18008807383816]
Execution-based benchmarks have been proposed to evaluate functional correctness of model-generated code on simple programming problems.
Static analysis tools such as linters, which can detect errors without running the program, haven't been well explored for evaluating code generation models.
We propose a static evaluation framework to quantify static errors in Python code completions, by leveraging Abstract Syntax Trees.
arXiv Detail & Related papers (2023-06-05T19:23:34Z)
- Interactive Code Generation via Test-Driven User-Intent Formalization [60.90035204567797]
Large language models (LLMs) produce code from informal natural language (NL) intent.
It is hard to define a notion of correctness since natural language can be ambiguous and lacks a formal semantics.
We describe a language-agnostic abstract algorithm and a concrete implementation, TiCoder.
arXiv Detail & Related papers (2022-08-11T17:41:08Z)
- textless-lib: a Library for Textless Spoken Language Processing [50.070693765984075]
We introduce textless-lib, a PyTorch-based library aimed at facilitating research in this area.
We describe the building blocks that the library provides and demonstrate its usability.
arXiv Detail & Related papers (2022-02-15T12:39:42Z)
- RNNs can generate bounded hierarchical languages with optimal memory [113.73133308478612]
We show that RNNs can efficiently generate bounded hierarchical languages that reflect the scaffolding of natural language syntax.
We introduce Dyck-($k$,$m$), the language of well-nested brackets (of $k$ types) and $m$-bounded nesting depth.
We prove that an RNN with $O(m \log k)$ hidden units suffices, an exponential reduction in memory, by an explicit construction.
arXiv Detail & Related papers (2020-10-15T04:42:29Z)
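Read as a counting argument (our gloss, not the paper's construction): Dyck-($k$,$m$) has $\sum_{i=0}^{m} k^{i} = \Theta(k^{m})$ reachable stack configurations, so a one-hot, automaton-style encoding needs $\Theta(k^{m})$ hidden units, while binary-encoding the stack contents takes only $\log_2 \Theta(k^{m}) = \Theta(m \log k)$ bits, hence the exponential reduction.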