rCanary: Detecting Memory Leaks Across Semi-automated Memory Management Boundary in Rust
- URL: http://arxiv.org/abs/2308.04787v1
- Date: Wed, 9 Aug 2023 08:26:04 GMT
- Title: rCanary: Detecting Memory Leaks Across Semi-automated Memory Management Boundary in Rust
- Authors: Mohan Cui, Suran Sun, Hui Xu, Yangfan Zhou
- Abstract summary: Rust is a system programming language that guarantees memory safety via compile-time verifications.
It employs a novel ownership-based resource management model to facilitate automated resource deallocation.
We present rCanary, a static, non-intrusive, and fully automated model checker to detect leaks across the semi-automated boundary.
- Score: 4.981203415693332
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rust is an effective system programming language that guarantees memory
safety via compile-time verifications. It employs a novel ownership-based
resource management model to facilitate automated resource deallocation. It is
anticipated that this model will eliminate memory leaks. However, we observed
that user intervention driving semi-automated management is prone to
introducing leaks. In contrast to violating memory-safety guarantees via the
unsafe keyword, leaks that breach this boundary are implicit, with no compiler
alert. In this paper, we present rCanary, a static, non-intrusive, and fully
automated model checker to detect leaks across the semi-automated boundary. It
adopts a precise encoder to abstract data with heap allocation and formalizes a
refined leak-free memory model based on Boolean satisfiability. rCanary is
implemented as an external component of Cargo and can generate constraints via
MIR data flow. We evaluate it using flawed package benchmarks collected from
the pull requests of prominent Rust packages. The results indicate it is
possible to recall all these defects with acceptable false positives. We also
apply our tool to more than 1,200 real-world crates from crates.io and GitHub,
identifying 19 crates with potentially vulnerable leaks in 8.4 seconds per
package.
Related papers
- Automated Repair of AI Code with Large Language Models and Formal Verification [4.9975496263385875]
Next generation of AI systems requires strong safety guarantees.
This report looks at the software implementation of neural networks and related memory safety properties.
We detect these vulnerabilities, and automatically repair them with the help of large language models.
arXiv Detail & Related papers (2024-05-14T11:52:56Z)
- Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike the existing methods of designing a backdoor for the input/output space of diffusion models, in our method, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z)
- Fast Summary-based Whole-program Analysis to Identify Unsafe Memory Accesses in Rust [23.0568924498396]
Rust is one of the most promising systems programming languages to solve the memory safety issues that have plagued low-level software for over forty years.
unsafe Rust code and directly-linked unsafe foreign libraries may not only introduce memory safety violations themselves but also compromise the entire program as they run in the same monolithic address space as the safe Rust.
We have prototyped a whole-program analysis for identifying both unsafe heap allocations and memory accesses to those unsafe heap objects.
arXiv Detail & Related papers (2023-10-16T11:34:21Z)
- Yuga: Automatically Detecting Lifetime Annotation Bugs in the Rust Language [16.56604332678692]
Security vulnerabilities have been reported in Rust projects, often attributed to the use of "unsafe" Rust code.
These vulnerabilities, in part, arise from incorrect lifetime annotations on function signatures.
Existing tools fail to detect these bugs, primarily because such bugs are rare and challenging to detect through dynamic analysis.
We devise a novel static analysis tool, Yuga, to detect potential lifetime annotation bugs.
arXiv Detail & Related papers (2023-10-12T17:05:03Z)
- LeakPair: Proactive Repairing of Memory Leaks in Single Page Web Applications [1.9757735090956159]
LeakPair is a technique to repair memory leaks in single page applications.
We evaluate the technique on more than 20 open-source projects without using explicit leak detection.
arXiv Detail & Related papers (2023-08-16T04:36:41Z)
- A Multiplicative Value Function for Safe and Efficient Reinforcement Learning [131.96501469927733]
We propose a safe model-free RL algorithm with a novel multiplicative value function consisting of a safety critic and a reward critic.
The safety critic predicts the probability of constraint violation and discounts the reward critic that only estimates constraint-free returns.
We evaluate our method in four safety-focused environments, including classical RL benchmarks augmented with safety constraints and robot navigation tasks with images and raw Lidar scans as observations.
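Under one plausible reading of this summary (symbols are mine, not the paper's notation), the multiplicative composition can be sketched as:

```latex
% Safety critic estimates the probability of remaining constraint-free;
% reward critic estimates return assuming no violation occurs.
V(s) \;=\; \underbrace{P_{\text{safe}}(s)}_{\text{safety critic}}
      \cdot \underbrace{V_{\text{reward}}(s)}_{\text{reward critic}}
```

so states likely to violate a constraint have their estimated value discounted toward zero regardless of raw reward.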
arXiv Detail & Related papers (2023-03-07T18:29:15Z)
- Integral Continual Learning Along the Tangent Vector Field of Tasks [112.02761912526734]
We propose a lightweight continual learning method which incorporates information from specialized datasets incrementally.
It maintains a small fixed-size memory buffer, as low as 0.4% of the source datasets, which is updated by simple resampling.
Our method achieves strong performance across various buffer sizes for different datasets.
arXiv Detail & Related papers (2022-11-23T16:49:26Z)
- Unsafe's Betrayal: Abusing Unsafe Rust in Binary Reverse Engineering toward Finding Memory-safety Bugs via Machine Learning [20.68333298047064]
Rust provides memory-safe mechanisms to avoid memory-safety bugs in programming.
Unsafe code that enhances the usability of Rust provides clear spots for finding memory-safety bugs.
We claim that these unsafe spots can still be identifiable in Rust binary code via machine learning.
arXiv Detail & Related papers (2022-10-31T19:32:18Z)
- Recurrent Dynamic Embedding for Video Object Segmentation [54.52527157232795]
We propose a Recurrent Dynamic Embedding (RDE) to build a memory bank of constant size.
We propose an unbiased guidance loss during the training stage, which makes SAM more robust in long videos.
We also design a novel self-correction strategy so that the network can repair the embeddings of masks with different qualities in the memory bank.
arXiv Detail & Related papers (2022-05-08T02:24:43Z)
- Kanerva++: extending The Kanerva Machine with differentiable, locally block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z)
- DMV: Visual Object Tracking via Part-level Dense Memory and Voting-based Retrieval [61.366644088881735]
We propose a novel memory-based tracker via part-level dense memory and voting-based retrieval, called DMV.
We also propose a novel voting mechanism for the memory reading to filter out unreliable information in the memory.
arXiv Detail & Related papers (2020-03-20T10:05:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.