Towards Modified Condition/Decision Coverage of Rust
- URL: http://arxiv.org/abs/2409.08708v1
- Date: Fri, 13 Sep 2024 10:53:32 GMT
- Title: Towards Modified Condition/Decision Coverage of Rust
- Authors: Wanja Zaeske, Pietro Albini, Florian Gilcher, Umut Durak
- Abstract summary: Modified Condition/Decision Coverage (MC/DC) is the test coverage criterion that DO-178C prescribes for the highest software assurance level in aviation.
Some central features of the Rust programming language necessitate further clarification.
This paper informs the implementation of Rust MC/DC tools, paving the road towards Rust in high-assurance applications.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Testing is an essential tool to assure software, especially so in safety-critical applications. To quantify how thoroughly a software item has been tested, a test coverage metric is required. Perhaps the strictest such metric known in safety-critical systems is Modified Condition/Decision Coverage (MC/DC), which DO-178C prescribes for the highest software assurance level in aviation. In the past, ambiguities in the interpretation of MC/DC have already been resolved, i.e. in CAST-10. However, some central features of the Rust programming language necessitate further clarification. This work investigates the aforementioned features, in particular pattern matching, providing a consistent view on how to apply MC/DC to Rust. Hence, this paper informs the implementation of Rust MC/DC tools, paving the road towards Rust in high-assurance applications.
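To make the pattern-matching question concrete, here is a minimal Rust sketch of ours, not taken from the paper; the function names, thresholds, and tests are invented for illustration. It contrasts a plain Boolean decision, for which MC/DC test pairs are well understood, with a match arm whose or-pattern and guard fold several conditions into one place, which is exactly where an MC/DC tool for Rust has to decide what counts as a condition.
```rust
// Hypothetical sketch; names, thresholds, and tests are invented for illustration.

/// A decision with two conditions. MC/DC demands, for each condition, a pair of
/// tests in which only that condition changes and the decision's outcome changes.
fn cleared_for_takeoff(flaps_set: bool, runway_clear: bool) -> bool {
    flaps_set && runway_clear
}

/// Pattern matching hides Boolean structure: the or-pattern and the guard below
/// fold several conditions into a single match arm, which is where a Rust MC/DC
/// tool must decide what counts as a condition to be covered independently.
fn phase(speed_knots: u32, gear_down: bool) -> &'static str {
    match speed_knots {
        0 => "parked",
        1..=50 | 51..=140 if gear_down => "taxi or approach",
        _ => "cruise",
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn mcdc_pairs_for_the_boolean_decision() {
        // `flaps_set` shown to independently affect the outcome:
        assert!(cleared_for_takeoff(true, true));
        assert!(!cleared_for_takeoff(false, true));
        // `runway_clear` shown to independently affect the outcome:
        assert!(!cleared_for_takeoff(true, false));
    }

    #[test]
    fn exercising_the_or_pattern_and_the_guard() {
        assert_eq!(phase(30, true), "taxi or approach");  // left range, guard true
        assert_eq!(phase(120, true), "taxi or approach"); // right range, guard true
        assert_eq!(phase(30, false), "cruise");           // guard flips the arm
        assert_eq!(phase(300, true), "cruise");           // neither range matches
    }
}
```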
Related papers
- RustMC: Extending the GenMC stateless model checker to Rust [0.2799243500184682]
RustMC is a stateless model checker that enables verification of concurrent Rust programs.
This tool paper presents the key challenges we addressed to extend GenMC.
arXiv Detail & Related papers (2025-02-10T09:33:24Z)
- SVTRv2: CTC Beats Encoder-Decoder Models in Scene Text Recognition [77.28814034644287]
We propose SVTRv2, a CTC-based model that beats leading encoder-decoder text recognizers (EDTRs) in both accuracy and inference speed.
SVTRv2 introduces novel upgrades to handle text irregularity and utilize linguistic context.
We evaluate SVTRv2 in both standard and recent challenging benchmarks.
arXiv Detail & Related papers (2024-11-24T14:21:35Z)
- Automated Proof Generation for Rust Code via Self-Evolution [69.25795662658356]
We introduce SAFE, a novel framework that overcomes the lack of human-written proofs to enable automated proof generation for Rust code.
We demonstrate superior efficiency and precision compared to GPT-4o.
This advancement leads to a significant improvement in performance, achieving a 70.50% accuracy rate in a benchmark crafted by human experts.
arXiv Detail & Related papers (2024-10-21T08:15:45Z)
- NExT: Teaching Large Language Models to Reason about Code Execution [50.93581376646064]
Large language models (LLMs) of code are typically trained on the surface textual form of programs.
We propose NExT, a method to teach LLMs to inspect the execution traces of programs and reason about their run-time behavior.
arXiv Detail & Related papers (2024-04-23T01:46:32Z)
- Towards a Transpiler for C/C++ to Safer Rust [0.10993800728351737]
Rust is a programming language developed by Mozilla that focuses on performance and safety.
How to convert an existing C++ code base to Rust is also gaining greater attention.
arXiv Detail & Related papers (2024-01-16T10:35:59Z)
- Fake Alignment: Are LLMs Really Aligned Well? [91.26543768665778]
This study investigates the substantial discrepancy in performance between multiple-choice questions and open-ended questions.
Inspired by research on jailbreak attack patterns, we argue this is caused by mismatched generalization.
arXiv Detail & Related papers (2023-11-10T08:01:23Z)
- Fast Summary-based Whole-program Analysis to Identify Unsafe Memory Accesses in Rust [23.0568924498396]
Rust is one of the most promising systems programming languages to solve the memory safety issues that have plagued low-level software for over forty years.
However, unsafe Rust code and directly linked unsafe foreign libraries may not only introduce memory safety violations themselves but also compromise the entire program, since they run in the same monolithic address space as the safe Rust code.
We have prototyped a whole-program analysis that identifies both unsafe heap allocations and the memory accesses to those unsafe heap objects; a minimal sketch of the kind of allocation and access sites such an analysis classifies follows this entry's link.
arXiv Detail & Related papers (2023-10-16T11:34:21Z)
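As a rough illustration of the two kinds of program points such an analysis distinguishes, here is a hand-written sketch that is not taken from the paper or its prototype; the buffer sizes and values are invented.
```rust
use std::alloc::{alloc, dealloc, Layout};

// Hypothetical sketch of the two kinds of sites such an analysis classifies:
// (1) heap allocations reachable only from safe code and (2) heap objects that
// are allocated and accessed through unsafe code. Everything here is invented.
fn main() {
    // Safe heap allocation: accesses to it stay within checked, safe Rust.
    let safe_buf = vec![1u8, 2, 3];
    println!("safe access: {}", safe_buf[0]);

    // Unsafe heap allocation plus accesses to it: the candidates such an
    // analysis would flag for closer inspection.
    let layout = Layout::array::<u8>(3).expect("valid layout");
    unsafe {
        let raw = alloc(layout);
        assert!(!raw.is_null(), "allocation failed");
        *raw = 42;                           // unsafe memory access (write)
        println!("unsafe access: {}", *raw); // unsafe memory access (read)
        dealloc(raw, layout);
    }
}
```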
- Fixing Rust Compilation Errors using LLMs [2.1781086368581932]
The Rust programming language has established itself as a viable choice for low-level systems programming over traditional, unsafe alternatives like C and C++.
This paper presents a tool called RustAssistant that leverages the emergent capabilities of Large Language Models (LLMs) to automatically suggest fixes for Rust compilation errors.
RustAssistant is able to achieve an impressive peak accuracy of roughly 74% on real-world compilation errors in popular open-source Rust repositories.
arXiv Detail & Related papers (2023-08-09T18:30:27Z)
- Is unsafe an Achilles' Heel? A Comprehensive Study of Safety Requirements in Unsafe Rust Programming [4.981203415693332]
Rust is an emerging, strongly-typed programming language focusing on efficiency and memory safety.
The current documentation of unsafe APIs in the standard library exhibits variations, including inconsistencies and insufficiencies.
To enhance Rust security, we suggest that unsafe API documentation list systematic descriptions of the safety requirements users must uphold; a hand-written sketch of such a listing follows this entry's link.
arXiv Detail & Related papers (2023-08-09T08:16:10Z)
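What a systematic listing of safety requirements could look like is sketched below; the function, its requirements, and the caller are invented for illustration and are not taken from the paper or the standard library.
```rust
/// Copies `len` bytes from `src` into `dst`.
///
/// # Safety
///
/// A systematic list of requirements the caller must uphold, in the style this
/// entry argues unsafe API documentation should follow:
///
/// * `src` must be valid for reads of `len` bytes.
/// * `dst` must be valid for writes of `len` bytes.
/// * The two regions must not overlap.
unsafe fn copy_bytes(src: *const u8, dst: *mut u8, len: usize) {
    // SAFETY: the caller guarantees the requirements listed above, which are
    // exactly the preconditions of `std::ptr::copy_nonoverlapping`.
    unsafe { std::ptr::copy_nonoverlapping(src, dst, len) };
}

fn main() {
    let src = [1u8, 2, 3, 4];
    let mut dst = [0u8; 4];
    // SAFETY: both arrays live for the duration of the call, are 4 bytes long,
    // and do not overlap, satisfying every documented requirement.
    unsafe { copy_bytes(src.as_ptr(), dst.as_mut_ptr(), src.len()) };
    assert_eq!(dst, src);
}
```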
- Safe Deep Reinforcement Learning by Verifying Task-Level Properties [84.64203221849648]
Cost functions are commonly employed in Safe Deep Reinforcement Learning (DRL).
The cost is typically encoded as an indicator function due to the difficulty of quantifying the risk of policy decisions in the state space.
In this paper, we investigate an alternative approach that uses domain knowledge to quantify the risk in the proximity of such states by defining a violation metric.
arXiv Detail & Related papers (2023-02-20T15:24:06Z)
- Certified Interpretability Robustness for Class Activation Mapping [77.58769591550225]
We present CORGI, short for Certifiably prOvable Robustness Guarantees for Interpretability mapping.
CORGI is an algorithm that takes in an input image and gives a certifiable lower bound for the robustness of its CAM interpretability map.
We show the effectiveness of CORGI via a case study on traffic sign data, certifying lower bounds on the minimum adversarial perturbation.
arXiv Detail & Related papers (2023-01-26T18:58:11Z)