Beyond Tradition: Evaluating Agile feasibility in DO-178C for Aerospace
Software Development
- URL: http://arxiv.org/abs/2311.04344v1
- Date: Tue, 7 Nov 2023 20:58:02 GMT
- Title: Beyond Tradition: Evaluating Agile feasibility in DO-178C for Aerospace
Software Development
- Authors: J. Eduardo Ferreira Ribeiro, João Gabriel Silva, Ademar Aguiar
- Abstract summary: Domain-specific standards and guidelines play a crucial role in regulating safety-critical systems.
This paper analyses the DO-178C document within the context of software development for safety-critical aerospace systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain-specific standards and guidelines play a crucial role in regulating
safety-critical systems, with one notable example being the DO-178C document
for the aerospace industry. This document provides guidelines for organisations
seeking to ensure the safety and certification of their software systems. This
paper analyses the DO-178C document within the context of software development
for safety-critical aerospace systems, focusing on Agile software development
and aiming to assess its feasibility. Rather than restricting development to
specific methods, DO-178C offers indispensable support that upholds confidence
in safety, aligning seamlessly with the objectives of the aerospace industry.
Our analysis reveals that DO-178C contains no limitations or restrictions that
inhibit the adoption of Agile; instead, it provides guidelines and objectives
for producing suitable evidence while allowing various working methods,
including Agile, contrary to the prevailing industry opinion that the
traditional waterfall method is mandatory. Additionally, we emphasise that our
explanation of the guidelines is explicitly tailored to software professionals
using Agile methods, giving it a much more specific focus than publications
that provide only a generic overview of the standard.
Related papers
- Advancing Embodied Agent Security: From Safety Benchmarks to Input Moderation [52.83870601473094]
Embodied agents exhibit immense potential across a multitude of domains.
Existing research predominantly concentrates on the security of general large language models.
This paper introduces a novel input moderation framework, meticulously designed to safeguard embodied agents.
arXiv Detail & Related papers (2025-04-22T08:34:35Z) - From Waterfallish Aerospace Certification onto Agile Certifiable Iterations [0.0]
We present a strategy and tools that support the generation of continuous documentation complying with DO-178C requirements.
By iteratively creating the DO-178C documentation associated with each software component, we open the way to truly continuous certifiable iterations.
arXiv Detail & Related papers (2025-03-06T09:49:57Z) - AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability.
The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z) - Analysis of Functional Insufficiencies and Triggering Conditions to Improve the SOTIF of an MPC-based Trajectory Planner [2.555222031881788]
Safety of the intended functionality (SOTIF) has moved into the centre of attention; its standard, ISO 21448, was only released in 2022.
This paper aims to make two main contributions: (1) an analysis of the SOTIF for a generic MPC-based trajectory planner and (2) an interpretation and concrete application of the generic procedures described in ISO21448.
arXiv Detail & Related papers (2024-07-31T12:52:13Z) - AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies [80.90138009539004]
AIR-Bench 2024 is the first AI safety benchmark aligned with emerging government regulations and company policies.
It decomposes 8 government regulations and 16 company policies into a four-tiered safety taxonomy with granular risk categories in the lowest tier.
We evaluate leading language models on AIR-Bench 2024, uncovering insights into their alignment with specified safety concerns.
arXiv Detail & Related papers (2024-07-11T21:16:48Z) - INDICT: Code Generation with Internal Dialogues of Critiques for Both Security and Helpfulness [110.6921470281479]
We introduce INDICT: a new framework that empowers large language models with Internal Dialogues of Critiques for both safety and helpfulness guidance.
The internal dialogue is a dual cooperative system between a safety-driven critic and a helpfulness-driven critic.
We observed that our approach can provide advanced critiques covering both safety and helpfulness, significantly improving the quality of the generated code.
arXiv Detail & Related papers (2024-06-23T15:55:07Z) - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z) - What Can Self-Admitted Technical Debt Tell Us About Security? A
Mixed-Methods Study [6.286506087629511]
Self-Admitted Technical Debt (SATD) can be deemed a dreadful source of information on potentially exploitable vulnerabilities and security flaws.
This work investigates the security implications of SATD from a technical and developer-centred perspective.
arXiv Detail & Related papers (2024-01-23T13:48:49Z) - No Trust without regulation! [0.0]
The explosion in the performance of Machine Learning (ML) and the potential of its applications are encouraging its use in industrial systems.
However, the issue of safety, and its corollary of regulation and standards, is still too often left aside.
The European Commission has laid the foundations for moving forward and building solid approaches to the integration of AI-based applications that are safe, trustworthy and respect European ethical values.
arXiv Detail & Related papers (2023-09-27T09:08:41Z) - Validation-Driven Development [54.50263643323]
This paper introduces a validation-driven development (VDD) process that prioritizes validating requirements in formal development.
The effectiveness of the VDD process is demonstrated through a case study in the aviation industry.
arXiv Detail & Related papers (2023-08-11T09:15:26Z) - Building a Credible Case for Safety: Waymo's Approach for the
Determination of Absence of Unreasonable Risk [2.2386635730984117]
A safety case for fully autonomous operations is a formal way to explain how a company determines that an AV system is safe.
It involves an explanation of the system, the methodologies used to develop it, the metrics used to validate it and the actual results of validation tests.
This paper helps enable such alignment by providing foundational thinking into how a system is determined to be ready for deployment.
arXiv Detail & Related papers (2023-06-02T21:05:39Z) - Evaluating Model-free Reinforcement Learning toward Safety-critical
Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z) - Correct-by-Construction Runtime Enforcement in AI -- A Survey [3.509295509987626]
Enforcement refers to the theories, techniques, and tools for enforcing correct behavior with respect to a formal specification of systems at runtime.
We discuss how safety is traditionally handled in the field of AI and how more formal guarantees on the safety of a self-learning agent can be given by integrating a runtime enforcer.
arXiv Detail & Related papers (2022-08-30T17:45:38Z) - Empowered and Embedded: Ethics and Agile Processes [60.63670249088117]
We argue that ethical considerations need to be embedded into the (agile) software development process.
We put emphasis on the possibility to implement ethical deliberations in already existing and well established agile software development processes.
arXiv Detail & Related papers (2021-07-15T11:14:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.