Implementation of the Collision Avoidance System for DO-178C Compliance
- URL: http://arxiv.org/abs/2509.16844v1
- Date: Sat, 20 Sep 2025 23:52:51 GMT
- Title: Implementation of the Collision Avoidance System for DO-178C Compliance
- Authors: Rim Zrelli, Henrique Amaral Misson, Sorelle Kamkuimo, Maroua Ben Attia, Abdo Shabah, Felipe Gohring de Magalhaes, Gabriela Nicolescu
- Abstract summary: The CAS is designed to autonomously detect, evaluate, and avoid potential collision threats in real-time. The report documents each phase of the software lifecycle: requirements specification and validation, architectural and detailed design, coding, verification, and traceability. Although the integration phase was not fully implemented, the approach proved effective in addressing certification challenges for UAV safety-critical systems.
- Score: 0.02345344155381704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This technical report presents the detailed implementation of a Collision Avoidance System (CAS) for Unmanned Aerial Vehicles (UAVs), developed as a case study to demonstrate a rigorous methodology for achieving DO-178C compliance in safety-critical software. The CAS is based on functional requirements inspired by NASA's Access 5 project and is designed to autonomously detect, evaluate, and avoid potential collision threats in real-time, supporting the safe integration of UAVs into civil airspace. The implementation environment combines formal methods, model-based development, and automated verification tools, including Alloy, SPIN, Simulink Embedded Coder, and the LDRA tool suite. The report documents each phase of the software lifecycle: requirements specification and validation, architectural and detailed design, coding, verification, and traceability, with a strong focus on compliance with DO-178C Design Assurance Level B objectives. Results demonstrate that formal modelling and automated toolchains enabled early detection and correction of specification defects, robust traceability, and strong evidence of verification and validation across all development stages. Static and dynamic analyses confirmed code quality and coverage, while formal verification methods provided mathematical assurance of correctness for critical components. Although the integration phase was not fully implemented, the approach proved effective in addressing certification challenges for UAV safety-critical systems.
- Keywords: Collision Avoidance System (CAS), Unmanned Aerial Vehicles (UAVs), DO-178C compliance, Safety-critical software, Formal methods, Model-based development, Alloy, SPIN model checker, Simulink Embedded Coder, LDRA tool suite, Software verification and validation, Traceability, Certification.
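The detect-evaluate-avoid cycle described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation (which is generated from Simulink models and verified with Alloy, SPIN, and LDRA); the `Track` type, the 150 m protected radius, and the 30 s look-ahead horizon are illustrative assumptions, using a standard closest-point-of-approach calculation under straight-line relative motion.

```python
import math
from dataclasses import dataclass


@dataclass
class Track:
    """A detected intruder: relative position (m) and relative velocity (m/s).
    Hypothetical type for illustration; not from the paper."""
    x: float
    y: float
    vx: float
    vy: float


def time_to_closest_approach(t: Track) -> float:
    """Time (s) at which the intruder is nearest, assuming straight-line motion."""
    v2 = t.vx * t.vx + t.vy * t.vy
    if v2 == 0.0:
        return 0.0  # no relative motion: closest approach is now
    return max(0.0, -(t.x * t.vx + t.y * t.vy) / v2)


def miss_distance(t: Track) -> float:
    """Predicted minimum separation (m) at the closest point of approach."""
    tau = time_to_closest_approach(t)
    return math.hypot(t.x + t.vx * tau, t.y + t.vy * tau)


def evaluate(t: Track, protected_radius: float = 150.0,
             horizon: float = 30.0) -> str:
    """Classify a track: 'avoid' if it penetrates the protected zone
    within the look-ahead horizon, otherwise 'monitor'."""
    tau = time_to_closest_approach(t)
    if tau <= horizon and miss_distance(t) < protected_radius:
        return "avoid"
    return "monitor"
```

For example, an intruder 1000 m ahead closing at 50 m/s reaches closest approach in 20 s with zero miss distance and is classified "avoid", while the same intruder moving away is merely monitored.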
Related papers
- Towards Verifiably Safe Tool Use for LLM Agents [53.55621104327779]
Large language model (LLM)-based AI agents extend capabilities by enabling access to tools such as data sources, APIs, search engines, code sandboxes, and even other agents. LLMs may invoke unintended tool interactions and introduce risks, such as leaking sensitive data or overwriting critical records. Current approaches to mitigate these risks, such as model-based safeguards, enhance agents' reliability but cannot guarantee system safety.
arXiv Detail & Related papers (2026-01-12T21:31:38Z) - LLMs-Powered Real-Time Fault Injection: An Approach Toward Intelligent Fault Test Cases Generation [1.9435397960631864]
A novel Large Language Models (LLMs)-assisted fault test cases (TCs) generation approach is proposed in this paper. The proposed approach exhibits high performance in FSRs classification and fault TCs generation, with F1-scores of 88% and 97.5%, respectively.
arXiv Detail & Related papers (2025-11-24T13:57:31Z) - AI Bill of Materials and Beyond: Systematizing Security Assurance through the AI Risk Scanning (AIRS) Framework [31.261980405052938]
Assurance for artificial intelligence (AI) systems remains fragmented across software supply-chain security, adversarial machine learning, and governance documentation. This paper introduces the AI Risk Scanning (AIRS) Framework, a threat-model-based, evidence-generating framework designed to operationalize AI assurance.
arXiv Detail & Related papers (2025-11-16T16:10:38Z) - "Show Me You Comply... Without Showing Me Anything": Zero-Knowledge Software Auditing for AI-Enabled Systems [2.2981698355892686]
This paper introduces ZKMLOps, a novel MLOps verification framework. It operationalizes Zero-Knowledge Proofs (ZKPs), cryptographic protocols allowing a prover to convince a verifier that a statement is true. We evaluate the framework's practicality through a study of regulatory compliance in financial risk auditing.
arXiv Detail & Related papers (2025-10-30T15:03:32Z) - Validating Solidity Code Defects using Symbolic and Concrete Execution powered by Large Language Models [0.0]
This paper introduces a novel detection pipeline that integrates custom Slither-based detectors, Large Language Models (LLMs), Kontrol, and Forge. Our approach is designed to reliably detect defects and generate proofs.
arXiv Detail & Related papers (2025-09-16T12:46:11Z) - Training Language Models to Generate Quality Code with Program Analysis Feedback [66.0854002147103]
Code generation with large language models (LLMs) is increasingly adopted in production but fails to ensure code quality. We propose REAL, a reinforcement learning framework that incentivizes LLMs to generate production-quality code.
arXiv Detail & Related papers (2025-05-28T17:57:47Z) - An Automated Blackbox Noncompliance Checker for QUIC Server Implementations [2.9248916859490173]
QUICtester is an automated approach for uncovering non-compliant behaviors in ratified QUIC protocol implementations (RFC 9000). We used QUICtester to analyze 186 learned models from 19 QUIC implementations under five security settings and discovered 55 implementation errors.
arXiv Detail & Related papers (2025-05-19T04:28:49Z) - Automated Proof Generation for Rust Code via Self-Evolution [69.25795662658356]
We introduce SAFE, a framework that overcomes the lack of human-written snippets to enable automated proof generation for Rust code. SAFE re-purposes the large number of synthesized incorrect proofs to train the self-debugging capability of the fine-tuned models. We achieve a 52.52% accuracy rate on a benchmark crafted by human experts, a significant leap over GPT-4o's performance of 14.39%.
arXiv Detail & Related papers (2024-10-21T08:15:45Z) - Data-Driven Distributionally Robust Safety Verification Using Barrier Certificates and Conditional Mean Embeddings [0.24578723416255752]
In pursuit of scalable formal verification algorithms that do not shift the problem onto unrealistic assumptions, we employ the concept of barrier certificates.
We show how to solve the resulting program efficiently using sum-of-squares optimization and a Gaussian process envelope.
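The barrier-certificate idea summarized above can be sketched in miniature. The toy dynamics `f(x) = 0.9x`, the candidate barrier `B(x) = x^2 - 1`, and the initial/unsafe sets are illustrative assumptions, not from the paper; a sampling loop merely falsifies the three certificate conditions, where a real verifier would discharge them formally, e.g. via sum-of-squares optimization.

```python
import random


def f(x: float) -> float:
    """Toy stable dynamics x_{k+1} = 0.9 * x_k (illustrative assumption)."""
    return 0.9 * x


def barrier(x: float) -> float:
    """Candidate barrier certificate B(x) = x^2 - 1 (illustrative assumption)."""
    return x * x - 1.0


def check_certificate(n_samples: int = 10_000, seed: int = 0) -> bool:
    """Sampling-based falsification of the three barrier conditions:
    (1) B(x) <= 0 on the initial set |x| <= 0.5,
    (2) B(x) > 0 on the unsafe set |x| >= 2,
    (3) B is non-increasing along trajectories.
    Returns False as soon as any sampled point violates a condition."""
    rng = random.Random(seed)
    for _ in range(n_samples):
        x0 = rng.uniform(-0.5, 0.5)                       # initial set
        if barrier(x0) > 0.0:
            return False
        xu = rng.choice([-1.0, 1.0]) * rng.uniform(2.0, 5.0)  # unsafe set
        if barrier(xu) <= 0.0:
            return False
        x = rng.uniform(-5.0, 5.0)                        # whole domain
        if barrier(f(x)) > barrier(x) + 1e-9:             # decrease condition
            return False
    return True
```

Because no trajectory can cross from the sublevel set B <= 0 into the region B > 0 while B is non-increasing, passing all three conditions separates reachable states from the unsafe set; the sampling here only gathers evidence, whereas sum-of-squares optimization proves the conditions for all states at once.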
arXiv Detail & Related papers (2024-03-15T17:32:02Z) - Simulation-based Safety Assurance for an AVP System incorporating Learning-Enabled Components [0.6526824510982802]
Testing, verification, and validation of AD/ADAS safety-critical applications remain among the main challenges.
We explain the simulation-based development platform that is designed to verify and validate safety-critical learning-enabled systems.
arXiv Detail & Related papers (2023-09-28T09:00:31Z) - A General Framework for Verification and Control of Dynamical Models via Certificate Synthesis [54.959571890098786]
We provide a framework to encode system specifications and define corresponding certificates.
We present an automated approach to formally synthesise controllers and certificates.
Our approach contributes to the broad field of safe learning for control, exploiting the flexibility of neural networks.
arXiv Detail & Related papers (2023-09-12T09:37:26Z) - Joint Differentiable Optimization and Verification for Certified Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z) - Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows efficient evaluation of safety properties for decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z) - A Survey of Algorithms for Black-Box Safety Validation of Cyber-Physical Systems [30.638615396429536]
Motivated by the prevalence of safety-critical artificial intelligence, this work provides a survey of state-of-the-art safety validation techniques for CPS.
We present and discuss algorithms in the domains of optimization, path planning, reinforcement learning, and importance sampling.
A brief overview of safety-critical applications is given, including autonomous vehicles and aircraft collision avoidance systems.
arXiv Detail & Related papers (2020-05-06T17:31:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.