Strategy to Increase the Safety of a DNN-based Perception for HAD Systems
- URL: http://arxiv.org/abs/2002.08935v1
- Date: Thu, 20 Feb 2020 18:32:53 GMT
- Title: Strategy to Increase the Safety of a DNN-based Perception for HAD Systems
- Authors: Timo Sämann, Peter Schlicht, Fabian Hüger
- Abstract summary: Safety is one of the most important development goals for automated driving systems.
For these, large parts of the traditional safety processes and requirements are not fully applicable or sufficient.
This paper presents a framework for the description and mitigation of DNN insufficiencies and the derivation of relevant safety mechanisms.
- Score: 6.140206215951371
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Safety is one of the most important development goals for highly automated
driving (HAD) systems. This applies in particular to the perception function
driven by Deep Neural Networks (DNNs). For these, large parts of the
traditional safety processes and requirements are not fully applicable or
sufficient. The aim of this paper is to present a framework for the description
and mitigation of DNN insufficiencies and the derivation of relevant safety
mechanisms to increase the safety of DNNs. To assess the effectiveness of these
safety mechanisms, we present a categorization scheme for evaluation metrics.
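The abstract stays at the level of processes and categories rather than code. Purely as an illustration of what a concrete runtime safety mechanism and an evaluation metric for it could look like, the sketch below flags frames on which a hypothetical perception DNN is unusually uncertain (mean softmax entropy above a threshold) and then scores the monitor by how often it fires on genuinely failed frames; all names, thresholds, and data here are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy_monitor(logits, threshold=1.0):
    """Hypothetical runtime safety mechanism: flag a frame whose mean
    per-pixel softmax entropy exceeds a threshold, i.e. where the DNN
    is unusually uncertain about its own output."""
    probs = softmax(logits)                               # (H, W, C)
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=-1)   # (H, W)
    return bool(ent.mean() > threshold)

def monitor_metrics(flags, ground_truth_failures):
    """Evaluate the safety mechanism itself: how often it raises on
    frames where perception truly failed (true positive rate) versus
    on correct frames (false positive rate)."""
    flags = np.asarray(flags, dtype=bool)
    fail = np.asarray(ground_truth_failures, dtype=bool)
    tpr = flags[fail].mean() if fail.any() else float("nan")
    fpr = flags[~fail].mean() if (~fail).any() else float("nan")
    return {"true_positive_rate": tpr, "false_positive_rate": fpr}

# Toy usage: random logits stand in for a real perception DNN; small
# logit magnitudes mimic frames on which the network is uncertain.
rng = np.random.default_rng(0)
frames = [rng.normal(size=(4, 4, 10)) * s for s in (5.0, 5.0, 0.3, 0.3)]
flags = [entropy_monitor(f) for f in frames]
print(monitor_metrics(flags, ground_truth_failures=[False, False, True, True]))
```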
Related papers
- Unifying Qualitative and Quantitative Safety Verification of DNN-Controlled Systems [18.049286149364075]
The rapid advance of deep reinforcement learning techniques enables the oversight of safety-critical systems through the use of Deep Neural Networks (DNNs).
Most existing verification approaches are qualitative, predominantly employing reachability analysis.
We propose a novel framework for unifying both qualitative and quantitative safety verification problems.
arXiv Detail & Related papers (2024-04-02T09:31:51Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
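(A minimal bound-propagation sketch illustrating the core primitive behind such verifiers appears after this list.)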
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Deep Learning Safety Concerns in Automated Driving Perception [43.026485214492105]
Recent advances in the field of deep learning and the impressive performance of deep neural networks (DNNs) for perception have resulted in an increased demand for their use in automated driving (AD) systems.
This paper introduces an additional categorization of the safety concerns to support a better understanding and to enable cross-functional teams to address them jointly.
arXiv Detail & Related papers (2023-09-07T15:25:47Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze their impact on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Assumption Generation for the Verification of Learning-Enabled Autonomous Systems [7.580719272198119]
We present an assume-guarantee style compositional approach for the formal verification of system-level safety properties.
We illustrate our approach on a case study taken from the autonomous airplanes domain.
arXiv Detail & Related papers (2023-05-27T23:30:27Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
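(A minimal sketch of a plain CBF safety filter, without the model-uncertainty extensions, appears after this list.)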
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- A Novel Online Incremental Learning Intrusion Prevention System [2.5234156040689237]
This paper proposes a novel Network Intrusion Prevention System that utilises a Self-Organizing Incremental Neural Network together with a Support Vector Machine.
Due to its structure, the proposed system provides a security solution that does not rely on signatures or rules and is capable of mitigating known and unknown attacks in real time with high accuracy.
arXiv Detail & Related papers (2021-09-20T13:30:11Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains results comparable to formal verifiers on standard benchmarks.
Our approach allows safety properties of decision-making models to be evaluated efficiently in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
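The bound-propagation and interval-analysis entries above rest on the same basic primitive: pushing an input region through the network to obtain sound bounds on its outputs. Below is a minimal, self-contained sketch of that primitive for a small feedforward ReLU network. It is not the implementation of any of the listed papers; the example weights, input box, and safety property are invented for illustration.

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b. Positive weights
    take the lower/upper input bound respectively, negative weights swap
    them, which yields sound (if loose) output bounds."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_forward(lo, hi, layers):
    """Interval bound propagation through (W, b) layers with ReLU
    activations between them (no ReLU after the last layer)."""
    for i, (W, b) in enumerate(layers):
        lo, hi = ibp_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU is monotone, so apply it to both bounds
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Made-up 2-2-2 network; try to certify "output 0 stays below output 1
# for every input in the box [-0.1, 0.1]^2".
layers = [
    (np.array([[1.0, -0.5], [0.3, 0.8]]), np.array([0.1, -0.2])),
    (np.array([[0.7, 0.2], [-0.4, 1.1]]), np.array([0.0, 0.3])),
]
lo, hi = ibp_forward(np.array([-0.1, -0.1]), np.array([0.1, 0.1]), layers)
print("output bounds:", lo, hi)
print("property certified:", bool(hi[0] < lo[1]))  # sound, but may be inconclusive
```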
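The control-barrier-function entry, in turn, builds on the standard CBF safety filter: the nominal command is minimally corrected so that a barrier condition stays satisfied. For a scalar single integrator the underlying quadratic program has a closed-form solution, so the sketch below needs no solver; the dynamics, barrier, and gain are illustrative assumptions, and the model-uncertainty-aware, event-triggered parts of that paper are not reproduced here.

```python
def cbf_filter(u_nom, x, alpha=1.0):
    """Closed-form CBF safety filter for the scalar single integrator
    x' = u with barrier h(x) = 1 - x^2 (safe set |x| <= 1).
    Solves min_u (u - u_nom)^2 s.t. dh/dx * u >= -alpha * h(x), i.e. the
    smallest correction of the nominal command that keeps the CBF
    condition satisfied."""
    h = 1.0 - x ** 2
    a = -2.0 * x                    # dh/dx
    b = -alpha * h                  # constraint: a * u >= b
    if a == 0.0 or a * u_nom >= b:  # constraint trivially met or inactive
        return u_nom
    return b / a                    # project onto the constraint boundary

# Toy closed-loop run: a nominal controller pushes hard to the right; the
# filtered command keeps the state inside |x| <= 1 for this step size.
x, dt = 0.0, 0.05
for _ in range(100):
    u = cbf_filter(u_nom=2.0, x=x)
    x = x + dt * u                  # Euler step of x' = u
print("final state:", round(x, 3), "| inside safe set:", abs(x) <= 1.0)
```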
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.