Strategy to Increase the Safety of a DNN-based Perception for HAD Systems
- URL: http://arxiv.org/abs/2002.08935v1
- Date: Thu, 20 Feb 2020 18:32:53 GMT
- Title: Strategy to Increase the Safety of a DNN-based Perception for HAD Systems
- Authors: Timo Sämann, Peter Schlicht, Fabian Hüger
- Abstract summary: Safety is one of the most important development goals for automated driving systems.
For these, large parts of the traditional safety processes and requirements are not fully applicable or sufficient.
This paper presents a framework for the description and mitigation of DNN insufficiencies and the derivation of relevant safety mechanisms.
- Score: 6.140206215951371
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Safety is one of the most important development goals for highly automated
driving (HAD) systems. This applies in particular to the perception function
driven by Deep Neural Networks (DNNs). For these, large parts of the
traditional safety processes and requirements are not fully applicable or
sufficient. The aim of this paper is to present a framework for the description
and mitigation of DNN insufficiencies and the derivation of relevant safety
mechanisms to increase the safety of DNNs. To assess the effectiveness of these
safety mechanisms, we present a categorization scheme for evaluation metrics.
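The framework is described at the process level, but the bookkeeping it implies can be pictured as a small data structure that links each DNN insufficiency to candidate safety mechanisms and to the category of metric used to judge their effectiveness. The sketch below is a minimal illustration of that idea; all class names, metric categories, and the example entry are assumptions and are not taken from the paper.

```python
# Hypothetical sketch: link each known DNN insufficiency to candidate safety
# mechanisms and to the category of metric used to evaluate them.
# All names, categories, and the example entry are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum


class MetricCategory(Enum):
    """Coarse metric categories an evaluation scheme might distinguish (assumed)."""
    MODEL_LEVEL = "offline model metric (e.g. per-class IoU)"
    INPUT_LEVEL = "runtime input check (e.g. out-of-distribution score)"
    SYSTEM_LEVEL = "closed-loop system metric (e.g. intervention rate)"


@dataclass
class SafetyMechanism:
    name: str
    metric_category: MetricCategory


@dataclass
class Insufficiency:
    description: str
    mechanisms: list[SafetyMechanism] = field(default_factory=list)


# Example entry: degraded detection performance in adverse weather.
entry = Insufficiency(
    description="reduced pedestrian detection performance in heavy rain",
    mechanisms=[
        SafetyMechanism("out-of-distribution detector", MetricCategory.INPUT_LEVEL),
        SafetyMechanism("targeted data augmentation", MetricCategory.MODEL_LEVEL),
    ],
)

for m in entry.mechanisms:
    print(f"{entry.description} -> {m.name} [{m.metric_category.value}]")
```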
Related papers
- Dynamic safety cases for frontier AI [0.7538606213726908]
This paper proposes a Dynamic Safety Case Management System (DSCMS) to support both the initial creation of a safety case and its systematic, semi-automated revision over time.
We demonstrate this approach on a safety case template for offensive cyber capabilities and suggest ways it can be integrated into governance structures for safety-critical decision-making.
arXiv Detail & Related papers (2024-12-23T14:43:41Z)
- SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach [58.93030774141753]
Multimodal foundation models (MFMs) represent a significant advancement in artificial intelligence.
This paper conceptualizes cybersafety and cybersecurity in the context of multimodal learning.
We present a comprehensive Systematization of Knowledge (SoK) to unify these concepts in MFMs, identifying key threats.
arXiv Detail & Related papers (2024-11-17T23:06:20Z)
- SafetyAnalyst: Interpretable, transparent, and steerable safety moderation for AI behavior [56.10557932893919]
We present SafetyAnalyst, a novel AI safety moderation framework.
Given an AI behavior, SafetyAnalyst uses chain-of-thought reasoning to analyze its potential consequences.
It aggregates all harmful and beneficial effects into a harmfulness score using fully interpretable weight parameters.
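The aggregation mentioned above amounts to a weighted sum over effect scores. A minimal sketch follows, assuming hypothetical effect categories, weights, and scores; none of the names or values come from the SafetyAnalyst paper.

```python
# Hypothetical illustration of aggregating harmful and beneficial effects into
# a single harmfulness score with interpretable, human-readable weights.
# Effect categories, weights, and scores are assumptions for illustration only.
harmful_effects = {"physical_harm": 0.8, "privacy_violation": 0.3}
beneficial_effects = {"user_assistance": 0.6}

# Interpretable weights: each category's contribution is visible and editable.
weights = {"physical_harm": 2.0, "privacy_violation": 1.0, "user_assistance": 0.5}

harmfulness = sum(weights[k] * v for k, v in harmful_effects.items()) \
            - sum(weights[k] * v for k, v in beneficial_effects.items())
print(f"harmfulness score: {harmfulness:.2f}")  # 2.0*0.8 + 1.0*0.3 - 0.5*0.6 = 1.60
```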
arXiv Detail & Related papers (2024-10-22T03:38:37Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
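Bound propagation, one of the techniques referenced above, pushes interval bounds on the input through the network, yielding output intervals that provably contain every output the network can produce on that input set. A minimal single-layer sketch with made-up weights and bounds (not the tool or networks from the paper):

```python
import numpy as np

# Interval bound propagation through one affine layer followed by ReLU.
# Weights, bias, and input bounds are made up for illustration only.
W = np.array([[1.0, -2.0], [0.5, 0.3]])
b = np.array([0.5, 0.2])
x_lo, x_hi = np.array([-0.1, 0.2]), np.array([0.1, 0.4])

# Split W into positive and negative parts so the bounds stay sound.
W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
y_lo = W_pos @ x_lo + W_neg @ x_hi + b
y_hi = W_pos @ x_hi + W_neg @ x_lo + b

# ReLU is monotone, so the bounds pass through elementwise.
y_lo, y_hi = np.maximum(y_lo, 0), np.maximum(y_hi, 0)
print("output bounds:", list(zip(y_lo, y_hi)))
# A safety property stated over these bounds (e.g. y[0] never exceeds 1) is
# then provably satisfied for every input inside the original interval.
```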
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Deep Learning Safety Concerns in Automated Driving Perception [43.026485214492105]
Recent advances in the field of deep learning and the impressive performance of deep neural networks (DNNs) for perception have resulted in an increased demand for their use in automated driving (AD) systems.
This paper introduces an additional categorization to improve understanding and to enable cross-functional teams to jointly address the concerns.
arXiv Detail & Related papers (2023-09-07T15:25:47Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Assumption Generation for the Verification of Learning-Enabled Autonomous Systems [7.580719272198119]
We present an assume-guarantee style compositional approach for the formal verification of system-level safety properties.
We illustrate our approach on a case study taken from the autonomous airplanes domain.
arXiv Detail & Related papers (2023-05-27T23:30:27Z)
- A Novel Online Incremental Learning Intrusion Prevention System [2.5234156040689237]
This paper proposes a novel Network Intrusion Prevention System that utilises a Self-Organizing Incremental Neural Network along with a Support Vector Machine.
Due to its structure, the proposed system provides a security solution that does not rely on signatures or rules and is capable of mitigating known and unknown attacks in real-time with high accuracy.
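The entry above combines an incremental neural network with an SVM so the model can keep learning from new traffic without retraining from scratch. The sketch below only illustrates that online, signature-free updating idea; scikit-learn has no SOINN, so an SGDClassifier (a linear SVM trained incrementally) is used as a stand-in, and all feature values are invented.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Stand-in for the SOINN + SVM pipeline: an SGDClassifier (linear SVM trained
# incrementally via partial_fit) shows the online, signature-free updating idea.
# All feature values below (flow duration, bytes, packet rate) are invented.
clf = SGDClassifier(loss="hinge")
classes = np.array([0, 1])  # 0 = benign traffic, 1 = attack

# First mini-batch of labelled flows.
X0 = np.array([[0.2, 1.0, 0.1], [5.0, 80.0, 9.0]])
clf.partial_fit(X0, np.array([0, 1]), classes=classes)

# Later traffic updates the same model without retraining from scratch.
X1 = np.array([[0.3, 1.2, 0.2], [4.0, 60.0, 7.5]])
clf.partial_fit(X1, np.array([0, 1]))

print(clf.predict([[6.0, 90.0, 10.0]]))  # likely flagged as an attack (1)
```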
arXiv Detail & Related papers (2021-09-20T13:30:11Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains results comparable to formal verifiers on standard benchmarks.
Our approach allows efficient evaluation of safety properties for decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.