Towards Audit Requirements for AI-based Systems in Mobility Applications
- URL: http://arxiv.org/abs/2302.13567v1
- Date: Mon, 27 Feb 2023 07:57:52 GMT
- Title: Towards Audit Requirements for AI-based Systems in Mobility Applications
- Authors: Devi Padmavathi Alagarswamy, Christian Berghoff, Vasilios Danos,
Fabian Langer, Thora Markert, Georg Schneider, Arndt von Twickel, Fabian
Woitschek
- Abstract summary: We propose 50 technical requirements or best practices that extend existing regulations and address the concrete needs of deep neural networks (DNNs).
We show the applicability, usefulness and meaningfulness of the proposed requirements by performing an exemplary audit of a DNN-based traffic sign recognition system.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Various mobility applications like advanced driver assistance systems
increasingly utilize artificial intelligence (AI) based functionalities.
Typically, deep neural networks (DNNs) are used as these provide the best
performance on the challenging perception, prediction or planning tasks that
occur in real driving environments. However, current regulations and standards
such as UNECE R 155 or ISO 26262 do not consider AI-related aspects and apply
only to traditional algorithm-based systems. The absence of AI-specific
standards or norms hinders practical deployment and can undermine user trust.
Hence, it is important to extend existing standardization for security
and safety to consider AI-specific challenges and requirements. To take a step
towards a suitable regulation we propose 50 technical requirements or best
practices that extend existing regulations and address the concrete needs for
DNN-based systems. We show the applicability, usefulness and meaningfulness of
the proposed requirements by performing an exemplary audit of a DNN-based
traffic sign recognition system using three of the proposed requirements.
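The paper's 50 requirements are not reproduced here, but an audit check of this kind can be sketched. The following is a minimal, illustrative example of one plausible requirement class (prediction stability under small input perturbations) applied to a stand-in classifier; the function names, threshold, and toy model are assumptions for illustration, not the paper's actual requirements:

```python
import random

def audit_noise_robustness(predict, inputs, epsilon=0.05, trials=20,
                           min_stable=0.9, seed=0):
    """Audit-style check: does the classifier keep its prediction when
    each input feature is perturbed by at most +/- epsilon?
    Returns (passed, stability_rate)."""
    rng = random.Random(seed)  # fixed seed so the audit is reproducible
    stable, total = 0, 0
    for x in inputs:
        base = predict(x)
        for _ in range(trials):
            noisy = [v + rng.uniform(-epsilon, epsilon) for v in x]
            total += 1
            if predict(noisy) == base:
                stable += 1
    rate = stable / total
    return rate >= min_stable, rate

# Toy stand-in for a traffic sign classifier: thresholds the mean feature.
def toy_predict(x):
    return "stop" if sum(x) / len(x) > 0.5 else "yield"

inputs = [[0.9, 0.8, 0.95], [0.1, 0.2, 0.05]]
passed, rate = audit_noise_robustness(toy_predict, inputs)
print(passed, rate)  # → True 1.0
```

A real audit would replace the toy model with the DNN under test and draw perturbations from the system's specified operational domain; the pass threshold would come from the applicable requirement, not from a hard-coded default.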
Related papers
- A Method for the Runtime Validation of AI-based Environment Perception in Automated Driving System [2.369782235753731]
Environment perception is a fundamental part of the dynamic driving task executed by Autonomous Driving Systems.
Current safety-relevant standards for automotive systems assume the existence of comprehensive requirements specifications.
This paper presents a function monitor for the functional runtime monitoring of a two-fold AI-based environment perception for ADS.
arXiv Detail & Related papers (2024-12-21T20:21:49Z) - Navigating the sociotechnical labyrinth: Dynamic certification for responsible embodied AI [19.959138971887395]
We argue that sociotechnical requirements shape the governance of artificially intelligent (AI) systems.
Our proposed transdisciplinary approach is designed to ensure the safe, ethical, and practical deployment of AI systems.
arXiv Detail & Related papers (2024-08-16T08:35:26Z) - Runtime Monitoring DNN-Based Perception [5.518665721709856]
This tutorial aims to provide readers with a glimpse of techniques proposed in the literature.
We start with classical methods proposed in the machine learning community, then highlight a few techniques proposed by the formal methods community.
We conclude by highlighting the need to rigorously design monitors, where data availability outside the operational domain plays an important role.
arXiv Detail & Related papers (2023-10-06T03:57:56Z) - Simulation-based Safety Assurance for an AVP System incorporating
Learning-Enabled Components [0.6526824510982802]
Testing, verification and validation of AD/ADAS safety-critical applications remain among the main challenges.
We explain the simulation-based development platform that is designed to verify and validate safety-critical learning-enabled systems.
arXiv Detail & Related papers (2023-09-28T09:00:31Z) - No Trust without regulation! [0.0]
The explosion in performance of Machine Learning (ML) and the potential of its applications are encouraging us to consider its use in industrial systems.
However, the issue of safety, and its corollary, regulation and standards, is still too often left aside.
The European Commission has laid the foundations for moving forward and building solid approaches to the integration of AI-based applications that are safe, trustworthy and respect European ethical values.
arXiv Detail & Related papers (2023-09-27T09:08:41Z) - When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z) - ROAD-R: The Autonomous Driving Dataset with Logical Requirements [54.608762221119406]
We introduce the ROad event Awareness dataset with logical Requirements (ROAD-R)
ROAD-R is the first publicly available dataset for autonomous driving with requirements expressed as logical constraints.
We show that it is possible to exploit them to create models that (i) have a better performance, and (ii) are guaranteed to be compliant with the requirements themselves.
arXiv Detail & Related papers (2022-10-04T13:22:19Z) - Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help a user set intent specifications to the system, and inspect the difference in agent proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI
Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Towards AIOps in Edge Computing Environments [60.27785717687999]
This paper describes the system design of an AIOps platform which is applicable in heterogeneous, distributed environments.
It is feasible to collect metrics with a high frequency and simultaneously run specific anomaly detection algorithms directly on edge devices.
arXiv Detail & Related papers (2021-02-12T09:33:00Z) - Evaluating the Safety of Deep Reinforcement Learning Models using
Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach enables efficient evaluation of safety properties for decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.