Safe Perception -- A Hierarchical Monitor Approach
- URL: http://arxiv.org/abs/2208.00824v1
- Date: Mon, 1 Aug 2022 13:09:24 GMT
- Title: Safe Perception -- A Hierarchical Monitor Approach
- Authors: Cornelius Buerkle, Fabian Oboril, Johannes Burr and Kay-Ulrich Scholl
- Abstract summary: We propose a novel hierarchical monitoring approach for AI-based perception systems.
It can reliably detect detection misses, and at the same time has a very low false alarm rate.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Our transportation world is rapidly transforming, driven by an ever-increasing
level of autonomy. However, to license fully automated vehicles for widespread
public use, it is necessary to assure the safety of the entire system, which is
still a challenge. This holds in particular for AI-based perception systems,
which have to handle a diversity of environmental conditions and road users
while robustly detecting all safety-relevant objects (i.e., no detection misses
should occur). Yet limited training and validation data make a proof of
fault-free operation hardly achievable, as the perception system may be exposed
to new, previously unknown objects or conditions on public roads. Hence, new
safety approaches for AI-based perception systems are required. For this
reason, we propose in this paper a novel hierarchical monitoring approach that
validates the object list from a primary perception system, reliably detects
detection misses, and at the same time has a very low false alarm rate.
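To make the idea concrete, here is a minimal sketch of how such a two-level monitor could cross-check a primary object list against a conservative secondary channel. The class names, the LiDAR-based plausibility check, and the persistence threshold are illustrative assumptions, not the paper's actual design.

```python
# A minimal sketch of the cross-checking idea, assuming a secondary LiDAR
# channel; all names and thresholds here are hypothetical, not the paper's.
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box in ego coordinates (metres)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def covers(self, x: float, y: float, margin: float = 0.5) -> bool:
        return (self.x_min - margin <= x <= self.x_max + margin
                and self.y_min - margin <= y <= self.y_max + margin)

@dataclass
class HierarchicalMonitor:
    """Level 1: per-frame plausibility check of the primary object list
    against a conservative secondary channel (here: raw LiDAR returns).
    Level 2: temporal filtering, so only persistent mismatches raise an
    alarm, which keeps the false alarm rate low."""
    persistence_frames: int = 3  # frames a miss must persist before alarming
    miss_streak: int = 0

    def check_frame(self, objects: list[Box],
                    lidar_points: list[tuple[float, float]]) -> bool:
        # Level 1: every return in the safety-relevant zone must be
        # explained by a detected object, else it is a candidate miss.
        uncovered = [p for p in lidar_points
                     if not any(b.covers(*p) for b in objects)]
        self.miss_streak = self.miss_streak + 1 if uncovered else 0
        # Level 2: escalate only persistent misses.
        return self.miss_streak >= self.persistence_frames

monitor = HierarchicalMonitor()
for frame in range(4):
    # A LiDAR return at (10, 0) that no detection covers -> persistent miss.
    print(frame, monitor.check_frame(objects=[], lidar_points=[(10.0, 0.0)]))
```

The hierarchy is what keeps the false alarm rate low in this sketch: a single-frame mismatch is tolerated, and only a persistent one is escalated as a suspected detection miss.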
Related papers
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We introduce and define a family of approaches to AI safety, which we refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's three core components (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
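As a rough illustration of the quantitative flavour of such guarantees, the sketch below wires the three core components (a world model, a safety specification, and a verifier) together behind hypothetical interfaces; it is a toy rendering of the idea, not a concrete GS system.

```python
# Toy rendering of the three GS components behind hypothetical interfaces;
# nothing here reflects a concrete system from the paper.
from typing import Callable, Protocol

class WorldModel(Protocol):
    def successor_prob(self, state: str, action: str) -> float: ...

SafetySpec = Callable[[str], bool]  # True if a state satisfies the spec

def verify(model: WorldModel, spec: SafetySpec, states: list[str],
           action: str, bound: float) -> bool:
    """Quantitative check: the probability mass the world model assigns
    to spec-violating successor states must stay below the bound."""
    p_unsafe = sum(model.successor_prob(s, action)
                   for s in states if not spec(s))
    return p_unsafe <= bound

class TableModel:
    """Trivial lookup-table world model for the example."""
    def __init__(self, probs: dict[tuple[str, str], float]):
        self.probs = probs
    def successor_prob(self, state: str, action: str) -> float:
        return self.probs.get((state, action), 0.0)

model = TableModel({("on_road", "go"): 0.99, ("off_road", "go"): 0.01})
ok = verify(model, lambda s: s != "off_road",
            ["on_road", "off_road"], "go", bound=0.05)  # -> True
```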
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the urgency of safety in driving systems, no solution to the MOT adaptation problem to domain shift in test-time conditions has ever been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
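For orientation, a test-time adaptation loop for tracking generally has the shape sketched below; the self-supervised objective here is a dummy stand-in, not DARTH's actual losses.

```python
# Generic test-time adaptation loop (shape only; the self-supervised
# objective below is a dummy stand-in, not DARTH's actual method).
import torch

model = torch.nn.Linear(8, 8)              # stand-in for a detector backbone
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

def self_supervised_loss(frame: torch.Tensor) -> torch.Tensor:
    # Real methods use label-free signals (e.g. consistency across
    # augmented views); here just a differentiable dummy scalar.
    return model(frame).pow(2).mean()

for frame in (torch.randn(4, 8) for _ in range(3)):  # unlabeled test stream
    loss = self_supervised_loss(frame)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # ...run the (now adapted) tracker on `frame`...
```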
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Detecting and Mitigating System-Level Anomalies of Vision-Based Controllers [7.095058159492494]
Vision-based controllers can make erroneous predictions when faced with novel or out-of-distribution inputs.
In this work, we introduce a run-time anomaly monitor to detect and mitigate such closed-loop, system-level failures.
We validate the proposed approach on an autonomous aircraft taxiing system driven by a vision-based controller.
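The detect-and-mitigate pattern this summary describes typically reduces to a loop like the following; the controllers, the anomaly score, and the toy dynamics are all placeholder assumptions, not the paper's aircraft-taxiing system.

```python
# Detect-and-mitigate loop for a learned controller (placeholder names,
# score, and dynamics; not the paper's actual monitor or system).
import random

def vision_controller(obs: float) -> float:
    return -0.5 * obs          # stand-in for a learned vision-based policy

def fallback_controller(obs: float) -> float:
    return -0.1 * obs          # conservative certified behaviour

def anomaly_score(obs: float, action: float) -> float:
    # A real monitor might score closed-loop risk with a learned value
    # function or reconstruction error; this is a dummy proxy.
    return abs(obs + action)

THRESHOLD = 5.0
obs = 0.0
for _ in range(100):
    action = vision_controller(obs)
    if anomaly_score(obs, action) > THRESHOLD:
        action = fallback_controller(obs)        # mitigate: hand over control
    obs += action + random.gauss(0.0, 0.2)       # toy closed-loop dynamics
```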
arXiv Detail & Related papers (2023-09-23T20:33:38Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Verifiable Obstacle Detection [10.277825331268179]
We present a safety verification of an existing LiDAR-based classical obstacle detection algorithm.
We provide a rigorous analysis of the obstacle detection system with empirical results based on real-world sensor data.
arXiv Detail & Related papers (2022-08-30T17:15:35Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
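For readers new to control barrier functions, the sketch below shows the bare safety-filter mechanics on a one-dimensional integrator; the paper's model-uncertainty-aware reformulation and feasibility analysis go well beyond this toy setting.

```python
# Minimal CBF safety-filter sketch for a 1D integrator x' = u (an assumed
# example, not the paper's formulation).
def cbf_filter(x: float, u_nominal: float,
               x_limit: float = 10.0, alpha: float = 1.0) -> float:
    """Barrier h(x) = x_limit - x keeps x <= x_limit.
    The CBF condition for x' = u is  h'(x) * u >= -alpha * h(x),
    i.e. -u >= -alpha * (x_limit - x), so u <= alpha * (x_limit - x).
    Return the action closest to u_nominal satisfying the constraint."""
    u_max = alpha * (x_limit - x)
    return min(u_nominal, u_max)

x = 9.5
for _ in range(5):
    u = cbf_filter(x, u_nominal=2.0)
    x += 0.1 * u   # Euler step; x approaches but never crosses 10.0
    print(f"x={x:.3f}, u={u:.3f}")
```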
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- An Empirical Analysis of the Use of Real-Time Reachability for the Safety Assurance of Autonomous Vehicles [7.1169864450668845]
We propose using a real-time reachability algorithm to implement the simplex architecture and assure the safety of a 1/10-scale open-source autonomous vehicle platform.
In our approach, the need to analyze an underlying controller is abstracted away, instead focusing on the effects of the controller's decisions on the system's future states.
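A simplex arrangement of this kind usually comes down to a small piece of switching logic; the toy one-dimensional version below conveys the idea, with the interval reachability and all constants being assumptions rather than the paper's algorithm.

```python
# Simplex-architecture sketch: a reachability check on future states
# decides whether the advanced controller's decision may be applied
# (toy 1D model; the paper's real-time algorithm is more involved).
def reach_interval(x: float, v: float, horizon: float) -> tuple[float, float]:
    """Interval over-approximation of positions reachable within the
    horizon, assuming bounded acceleration |a| <= A_MAX."""
    A_MAX = 2.0
    lo = x + v * horizon - 0.5 * A_MAX * horizon ** 2
    hi = x + v * horizon + 0.5 * A_MAX * horizon ** 2
    return (min(x, lo), max(x, hi))

def simplex_step(x: float, v: float, u_advanced: float, u_safe: float,
                 obstacle_at: float = 20.0, horizon: float = 1.0) -> float:
    lo, hi = reach_interval(x, v, horizon)
    # If any reachable position hits the obstacle, fall back to the
    # verified safe controller; otherwise trust the advanced one.
    return u_safe if hi >= obstacle_at else u_advanced

u = simplex_step(x=15.0, v=6.0, u_advanced=1.0, u_safe=-2.0)  # -> u_safe
```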
arXiv Detail & Related papers (2022-05-03T11:12:29Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that causes an AD system to fail to detect it and thus crash into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF algorithms.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- Robustness Enhancement of Object Detection in Advanced Driver Assistance Systems (ADAS) [0.0]
The proposed system includes two main components: (1) a compact one-stage object detector that is expected to perform at an accuracy comparable to state-of-the-art object detectors, and (2) an environmental condition detector that sends a warning signal to the cloud when the self-driving car requires human action due to the significance of the situation.
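Concretely, the described two-component split might be wired together as below; every function name is hypothetical and both models are stubs.

```python
# Wiring sketch of the two described components (all names hypothetical;
# the detector and condition models are stubs, not the paper's models).
def detect_objects(frame) -> list[dict]:
    # Stand-in for a compact one-stage detector.
    return [{"label": "car", "score": 0.9}]

def assess_conditions(frame) -> float:
    # Stand-in: returns a scene-difficulty score (fog, glare, rain, ...).
    return 0.2

def send_cloud_warning(frame) -> None:
    print("warning: difficult conditions, human action may be required")

def process(frame, difficulty_threshold: float = 0.8) -> list[dict]:
    if assess_conditions(frame) > difficulty_threshold:
        send_cloud_warning(frame)        # escalate significant situations
    return detect_objects(frame)
```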
arXiv Detail & Related papers (2021-05-04T15:42:43Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
- A Survey of Algorithms for Black-Box Safety Validation of Cyber-Physical Systems [30.638615396429536]
Motivated by the prevalence of safety-critical artificial intelligence, this work provides a survey of state-of-the-art safety validation techniques for CPS.
We present and discuss algorithms in the domains of optimization, path planning, reinforcement learning, and importance sampling.
A brief overview of safety-critical applications is given, including autonomous vehicles and aircraft collision avoidance systems.
arXiv Detail & Related papers (2020-05-06T17:31:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.