A Flow-based Credibility Metric for Safety-critical Pedestrian Detection
- URL: http://arxiv.org/abs/2402.07642v1
- Date: Mon, 12 Feb 2024 13:30:34 GMT
- Title: A Flow-based Credibility Metric for Safety-critical Pedestrian Detection
- Authors: Maria Lyssenko, Christoph Gladisch, Christian Heinzemann, Matthias
Woehrle, Rudolph Triebel
- Abstract summary: Safety is of utmost importance for perception in automated driving (AD).
Standard evaluation schemes utilize safety-agnostic metrics to argue sufficient detection performance.
This paper introduces a novel credibility metric, called c-flow, for pedestrian bounding boxes.
- Score: 16.663568842153065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Safety is of utmost importance for perception in automated driving (AD).
However, a prime safety concern in state-of-the-art object detection is that
standard evaluation schemes utilize safety-agnostic metrics to argue sufficient
detection performance. Hence, it is imperative to leverage supplementary domain
knowledge to accentuate safety-critical misdetections during evaluation tasks.
To tackle the underspecification, this paper introduces a novel credibility
metric, called c-flow, for pedestrian bounding boxes. To this end, c-flow
relies on a complementary optical flow signal from image sequences and enhances
the analyses of safety-critical misdetections without requiring additional
labels. We implement and evaluate c-flow with a state-of-the-art pedestrian
detector on a large AD dataset. Our analysis demonstrates that c-flow allows
developers to identify safety-critical misdetections.
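The abstract gives the intuition behind c-flow but not its formulation, so the following is only a minimal sketch of how a complementary optical flow signal from consecutive frames could be paired with a pedestrian bounding box to score its credibility; the Farneback flow, the agreement kernel, and the name `box_flow_credibility` are illustrative assumptions rather than the authors' method.

```python
# Hypothetical sketch of a flow-based credibility check for a detection box.
# This does not reproduce the paper's c-flow metric; it only illustrates the
# general idea of pairing a bounding box with an optical flow signal from
# image sequences, without requiring additional labels.
import cv2
import numpy as np

def box_flow_credibility(prev_gray, curr_gray, box, prev_box):
    """Score how well the dense optical flow inside `prev_box` agrees with the
    displacement implied by the detector's boxes in two consecutive frames.

    prev_gray, curr_gray: grayscale frames t-1 and t (np.uint8 arrays)
    box, prev_box: (x1, y1, x2, y2) pixel coordinates at t and t-1
    Returns a score in [0, 1]; higher means the flow supports the detection.
    """
    # Dense optical flow from frame t-1 to frame t (Farneback as a stand-in).
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # Mean flow vector inside the previous box.
    x1, y1, x2, y2 = map(int, prev_box)
    patch = flow[y1:y2, x1:x2]
    if patch.size == 0:
        return 0.0
    mean_flow = patch.reshape(-1, 2).mean(axis=0)  # (dx, dy)

    # Displacement of the box centre as reported by the detector.
    cx_prev, cy_prev = (prev_box[0] + prev_box[2]) / 2, (prev_box[1] + prev_box[3]) / 2
    cx_curr, cy_curr = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    det_motion = np.array([cx_curr - cx_prev, cy_curr - cy_prev])

    # Credibility: agreement between flow-implied and detector-implied motion,
    # mapped to [0, 1] with a simple exponential kernel (the scale is a free choice).
    disagreement = np.linalg.norm(mean_flow - det_motion)
    return float(np.exp(-disagreement / 10.0))
```

In such a sketch, a low score would flag boxes whose motion in the image sequence does not support the detection, which is the kind of additional, label-free signal the abstract describes.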
Related papers
- LSM: A Comprehensive Metric for Assessing the Safety of Lane Detection Systems in Autonomous Driving [0.5326090003728084]
We propose the Lane Safety Metric (LSM) to evaluate the safety of lane detection systems.
Additional factors, such as the semantics of the scene (road type and road width), should be considered in the evaluation of lane detection.
We evaluate our offline safety metric on various virtual scenarios using different lane detection approaches and compare it with state-of-the-art performance metrics.
arXiv Detail & Related papers (2024-07-10T15:11:37Z)
- A Safety-Adapted Loss for Pedestrian Detection in Automated Driving [13.676179470606844]
In safety-critical domains, errors by the object detector may endanger pedestrians and other vulnerable road users.
We propose a safety-aware loss variation that leverages the estimated per-pedestrian criticality scores during training.
arXiv Detail & Related papers (2024-02-05T13:16:38Z)
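The safety-adapted loss above is only summarised, so the snippet below is a minimal sketch of the general idea of weighting a per-pedestrian training loss by an estimated criticality score; the base loss, the weighting scheme, and the names (`safety_weighted_loss`, `alpha`) are illustrative assumptions, not the cited paper's formulation.

```python
# Hypothetical sketch: reweighting a per-pedestrian detection loss by an
# estimated criticality score, as suggested by the summary above. The cited
# paper's exact loss is not reproduced; the weights and base loss are
# illustrative assumptions.
import torch
import torch.nn.functional as F

def safety_weighted_loss(logits: torch.Tensor,
                         targets: torch.Tensor,
                         criticality: torch.Tensor) -> torch.Tensor:
    """logits: (N, C) per-pedestrian class scores
    targets: (N,) ground-truth class indices
    criticality: (N,) scores in [0, 1], e.g. derived from distance or
    time-to-collision, with 1 = most safety-critical pedestrian.
    """
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    # Emphasise safety-critical pedestrians; alpha controls the emphasis.
    alpha = 2.0
    weights = 1.0 + alpha * criticality
    return (weights * per_sample).mean()
```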
- Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration [17.461451218469062]
In this work, we introduce the Self-Aware Object Detection (SAOD) task.
The SAOD task respects and adheres to the challenges that object detectors face in safety-critical environments such as autonomous driving.
We extensively use our framework, which introduces novel metrics and large scale test datasets, to test numerous object detectors.
arXiv Detail & Related papers (2023-07-03T11:16:39Z)
- Safe Deep Reinforcement Learning by Verifying Task-Level Properties [84.64203221849648]
Cost functions are commonly employed in Safe Deep Reinforcement Learning (DRL).
The cost is typically encoded as an indicator function due to the difficulty of quantifying the risk of policy decisions in the state space.
In this paper, we investigate an alternative approach that uses domain knowledge to quantify the risk in the proximity of such states by defining a violation metric.
arXiv Detail & Related papers (2023-02-20T15:24:06Z)
- Online Safety Property Collection and Refinement for Safe Deep Reinforcement Learning in Mapless Navigation [79.89605349842569]
We introduce the Collection and Refinement of Online Properties (CROP) framework to design properties at training time.
CROP employs a cost signal to identify unsafe interactions and uses them to shape safety properties.
We evaluate our approach in several robotic mapless navigation tasks and demonstrate that the violation metric computed with CROP allows higher returns and lower violations over previous Safe DRL approaches.
arXiv Detail & Related papers (2023-02-13T21:19:36Z)
- Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
arXiv Detail & Related papers (2022-10-03T08:38:38Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation [78.17108227614928]
We propose a benchmark environment for Safe Reinforcement Learning focusing on aquatic navigation.
We consider value-based and policy-gradient Deep Reinforcement Learning (DRL) approaches.
We also propose a verification strategy that checks the behavior of the trained models over a set of desired properties.
arXiv Detail & Related papers (2021-12-16T16:53:56Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Scalable Synthesis of Verified Controllers in Deep Reinforcement Learning [0.0]
We propose an automated verification pipeline capable of synthesizing high-quality safety shields.
Our key insight involves separating safety verification from the neural controller, using pre-computed verified safety shields to constrain neural controller training.
Experimental results over a range of realistic high-dimensional deep RL benchmarks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2021-04-20T19:30:29Z)
- SafeML: Safety Monitoring of Machine Learning Classifiers through Statistical Difference Measure [1.2599533416395765]
This paper aims to address both safety and security within a single concept of protection applicable during the operation of ML systems.
We use distance measures of the Empirical Cumulative Distribution Function (ECDF) to monitor the behaviour and the operational context of the data-driven system.
Our preliminary findings indicate that the approach can provide a basis for detecting whether the application context of an ML component remains valid from a safety-security perspective.
arXiv Detail & Related papers (2020-05-27T05:27:38Z)
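SafeML's monitoring, as summarised in the entry above, compares the data distribution seen during operation against the training distribution via ECDF distance measures. The sketch below illustrates that idea with a Kolmogorov-Smirnov-style distance on a single feature; the threshold, the feature choice, and the function names are assumptions rather than the paper's exact procedure.

```python
# Hypothetical sketch of ECDF-based runtime monitoring in the spirit of SafeML:
# compare a feature's distribution observed during operation against the
# training distribution and flag the context as out-of-spec if they drift
# too far apart. The distance and threshold are illustrative assumptions.
import numpy as np

def ks_distance(train_feats: np.ndarray, runtime_feats: np.ndarray) -> float:
    """Kolmogorov-Smirnov statistic: maximum gap between the two ECDFs."""
    grid = np.sort(np.concatenate([train_feats, runtime_feats]))
    cdf_train = np.searchsorted(np.sort(train_feats), grid, side="right") / len(train_feats)
    cdf_run = np.searchsorted(np.sort(runtime_feats), grid, side="right") / len(runtime_feats)
    return float(np.max(np.abs(cdf_train - cdf_run)))

def context_is_valid(train_feats: np.ndarray,
                     runtime_feats: np.ndarray,
                     threshold: float = 0.15) -> bool:
    """Flag whether the operational context still matches the training data."""
    return ks_distance(train_feats, runtime_feats) <= threshold
```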