EPSM: A Novel Metric to Evaluate the Safety of Environmental Perception in Autonomous Driving
- URL: http://arxiv.org/abs/2512.15195v1
- Date: Wed, 17 Dec 2025 08:46:49 GMT
- Title: EPSM: A Novel Metric to Evaluate the Safety of Environmental Perception in Autonomous Driving
- Authors: Jörg Gamerdinger, Sven Teufel, Stephan Amann, Lukas Marc Listl, Oliver Bringmann
- Abstract summary: It is important to evaluate not only the overall performance of perception systems, but also their safety. We introduce a novel safety metric for jointly evaluating the most critical perception tasks, object and lane detection. Our proposed framework integrates a new, lightweight object safety metric that quantifies the potential risk associated with object detection errors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extensive evaluation of perception systems is crucial for ensuring the safety of intelligent vehicles in complex driving scenarios. Conventional performance metrics such as precision, recall and the F1-score assess the overall detection accuracy, but they do not consider the safety-relevant aspects of perception. Consequently, perception systems that achieve high scores in these metrics may still cause misdetections that could lead to severe accidents. Therefore, it is important to evaluate not only the overall performance of perception systems, but also their safety. We therefore introduce a novel safety metric for jointly evaluating the most critical perception tasks, object and lane detection. Our proposed framework integrates a new, lightweight object safety metric that quantifies the potential risk associated with object detection errors, as well as a lane safety metric that accounts for the interdependence between the two tasks in safety evaluation. The resulting combined safety score provides a unified, interpretable measure of perception safety performance. Using the DeepAccident dataset, we demonstrate that our approach identifies safety-critical perception errors that conventional performance metrics fail to capture. Our findings emphasize the importance of safety-centric evaluation methods for perception systems in autonomous driving.
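The abstract describes merging an object safety score and a lane safety score into one combined value. A minimal sketch of such a combination is shown below; the scoring functions, the half-lane-width threshold, the weight `alpha`, and the `min()`-based coupling term are all illustrative assumptions, not the paper's actual EPSM formulation.

```python
# Hypothetical combined perception safety score in the spirit of EPSM.
# All names, weights, and the coupling term are illustrative assumptions.

def object_safety_score(detections):
    """Average per-object safety: each detection error carries a risk
    weight in [0, 1] (0 = harmless miss, 1 = critical miss)."""
    if not detections:
        return 1.0
    risks = [d["risk"] for d in detections]
    return 1.0 - sum(risks) / len(risks)

def lane_safety_score(lateral_error_m, lane_width_m=3.5):
    """Map lateral lane-estimation error to a safety score: zero error
    is perfectly safe; an error of half a lane width scores 0."""
    return max(0.0, 1.0 - 2.0 * lateral_error_m / lane_width_m)

def combined_safety(obj_score, lane_score, alpha=0.5):
    """Weighted blend of the two sub-scores; the min() term crudely
    models their interdependence: the joint system is assumed to be
    only marginally safer than its weakest task."""
    blended = alpha * obj_score + (1.0 - alpha) * lane_score
    return min(blended, min(obj_score, lane_score) + 0.2)
```

The cap on the blended score reflects one plausible reading of the task interdependence mentioned in the abstract: a high lane score should not mask a critical object-detection failure.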
Related papers
- Criticality Metrics for Relevance Classification in Safety Evaluation of Object Detection in Automated Driving [0.5701177763922466]
A key component of safety evaluation is the ability to distinguish between relevant and non-relevant objects. This paper presents the first in-depth analysis of criticality metrics for safety evaluation of object detection systems.
arXiv Detail & Related papers (2025-12-17T08:28:53Z) - A Comprehensive Safety Metric to Evaluate Perception in Autonomous Systems [39.67816845884978]
Object perception is the main component of automotive surround sensing. Various metrics already exist for the evaluation of object perception. We propose a new safety metric that incorporates all these parameters and returns a single, easily interpretable safety assessment score for object perception.
arXiv Detail & Related papers (2025-12-16T12:53:00Z) - SafeRBench: A Comprehensive Benchmark for Safety Assessment in Large Reasoning Models [60.8821834954637]
We present SafeRBench, the first benchmark that assesses LRM safety end-to-end. We pioneer the incorporation of risk categories and levels into input design. We introduce a micro-thought chunking mechanism to segment long reasoning traces into semantically coherent units.
arXiv Detail & Related papers (2025-11-19T06:46:33Z) - Safely Learning Controlled Stochastic Dynamics [61.82896036131116]
We introduce a method that ensures safe exploration and efficient estimation of system dynamics. After training, the learned model enables predictions of the system's dynamics and permits safety verification of any given control. We provide theoretical guarantees for safety and derive adaptive learning rates that improve with increasing Sobolev regularity of the true dynamics.
arXiv Detail & Related papers (2025-06-03T11:17:07Z) - SafetyAnalyst: Interpretable, Transparent, and Steerable Safety Moderation for AI Behavior [56.10557932893919]
We present SafetyAnalyst, a novel AI safety moderation framework. Given an AI behavior, SafetyAnalyst uses chain-of-thought reasoning to analyze its potential consequences. It aggregates effects into a harmfulness score using 28 fully interpretable weight parameters.
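The aggregation step described above, effects combined into a single harmfulness score via interpretable weights, can be sketched as a normalized weighted sum. The effect names and weight values here are invented placeholders, not SafetyAnalyst's actual 28 parameters.

```python
# Illustrative aggregation of per-effect severities into one harmfulness
# score with interpretable weights. Feature names and weights are
# placeholders, not the SafetyAnalyst parameterization.

def harmfulness(effects, weights):
    """Normalized weighted sum of per-effect severities.
    `effects` maps effect name -> severity in [0, 1];
    `weights` maps effect name -> nonnegative importance.
    Returns a score in [0, 1]; effects absent from `effects` count as 0."""
    total_weight = sum(weights.values())
    if total_weight == 0:
        return 0.0
    return sum(weights[k] * effects.get(k, 0.0) for k in weights) / total_weight
```

Because every weight is a plain scalar on a named effect, the score stays auditable: doubling a weight exactly doubles that effect's contribution.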
arXiv Detail & Related papers (2024-10-22T03:38:37Z) - LSM: A Comprehensive Metric for Assessing the Safety of Lane Detection Systems in Autonomous Driving [0.5326090003728084]
We propose the Lane Safety Metric (LSM) to evaluate the safety of lane detection systems.
Additional factors such as the semantics of the scene with road type and road width should be considered for the evaluation of lane detection.
We evaluate our offline safety metric on various virtual scenarios using different lane detection approaches and compare it with state-of-the-art performance metrics.
arXiv Detail & Related papers (2024-07-10T15:11:37Z) - A Safety-Adapted Loss for Pedestrian Detection in Automated Driving [13.676179470606844]
In safety-critical domains, errors by the object detector may endanger pedestrians and other vulnerable road users.
We propose a safety-aware loss variation that leverages the estimated per-pedestrian criticality scores during training.
arXiv Detail & Related papers (2024-02-05T13:16:38Z) - A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
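The counterfactual safety margin defined above, the minimum deviation from nominal behavior that could cause a collision, can be illustrated with a toy one-dimensional search. The lateral-offset geometry, the collision test, and the grid search are simplifying assumptions for illustration, not the paper's data-driven framework.

```python
# Toy sketch of a counterfactual safety margin: the smallest lateral
# deviation from the nominal trajectory that produces a collision,
# found by scanning candidate deviations. The 1-D geometry and the
# interval-overlap collision test are simplifying assumptions.

def collides(deviation_m, obstacle_offset_m, half_widths_m=1.0):
    """Collision if the deviated ego position overlaps the obstacle
    (both modeled as intervals of half-width `half_widths_m`)."""
    return abs(deviation_m - obstacle_offset_m) < 2 * half_widths_m

def counterfactual_margin(obstacle_offset_m, step=0.1, max_dev=10.0):
    """Return the minimum |deviation| that causes a collision;
    a smaller margin indicates riskier nominal behavior."""
    dev = 0.0
    while dev <= max_dev:
        if collides(dev, obstacle_offset_m) or collides(-dev, obstacle_offset_m):
            return dev
        dev += step
    return float("inf")  # no collision reachable within the search range
```

An obstacle far outside the search range yields an infinite margin, i.e. the nominal behavior is robustly safe under this toy model.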
arXiv Detail & Related papers (2023-08-02T09:48:08Z) - Safety Margins for Reinforcement Learning [53.10194953873209]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z) - Risk-Driven Design of Perception Systems [47.787943101699966]
It is important that we design perception systems to minimize errors that reduce the overall safety of the system.
We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system.
We evaluate our techniques on a realistic vision-based aircraft detect and avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system.
arXiv Detail & Related papers (2022-05-21T21:14:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.