A Safety-Adapted Loss for Pedestrian Detection in Automated Driving
- URL: http://arxiv.org/abs/2402.02986v1
- Date: Mon, 5 Feb 2024 13:16:38 GMT
- Title: A Safety-Adapted Loss for Pedestrian Detection in Automated Driving
- Authors: Maria Lyssenko, Piyush Pimplikar, Maarten Bieshaar, Farzad Nozarian,
Rudolph Triebel
- Abstract summary: In safety-critical domains, errors by the object detector may endanger pedestrians and other vulnerable road users.
We propose a safety-aware loss variation that leverages the estimated per-pedestrian criticality scores during training.
- Score: 13.676179470606844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In safety-critical domains like automated driving (AD), errors by the object
detector may endanger pedestrians and other vulnerable road users (VRU). As
common evaluation metrics are not an adequate safety indicator, recent works
employ approaches to identify safety-critical VRU and back-annotate the risk to
the object detector. However, those approaches do not consider the safety
factor in the deep neural network (DNN) training process. Thus,
state-of-the-art DNNs penalize all misdetections equally, irrespective of their
criticality. Consequently, to mitigate the occurrence of critical failure
cases, i.e., false negatives, a safety-aware training strategy may be
required to enhance the detection performance for critical pedestrians. In this
paper, we propose a novel safety-aware loss variation that leverages the
estimated per-pedestrian criticality scores during training. We exploit the
reachability set-based time-to-collision (TTC-RSB) metric from the motion
domain, along with distance information, to account for the worst-case threat
when quantifying criticality. Our evaluation results using RetinaNet and FCOS on
the nuScenes dataset demonstrate that training the models with our safety-aware
loss function mitigates the misdetection of critical pedestrians without
sacrificing performance for the general case, i.e., pedestrians outside the
safety-critical zone.
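
The criticality-weighted training idea can be made concrete with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes a per-pedestrian criticality score in [0, 1] (here derived from a simplified worst-case time-to-collision and distance proxy, not the actual reachability set-based TTC-RSB metric) and folds it into a focal-style classification loss so that missing a safety-critical pedestrian is penalized more strongly. All function names, the weighting scheme, and the hyperparameters are assumptions for illustration.

```python
# Minimal sketch (assumed names, not the paper's code): a focal-style
# classification loss whose positive (pedestrian) terms are re-weighted by an
# estimated per-anchor criticality score in [0, 1].
import torch
import torch.nn.functional as F

def toy_criticality(ttc_worst_case, distance, ttc_max=4.0, dist_max=40.0):
    """Illustrative criticality proxy: higher when the worst-case
    time-to-collision is short or the pedestrian is close. The real
    TTC-RSB metric uses reachability sets; this is only a placeholder."""
    ttc_term = (1.0 - ttc_worst_case / ttc_max).clamp(0.0, 1.0)
    dist_term = (1.0 - distance / dist_max).clamp(0.0, 1.0)
    return torch.maximum(ttc_term, dist_term)

def safety_weighted_focal_loss(logits, targets, criticality,
                               alpha=0.25, gamma=2.0, beta=1.0):
    """logits, targets, criticality: float tensors of shape (num_anchors,).
    targets is 1.0 for anchors matched to a pedestrian, else 0.0; criticality
    carries the matched pedestrian's score (0 for background). beta controls
    how strongly critical pedestrians are up-weighted (assumed hyperparameter)."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    focal = alpha_t * (1.0 - p_t) ** gamma * ce

    # Up-weight anchors assigned to critical pedestrians; background anchors
    # and non-critical pedestrians keep the standard focal loss.
    weight = 1.0 + beta * criticality * targets
    return (weight * focal).sum() / targets.sum().clamp(min=1.0)
```

In a RetinaNet- or FCOS-style pipeline, such a weight would typically be propagated from each ground-truth pedestrian to the anchors or locations matched to it during target assignment; how the paper normalizes the weights and balances them against the regression loss is not reproduced here.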
Related papers
- Enabling Privacy-Preserving Cyber Threat Detection with Federated Learning [4.475514208635884]
This study systematically profiles the (in)feasibility of federated learning (FL) for privacy-preserving cyber threat detection in terms of effectiveness, byzantine resilience, and efficiency.
It shows that FL-trained detection models can achieve a performance that is comparable to centrally trained counterparts.
Under a realistic threat model, FL turns out to be resistant to both data poisoning and model poisoning attacks.
arXiv Detail & Related papers (2024-04-08T01:16:56Z)
- A Flow-based Credibility Metric for Safety-critical Pedestrian Detection [16.663568842153065]
Safety is of utmost importance for perception in automated driving (AD).
Standard evaluation schemes utilize safety-agnostic metrics to argue sufficient detection performance.
This paper introduces a novel credibility metric, called c-flow, for pedestrian bounding boxes.
arXiv Detail & Related papers (2024-02-12T13:30:34Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Safety Margins for Reinforcement Learning [53.10194953873209]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z)
- Safe Deep Reinforcement Learning by Verifying Task-Level Properties [84.64203221849648]
Cost functions are commonly employed in Safe Deep Reinforcement Learning (DRL).
The cost is typically encoded as an indicator function due to the difficulty of quantifying the risk of policy decisions in the state space.
In this paper, we investigate an alternative approach that uses domain knowledge to quantify the risk in the proximity of such states by defining a violation metric.
arXiv Detail & Related papers (2023-02-20T15:24:06Z)
- A Certifiable Security Patch for Object Tracking in Self-Driving Systems via Historical Deviation Modeling [22.753164675538457]
We present the first systematic research on the security of object tracking in self-driving cars.
We prove that the mainstream multi-object tracker (MOT) based on the Kalman Filter (KF) is unsafe even with a multi-sensor fusion mechanism enabled.
We propose a simple yet effective security patch for KF-based MOT, the core of which is an adaptive strategy to balance the focus of KF on observations and predictions.
arXiv Detail & Related papers (2022-07-18T12:30:24Z)
- Network-level Safety Metrics for Overall Traffic Safety Assessment: A Case Study [7.8191100993403495]
This paper defines a new set of network-level safety metrics for the overall safety assessment of traffic flow by processing imagery taken by roadside infrastructure sensors.
An integrative analysis of the safety metrics and crash data reveals the insightful temporal and spatial correlation between the representative network-level safety metrics and the crash frequency.
arXiv Detail & Related papers (2022-01-27T19:07:08Z)
- Towards a Safety Case for Hardware Fault Tolerance in Convolutional Neural Networks Using Activation Range Supervision [1.7968112116887602]
Convolutional neural networks (CNNs) have become an established part of numerous safety-critical computer vision applications.
We build a prototypical safety case for CNNs by demonstrating that range supervision represents a highly reliable fault detector.
We explore novel, non-uniform range restriction methods that effectively suppress the probability of silent data corruptions and uncorrectable errors.
arXiv Detail & Related papers (2021-08-16T11:13:55Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- Out-of-Distribution Detection for Automotive Perception [58.34808836642603]
Neural networks (NNs) are widely used for object classification in autonomous driving.
NNs can fail on input data not well represented by the training dataset, known as out-of-distribution (OOD) data.
This paper presents a method for determining whether inputs are OOD, which does not require OOD data during training and does not increase the computational cost of inference.
arXiv Detail & Related papers (2020-11-03T01:46:35Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows efficient evaluation of safety properties for decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.