Sensor Visibility Estimation: Metrics and Methods for Systematic
Performance Evaluation and Improvement
- URL: http://arxiv.org/abs/2211.06308v1
- Date: Fri, 11 Nov 2022 16:17:43 GMT
- Title: Sensor Visibility Estimation: Metrics and Methods for Systematic
Performance Evaluation and Improvement
- Authors: Joachim Börger, Marc Patrick Zapf, Marat Kopytjuk, Xinrun Li, and
Claudius Gläser
- Abstract summary: We introduce metrics and a framework to assess the performance of visibility estimators.
Our metrics are verified with labeled real-world and simulation data from infrastructure radars and cameras.
Applying our metrics, we enhance the radar and camera visibility estimators by modeling the 3D elevation of sensor and objects.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sensor visibility is crucial for safety-critical applications in automotive,
robotics, smart infrastructure and others: In addition to object detection and
occupancy mapping, visibility describes where a sensor can potentially measure
or is blind. This knowledge can enhance functional safety and perception
algorithms or optimize sensor topologies.
Despite its significance, to the best of our knowledge, neither a common
definition of visibility nor performance metrics exist yet. We close this gap
and provide a definition of visibility, derived from a use case review. We
introduce metrics and a framework to assess the performance of visibility
estimators.
Our metrics are verified with labeled real-world and simulation data from
infrastructure radars and cameras: The framework readily identifies false-visible
and false-invisible estimations, which are safety-critical.
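To make these failure modes concrete, here is a minimal sketch of confusion-style rates over a discretized sensor field of view. The function name, array layout, and rate definitions are illustrative assumptions, not the paper's metric definitions.

```python
import numpy as np

def visibility_confusion_rates(estimated, ground_truth):
    """Compare an estimated visibility map against labeled ground truth.

    estimated, ground_truth: boolean arrays over grid cells (True = visible).
    A 'false visible' cell (estimated visible, actually blind) is the
    safety-critical error; a 'false invisible' cell wastes usable coverage.
    """
    est = np.asarray(estimated, dtype=bool)
    gt = np.asarray(ground_truth, dtype=bool)
    return {
        "false_visible_rate": float(np.mean(est & ~gt)),    # blind spot reported as visible
        "false_invisible_rate": float(np.mean(~est & gt)),  # visible area reported as blind
    }
```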
Applying our metrics, we enhance the radar and camera visibility estimators
by modeling the 3D elevation of sensor and objects. This refinement outperforms
the conventional planar 2D approach in trustworthiness and thus safety.
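The gain from elevation-aware reasoning can be illustrated with a simple line-of-sight check. Below is a minimal sketch under simplifying assumptions (point occluders with known top heights, straight rays, a small lateral tolerance); the function and parameters are illustrative, not the paper's model.

```python
import numpy as np

def visible_3d(sensor, target, occluders, lateral_tol=0.5):
    """Elevation-aware line-of-sight check (illustrative sketch).

    sensor, target: (x, y, z) positions; occluders: iterable of (x, y, top_height).
    A planar 2D check would mark the target invisible whenever an occluder
    lies on the sensor-target line; here an occluder blocks the ray only if
    its top actually reaches the ray's height at that range.
    """
    s = np.asarray(sensor, dtype=float)
    t = np.asarray(target, dtype=float)
    d_xy = t[:2] - s[:2]
    seg_len_sq = float(np.dot(d_xy, d_xy))
    for ox, oy, top in occluders:
        u = np.dot([ox - s[0], oy - s[1]], d_xy) / seg_len_sq  # position along the ray
        if not 0.0 < u < 1.0:
            continue  # occluder is not between sensor and target
        closest = s[:2] + u * d_xy
        if np.hypot(ox - closest[0], oy - closest[1]) > lateral_tol:
            continue  # occluder is off the ray
        ray_height = s[2] + u * (t[2] - s[2])  # linear interpolation in elevation
        if top >= ray_height:
            return False  # the occluder's top intersects the ray
    return True
```

For example, visible_3d((0, 0, 5), (20, 0, 1), [(10, 0, 2)]) returns True: the ray from a 5 m infrastructure sensor down to a 1 m target passes at 3 m over the 2 m wall, whereas a purely planar check would report occlusion.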
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
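As background on the technique named here: in evidential classification (Sensoy et al., 2018), the network outputs non-negative per-class evidence that parameterizes a Dirichlet distribution, and low total evidence signals high uncertainty. A generic sketch, not the cited paper's BEV-specific loss:

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits):
    """Dirichlet-based class probabilities and uncertainty from raw outputs.

    logits: (N, K) raw network outputs for N detections and K classes.
    """
    evidence = F.softplus(logits)               # non-negative per-class evidence
    alpha = evidence + 1.0                      # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)  # total evidence + K
    prob = alpha / strength                     # expected class probabilities
    uncertainty = logits.shape[-1] / strength.squeeze(-1)  # high when evidence is low
    return prob, uncertainty
```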
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- A Quality Index Metric and Method for Online Self-Assessment of Autonomous Vehicles Sensory Perception [164.93739293097605]
We propose a novel evaluation metric, named as the detection quality index (DQI), which assesses the performance of camera-based object detection algorithms.
We have developed a superpixel-based attention network (SPA-NET) that utilizes raw image pixels and superpixels as input to predict the proposed DQI evaluation metric.
arXiv Detail & Related papers (2022-03-04T22:16:50Z)
- Exploring Credibility Scoring Metrics of Perception Systems for Autonomous Driving [0.0]
We show that offline metrics can be used to account for real-world corruptions such as poor weather conditions.
This is a clear next step, as it can enable error-free autonomous vehicle perception and safer decision-making in time-critical, safety-critical situations.
arXiv Detail & Related papers (2021-12-22T03:17:14Z)
- DMRVisNet: Deep Multi-head Regression Network for Pixel-wise Visibility Estimation Under Foggy Weather [0.0]
Fog is a common weather phenomenon, occurring frequently in the real world and especially in mountainous areas.
Current methods measure visibility with professional instruments installed at fixed roadside locations.
We propose an innovative end-to-end convolutional neural network framework to estimate visibility.
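To make the named architecture family concrete: a toy multi-head regressor in the same spirit, with one shared encoder feeding a pixel-wise head and a scene-level head. Layer sizes, head choices, and the class name are illustrative assumptions, not the DMRVisNet architecture.

```python
import torch.nn as nn

class TinyVisibilityNet(nn.Module):
    """Toy stand-in for a multi-head visibility regressor (not DMRVisNet itself)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # shared feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.pixel_head = nn.Conv2d(32, 1, 1)    # per-pixel visibility map
        self.scene_head = nn.Sequential(         # single scene-level estimate
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, x):
        feats = self.encoder(x)
        return self.pixel_head(feats), self.scene_head(feats)
```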
arXiv Detail & Related papers (2021-12-08T13:31:07Z)
- Visual Sensor Pose Optimisation Using Rendering-based Visibility Models for Robust Cooperative Perception [4.5144287492490625]
Visual Sensor Networks can be used in a variety of perception applications such as infrastructure support for autonomous driving in complex road segments.
The pose of the sensors in such networks directly determines the coverage of the environment and objects therein.
This paper proposes two novel sensor pose optimisation methods, based on gradient-ascent and Integer Programming techniques.
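As an illustration of the gradient-based variant only: a toy gradient-ascent loop that nudges a sensor pose (x, y, yaw) to maximize a differentiable soft count of targets inside the field of view. The coverage proxy, finite-difference gradients, and all parameters are assumptions for illustration; the paper itself uses rendering-based visibility models.

```python
import numpy as np

def soft_coverage(pose, targets, fov=np.pi / 3, sharpness=8.0):
    """Smooth proxy for the number of targets inside the sensor's field of view."""
    x, y, yaw = pose
    angles = np.arctan2(targets[:, 1] - y, targets[:, 0] - x) - yaw
    angles = (angles + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi]
    # A sigmoid edge instead of a hard FOV cutoff keeps the score differentiable.
    return np.sum(1.0 / (1.0 + np.exp(sharpness * (np.abs(angles) - fov / 2))))

def optimise_pose(pose, targets, lr=1e-2, steps=200, eps=1e-4):
    """Gradient ascent on soft coverage via central finite differences."""
    pose = np.asarray(pose, dtype=float)
    for _ in range(steps):
        grad = np.zeros(3)
        for i in range(3):
            step = np.zeros(3)
            step[i] = eps
            grad[i] = (soft_coverage(pose + step, targets)
                       - soft_coverage(pose - step, targets)) / (2 * eps)
        pose += lr * grad  # move the pose toward higher coverage
    return pose
```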
arXiv Detail & Related papers (2021-06-09T18:02:32Z)
- Unadversarial Examples: Designing Objects for Robust Vision [100.4627585672469]
We develop a framework that exploits the sensitivity of modern machine learning algorithms to input perturbations in order to design "robust objects".
We demonstrate the efficacy of the framework on a wide variety of vision-based tasks ranging from standard benchmarks to (in-simulation) robotics.
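The mechanism here is the reverse of an adversarial attack: instead of perturbing the input to increase the loss of the true class, the perturbation is optimized to decrease it, making the object easier to recognize. A generic sketch assuming a differentiable classifier; the function name and bounds are illustrative.

```python
import torch
import torch.nn.functional as F

def unadversarial_perturbation(model, x, label, steps=100, lr=0.01, eps=0.1):
    """Gradient descent on the input to make the correct class easier to detect."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), label)  # loss of the TRUE class
        opt.zero_grad()
        loss.backward()
        opt.step()                    # descend: the opposite of an attack
        with torch.no_grad():
            delta.clamp_(-eps, eps)   # keep the perturbation bounded
    return (x + delta).detach()
```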
arXiv Detail & Related papers (2020-12-22T18:26:07Z)
- VATLD: A Visual Analytics System to Assess, Understand and Improve Traffic Light Detection [15.36267013724161]
We propose a visual analytics system, VATLD, to assess, understand, and improve the accuracy and robustness of traffic light detectors in autonomous driving applications.
Disentangled representation learning extracts data semantics to augment human cognition with human-friendly visual summarization.
We also demonstrate the effectiveness of various performance improvement strategies with our visual analytics system, VATLD, and illustrate some practical implications for safety-critical applications in autonomous driving.
arXiv Detail & Related papers (2020-09-27T22:39:00Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
For autonomous vehicles, it is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)