Using simulation to quantify the performance of automotive perception
systems
- URL: http://arxiv.org/abs/2303.00983v1
- Date: Thu, 2 Mar 2023 05:28:35 GMT
- Title: Using simulation to quantify the performance of automotive perception
systems
- Authors: Zhenyi Liu, Devesh Shah, Alireza Rahimpour, Devesh Upadhyay, Joyce
Farrell, Brian A Wandell
- Abstract summary: We describe the image system simulation software tools that we use to evaluate the performance of image systems for object (automobile) detection.
We quantified system performance by measuring average precision and we report a trend relating system resolution and object detection performance.
- Score: 2.2320512724449233
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The design and evaluation of complex systems can benefit from a software
simulation - sometimes called a digital twin. The simulation can be used to
characterize system performance or to test its performance under conditions
that are difficult to measure (e.g., nighttime for automotive perception
systems). We describe the image system simulation software tools that we use to
evaluate the performance of image systems for object (automobile) detection. We
describe experiments with 13 different cameras with a variety of optics and
pixel sizes. To measure the impact of camera spatial resolution, we designed a
collection of driving scenes that had cars at many different distances. We
quantified system performance by measuring average precision and we report a
trend relating system resolution and object detection performance. We also
quantified the large performance degradation under nighttime conditions,
compared to daytime, for all cameras and a COCO pre-trained network.
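Average precision (AP), the metric used above, summarizes a detector's precision-recall trade-off over a ranked list of detections. As a rough illustration only, the sketch below computes an uninterpolated AP for a single set of scored detections; the paper presumably uses a standard COCO-style AP (interpolated and averaged over IoU thresholds), so this simplified function is not their exact metric, and the variable names are hypothetical:

```python
def average_precision(scores, correct, n_gt):
    """Uninterpolated average precision over one ranked detection list.

    scores  : confidence score for each detection
    correct : 1 if the detection matches a ground-truth object, else 0
    n_gt    : total number of ground-truth objects (recall denominator)
    """
    # Rank detections by descending confidence.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap = 0.0
    for i in order:
        if correct[i]:
            tp += 1
            # Precision at this rank, weighted by the recall increment 1/n_gt.
            ap += (tp / (tp + fp)) / n_gt
        else:
            fp += 1
    return ap


# Example: 3 detections, 2 ground-truth objects; the second-ranked
# detection is a false positive.
ap = average_precision([0.9, 0.8, 0.7], [1, 0, 1], 2)
```

Because AP collapses the whole precision-recall curve into one number, it gives a single score per camera configuration, which is what makes the resolution-vs-performance trend reported above directly comparable across the 13 cameras.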
Related papers
- Autobiasing Event Cameras [0.932065750652415]
This paper uses the neuromorphic YOLO-based face tracking module of a driver monitoring system as the event-based application under study.
The proposed method uses numerical metrics to continuously monitor the performance of the event-based application in real-time.
The advantage of bias optimization lies in its ability to handle conditions such as flickering or darkness without requiring additional hardware or software.
arXiv Detail & Related papers (2024-11-01T16:41:05Z)
- Anticipating Driving Behavior through Deep Learning-Based Policy Prediction [66.344923925939]
We developed a system that processes integrated visual features derived from video frames captured by a regular camera, along with depth details obtained from a point cloud scanner.
This system is designed to anticipate driving actions, encompassing both vehicle speed and steering angle.
Our evaluations indicate that the predictions achieve notable accuracy in at least half of the test scenarios.
arXiv Detail & Related papers (2023-07-20T17:38:55Z)
- Online Camera-to-ground Calibration for Autonomous Driving [26.357898919134833]
We propose an online monocular camera-to-ground calibration solution that does not utilize any specific targets while driving.
We provide metrics to quantify calibration performance, along with stopping criteria for reporting/broadcasting satisfactory calibration results.
arXiv Detail & Related papers (2023-03-30T04:01:48Z)
- A Feature-based Approach for the Recognition of Image Quality Degradation in Automotive Applications [0.0]
This paper presents a feature-based algorithm to detect certain effects that can degrade image quality in automotive applications.
Experiments with different data sets show that the algorithm can detect soiling adhering to camera lenses and classify different types of image degradation.
arXiv Detail & Related papers (2023-03-13T13:40:09Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automates the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-04-15T12:47:01Z)
- Scalable and Real-time Multi-Camera Vehicle Detection, Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-03-04T22:16:50Z)
- A Quality Index Metric and Method for Online Self-Assessment of Autonomous Vehicles Sensory Perception [164.93739293097605]
We propose a novel evaluation metric, named as the detection quality index (DQI), which assesses the performance of camera-based object detection algorithms.
We have developed a superpixel-based attention network (SPA-NET) that utilizes raw image pixels and superpixels as input to predict the proposed DQI evaluation metric.
arXiv Detail & Related papers (2021-03-03T23:49:02Z)
- Worsening Perception: Real-time Degradation of Autonomous Vehicle Perception Performance for Simulation of Adverse Weather Conditions [47.529411576737644]
This study explores the potential of using a simple, lightweight image augmentation system in an autonomous racing vehicle.
With minimal adjustment, the prototype system can replicate the effects of both water droplets on the camera lens, and fading light conditions.
arXiv Detail & Related papers (2021-03-03T23:49:02Z)
- Testing the Safety of Self-driving Vehicles by Simulating Perception and Prediction [88.0416857308144]
We propose an alternative to sensor simulation, as sensor simulation is expensive and has large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z)
- Event-based Stereo Visual Odometry [42.77238738150496]
We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.
We seek to maximize the temporal consistency of stereo event-based data while using a simple and efficient representation.
arXiv Detail & Related papers (2020-07-30T15:53:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.