Evaluating Roadside Perception for Autonomous Vehicles: Insights from
Field Testing
- URL: http://arxiv.org/abs/2401.12392v1
- Date: Mon, 22 Jan 2024 22:47:02 GMT
- Title: Evaluating Roadside Perception for Autonomous Vehicles: Insights from
Field Testing
- Authors: Rusheng Zhang, Depu Meng, Shengyin Shen, Tinghan Wang, Tai Karir,
Michael Maile, Henry X. Liu
- Abstract summary: This paper introduces a comprehensive evaluation methodology specifically designed to assess the performance of roadside perception systems.
Our methodology encompasses measurement techniques, metric selection, and experimental trial design, all grounded in real-world field testing.
The findings of this study are poised to inform the development of industry-standard benchmarks and evaluation methods.
- Score: 7.755003755937953
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Roadside perception systems are increasingly crucial in enhancing traffic
safety and facilitating cooperative driving for autonomous vehicles. Despite
rapid technological advancements, a major challenge persists in this emerging
field: the absence of standardized evaluation methods and benchmarks
for these systems. This limitation hampers the ability to effectively assess
and compare the performance of different systems, thus constraining progress in
this vital field. This paper introduces a comprehensive evaluation methodology
specifically designed to assess the performance of roadside perception systems.
Our methodology encompasses measurement techniques, metric selection, and
experimental trial design, all grounded in real-world field testing to ensure
the practical applicability of our approach.
We applied our methodology in Mcity (https://mcity.umich.edu/),
a controlled testing environment, to evaluate various off-the-shelf perception
systems. This approach allowed for an in-depth comparative analysis of their
performance in realistic scenarios, offering key insights into their respective
strengths and limitations. The findings of this study are poised to inform the
development of industry-standard benchmarks and evaluation methods, thereby
enhancing the effectiveness of roadside perception system development and
deployment for autonomous vehicles. We anticipate that this paper will
stimulate essential discourse on standardizing evaluation methods for roadside
perception systems, thus pushing the frontiers of this technology. Furthermore,
our results offer both academia and industry a comprehensive understanding of
the capabilities of contemporary infrastructure-based perception systems.
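The metric-selection step described above can be illustrated with a minimal sketch. The helper below is a hypothetical illustration, not the paper's actual implementation: it greedily matches roadside detections to ground-truth object positions within a distance gate and reports recall and mean position error, two metrics commonly used for this kind of field evaluation.

```python
import math

def evaluate_frame(ground_truth, detections, match_radius=1.5):
    """Match each ground-truth object to the nearest unused detection
    within match_radius (meters); return (recall, mean position error).
    Points are (x, y) tuples in a shared world frame."""
    matched_errors = []
    used = set()
    for gx, gy in ground_truth:
        best, best_dist = None, match_radius
        for i, (dx, dy) in enumerate(detections):
            if i in used:
                continue
            dist = math.hypot(gx - dx, gy - dy)
            if dist <= best_dist:
                best, best_dist = i, dist
        if best is not None:
            used.add(best)
            matched_errors.append(best_dist)
    recall = len(matched_errors) / len(ground_truth) if ground_truth else 1.0
    mean_err = (sum(matched_errors) / len(matched_errors)
                if matched_errors else float("nan"))
    return recall, mean_err

# Example: one of two ground-truth vehicles is detected within the gate.
recall, err = evaluate_frame([(0.0, 0.0), (10.0, 0.0)],
                             [(0.5, 0.0), (50.0, 50.0)])
```

A real evaluation would additionally handle per-class matching, false positives (precision), and latency, and would use surveyed or RTK-GPS ground truth rather than bare tuples.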
Related papers
- Acceleration method for generating perception failure scenarios based on editing Markov process [0.0]
This study proposes an accelerated generation method for perception failure scenarios tailored to the underground parking garage environment.
The method generates an intelligent testing environment with a high density of perception failure scenarios.
It edits the Markov process within the perception failure scenario data to increase the density of critical information in the training data.
arXiv Detail & Related papers (2024-07-01T05:33:48Z)
- The RoboDrive Challenge: Drive Anytime Anywhere in Any Condition [136.32656319458158]
The 2024 RoboDrive Challenge was crafted to propel the development of driving perception technologies.
This year's challenge consisted of five distinct tracks and attracted 140 registered teams from 93 institutes across 11 countries.
The competition culminated in 15 top-performing solutions.
arXiv Detail & Related papers (2024-05-14T17:59:57Z)
- Machine Learning for Autonomous Vehicle's Trajectory Prediction: A comprehensive survey, Challenges, and Future Research Directions [3.655021726150368]
We have examined over two hundred studies related to trajectory prediction in the context of AVs.
This review conducts a comprehensive evaluation of several deep learning-based techniques.
By identifying challenges in the existing literature and outlining potential research directions, this review significantly contributes to the advancement of knowledge in the domain of AV trajectory prediction.
arXiv Detail & Related papers (2023-07-12T10:20:19Z)
- Better Understanding Differences in Attribution Methods via Systematic Evaluations [57.35035463793008]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We use these evaluation schemes to study strengths and shortcomings of some widely used attribution methods over a wide range of models.
arXiv Detail & Related papers (2023-03-21T14:24:58Z)
- Towards trustworthy multi-modal motion prediction: Holistic evaluation and interpretability of outputs [3.5240925434839054]
We focus on evaluation criteria, robustness, and interpretability of outputs.
We propose an intent prediction layer that can be attached to multi-modal motion prediction models.
arXiv Detail & Related papers (2022-10-28T14:14:22Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Towards Better Understanding Attribution Methods [77.1487219861185]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We also propose a post-processing smoothing step that significantly improves the performance of some attribution methods.
arXiv Detail & Related papers (2022-05-20T20:50:17Z)
- On Robustness of Lane Detection Models to Physical-World Adversarial Attacks in Autonomous Driving [12.412448947321828]
After the 2017 TuSimple Lane Detection Challenge, its evaluation based on accuracy and F1 score has become the de facto standard to measure the performance of lane detection methods.
We conduct the first large-scale empirical study to evaluate the robustness of state-of-the-art lane detection methods under physical-world adversarial attacks in autonomous driving.
arXiv Detail & Related papers (2021-07-06T09:04:47Z)
- PONE: A Novel Automatic Evaluation Metric for Open-Domain Generative Dialogue Systems [48.99561874529323]
There are three kinds of automatic methods to evaluate the open-domain generative dialogue systems.
Due to the lack of systematic comparison, it is not clear which kind of metrics are more effective.
We propose a novel and feasible learning-based metric that can significantly improve the correlation with human judgments.
arXiv Detail & Related papers (2020-04-06T04:36:33Z)
- Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions [48.91284724066349]
Off-policy evaluation in reinforcement learning offers the chance of using observational data to improve future outcomes in domains such as healthcare and education.
Traditional measures such as confidence intervals may be insufficient due to noise, limited data and confounding.
We develop a method that could serve as a hybrid human-AI system, to enable human experts to analyze the validity of policy evaluation estimates.
arXiv Detail & Related papers (2020-02-10T00:26:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and accepts no responsibility for any consequences of its use.