On the Robustness Evaluation of 3D Obstacle Detection Against Specifications in Autonomous Driving
- URL: http://arxiv.org/abs/2408.13653v2
- Date: Tue, 14 Oct 2025 16:15:29 GMT
- Title: On the Robustness Evaluation of 3D Obstacle Detection Against Specifications in Autonomous Driving
- Authors: Tri Minh Triet Pham, Bo Yang, Jinqiu Yang
- Abstract summary: The robustness of 3D obstacle detection models against specification-based perturbations remains unevaluated. We apply SORBET to evaluate the robustness of five classic 3D obstacle detection models, including one from an industry-grade Level 4 ADS. We find that even very subtle changes in the PCD (i.e., removing two points) may introduce a non-trivial decrease in detection performance.
- Score: 5.013456653983232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous driving systems (ADSs) rely on real-time sensor data, such as cameras and LiDARs, for time-critical decisions using deep neural networks. The accuracy of these decisions is crucial for the widespread adoption of ADSs, as errors can have serious consequences. 3D obstacle detection, in particular, is sensitive to point cloud data (PCD) noise from various sources. However, the robustness of current 3D obstacle detection models against specification-based perturbations remains unevaluated. These perturbations are derived from the specification of LiDAR sensors and previous research on LiDAR's ability to capture objects of different colors and materials. They can manifest as very subtle sensor-based noises or obstacle-specific perturbations. Hence, we propose SORBET, a framework that tests the robustness of 3D obstacle detection models in ADS against such perturbations to the PCD to evaluate their robustness. We applied SORBET to evaluate the robustness of five classic 3D obstacle detection models, including one from an industry-grade Level 4 ADS (Baidu's Apollo). Furthermore, we studied how the deviated obstacle detection results would propagate and negatively impact trajectory prediction. Our evaluation emphasizes the importance of testing 3D obstacle detection against specification-based perturbations. We find that even very subtle changes in the PCD (i.e., removing two points) may introduce a non-trivial decrease in the detection performance. Furthermore, such a negative impact will further propagate to other modules and endanger the safety of the ADS.
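The abstract's headline example, removing just two points from the point cloud data, can be illustrated with a minimal sketch. This is not SORBET's actual interface (the paper is not reproduced here); `remove_points` and its signature are assumptions for illustration, showing only the idea of a subtle, specification-derived point-dropping perturbation applied before re-running a detector:

```python
import random

def remove_points(pcd, k, seed=0):
    """Drop k randomly chosen points from a point cloud.

    pcd: list of (x, y, z, intensity) tuples.
    A toy stand-in for the subtle sensor-based perturbations described
    in the abstract (e.g., removing only two points from an obstacle).
    """
    rng = random.Random(seed)
    drop = set(rng.sample(range(len(pcd)), min(k, len(pcd))))
    return [p for i, p in enumerate(pcd) if i not in drop]

# A 5-point obstacle cloud perturbed by removing 2 points.
cloud = [(0.0, 0.0, 0.0, 0.1), (1.0, 0.0, 0.0, 0.2),
         (0.0, 1.0, 0.0, 0.3), (0.0, 0.0, 1.0, 0.4),
         (1.0, 1.0, 0.0, 0.5)]
perturbed = remove_points(cloud, k=2)
# The detector would then be re-run on `perturbed` and its output
# compared against the detection on the original `cloud`.
```

The fixed seed makes the perturbation reproducible across test runs, which matters when attributing a detection change to a specific input change.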
Related papers
- A Comparative Study of 3D Person Detection: Sensor Modalities and Robustness in Diverse Indoor and Outdoor Environments [5.89179309980335]
This work presents a systematic evaluation of 3D person detection using camera-only, LiDAR-only, and camera-LiDAR fusion. We compare three representative models - BEVDepth (camera), PointPillars (LiDAR), and DAL (camera-LiDAR fusion). Our results show that the fusion-based approach consistently outperforms single-modality models, particularly in challenging scenarios.
arXiv Detail & Related papers (2026-02-05T10:53:35Z) - Testing the Fault-Tolerance of Multi-Sensor Fusion Perception in Autonomous Driving Systems [14.871090150807929]
We build fault models for cameras and LiDAR in AVs and inject them into the MSF perception-based ADS to test its behaviors in test scenarios.
We design a feedback-guided differential fuzzer to discover the safety violations of MSF perception-based ADS caused by the injected sensor faults.
arXiv Detail & Related papers (2025-04-18T02:37:55Z) - Hi-ALPS -- An Experimental Robustness Quantification of Six LiDAR-based Object Detection Systems for Autonomous Driving [49.64902130083662]
3D object detection systems (OD) play a key role in the driving decisions of autonomous vehicles.
Adversarial examples are small, sometimes sophisticated perturbations in the input data that change, i.e. falsify, the prediction of the OD.
We quantify the robustness of six state-of-the-art 3D OD under different types of perturbations.
arXiv Detail & Related papers (2025-03-21T14:17:02Z) - Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z) - CatFree3D: Category-agnostic 3D Object Detection with Diffusion [63.75470913278591]
We introduce a novel pipeline that decouples 3D detection from 2D detection and depth prediction.
We also introduce the Normalised Hungarian Distance (NHD) metric for an accurate evaluation of 3D detection results.
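The CatFree3D paper's exact NHD formula is not given in this summary; what the name implies is a Hungarian (minimum-cost one-to-one) assignment between predicted and ground-truth boxes before distances are normalised. The sketch below shows only that matching step, brute-forced over permutations for tiny inputs; the cost function (centre distance) and all names are assumptions, and real code would use an O(n^3) solver:

```python
import math
from itertools import permutations

def hungarian_match(pred_centers, gt_centers):
    """Minimum-cost one-to-one matching of predicted box centres to
    ground-truth centres, by exhaustive search (toy sizes only).
    Returns (total_cost, assignment) where assignment[g] is the index
    of the prediction matched to ground-truth g."""
    n = len(gt_centers)
    best = None
    for perm in permutations(range(len(pred_centers)), n):
        cost = sum(math.dist(pred_centers[p], gt_centers[g])
                   for g, p in enumerate(perm))
        if best is None or cost < best[0]:
            best = (cost, perm)
    return best

cost, assignment = hungarian_match(
    [(5.0, 0.0, 0.0), (0.1, 0.0, 0.0)],   # predicted centres
    [(0.0, 0.0, 0.0), (5.0, 0.2, 0.0)])   # ground-truth centres
# The matching pairs each ground truth with its nearest free prediction,
# here prediction 1 with GT 0 and prediction 0 with GT 1.
```

A greedy nearest-neighbour pairing would mis-handle cases where one prediction is the nearest to two ground truths; the global assignment avoids that.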
arXiv Detail & Related papers (2024-08-22T22:05:57Z) - Integrity Monitoring of 3D Object Detection in Automated Driving Systems using Raw Activation Patterns and Spatial Filtering [12.384452095533396]
The deep neural network (DNN) models are widely used for object detection in automated driving systems (ADS)
Yet, such models are prone to errors which can have serious safety implications.
Introspection and self-assessment models that aim to detect such errors are therefore of paramount importance for the safe deployment of ADS.
arXiv Detail & Related papers (2024-05-13T10:03:03Z) - Towards Robust 3D Object Detection In Rainy Conditions [10.920640666237833]
We propose a framework for improving the robustness of LiDAR-based 3D object detectors against road spray.
Our approach uses a state-of-the-art adverse weather detection network to filter out spray from the LiDAR point cloud.
In addition to adverse weather filtering, we explore the use of radar targets to further filter false positive detections.
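The radar-based false-positive filtering idea above can be sketched very simply: keep a LiDAR detection only if a radar target confirms it nearby. The threshold, function name, and 2D formulation are assumptions for illustration, not the paper's method:

```python
import math

def filter_with_radar(detections, radar_targets, radius=2.0):
    """Keep only LiDAR detections that have at least one radar target
    within `radius` metres; a toy version of using radar returns to
    suppress spray-induced false positives."""
    return [det for det in detections
            if any(math.dist(det, r) <= radius for r in radar_targets)]

dets = [(10.0, 2.0), (30.0, -1.0)]     # candidate detections (x, y)
radar = [(10.5, 2.2)]                  # confirmed radar targets
confirmed = filter_with_radar(dets, radar)
# Only the first detection has a radar target within 2 m and survives.
```

The trade-off is inherited from radar itself: targets with weak radar cross-sections may lose their LiDAR detection along with the spray clutter.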
arXiv Detail & Related papers (2023-10-02T07:34:15Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Ego-Motion Estimation and Dynamic Motion Separation from 3D Point Clouds for Accumulating Data and Improving 3D Object Detection [0.1474723404975345]
One drawback of high-resolution radar sensors, compared to lidar sensors, is the sparsity of the generated point cloud.
This contribution analyzes limitations of accumulating radar point clouds on the View-of-Delft dataset.
Experiments document an improved object detection performance by applying an ego-motion estimation and dynamic motion correction approach.
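Accumulating sparse radar sweeps requires ego-motion compensation: points from a past frame must be transformed into the current ego frame before stacking. A minimal 2D sketch follows; the convention chosen here (the past frame's pose expressed in the current frame as yaw plus translation) and all names are assumptions, not the paper's formulation:

```python
import math

def past_to_current(points, yaw, tx, ty):
    """Transform 2D points from a past ego frame into the current frame,
    where (yaw, tx, ty) is the pose of the past frame expressed in the
    current frame: p_cur = R(yaw) @ p_past + t."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

# A point 1 m ahead in a frame that is rotated 90 degrees relative to
# the current frame ends up 1 m to the left in the current frame.
moved = past_to_current([(1.0, 0.0)], math.pi / 2, 0.0, 0.0)
```

Dynamic objects violate the static-world assumption behind this transform, which is why the paper pairs accumulation with a dynamic motion separation step.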
arXiv Detail & Related papers (2023-08-29T14:53:16Z) - Context-Aware Change Detection With Semi-Supervised Learning [0.0]
Change detection using earth observation data plays a vital role in quantifying the impact of disasters in affected areas.
Data sources like Sentinel-2 provide rich optical information, but are often hindered by cloud cover.
We develop a model to assess the contribution of pre-disaster Sentinel-2 data in change detection tasks.
arXiv Detail & Related papers (2023-06-15T08:17:49Z) - Survey on LiDAR Perception in Adverse Weather Conditions [6.317642241067219]
The active LiDAR sensor is able to create an accurate 3D representation of a scene.
However, the LiDAR's performance changes under adverse weather conditions such as fog, snow, or rain.
We address topics such as the availability of appropriate data, raw point cloud processing and denoising, robust perception algorithms and sensor fusion to mitigate adverse weather induced shortcomings.
arXiv Detail & Related papers (2023-04-13T07:45:23Z) - On the Importance of Accurate Geometry Data for Dense 3D Vision Tasks [61.74608497496841]
Training on inaccurate or corrupt data induces model bias and hampers generalisation capabilities.
This paper investigates the effect of sensor errors for the dense 3D vision tasks of depth estimation and reconstruction.
arXiv Detail & Related papers (2023-03-26T22:32:44Z) - Uncertainty-Aware AB3DMOT by Variational 3D Object Detection [74.8441634948334]
Uncertainty estimation is an effective tool to provide statistically accurate predictions.
In this paper, we propose a Variational Neural Network-based TANet 3D object detector to generate 3D object detections with uncertainty.
arXiv Detail & Related papers (2023-02-12T14:30:03Z) - A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z) - Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z) - Uncertainty-Aware Voxel based 3D Object Detection and Tracking with von-Mises Loss [13.346392746224117]
Uncertainty helps us tackle the error in the perception system and improve robustness.
We propose a method for improving target tracking performance by adding uncertainty regression to the SECOND detector.
arXiv Detail & Related papers (2020-11-04T21:53:31Z) - Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z) - Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.