Verifiable Obstacle Detection
- URL: http://arxiv.org/abs/2208.14403v1
- Date: Tue, 30 Aug 2022 17:15:35 GMT
- Title: Verifiable Obstacle Detection
- Authors: Ayoosh Bansal, Hunmin Kim, Simon Yu, Bo Li, Naira Hovakimyan, Marco
Caccamo and Lui Sha
- Abstract summary: We present a safety verification of an existing LiDAR based classical obstacle detection algorithm.
We provide a rigorous analysis of the obstacle detection system with empirical results based on real-world sensor data.
- Score: 10.277825331268179
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Perception of obstacles remains a critical safety concern for autonomous
vehicles. Real-world collisions have shown that the autonomy faults leading to
fatal collisions originate from obstacle existence detection. Open source
autonomous driving implementations show a perception pipeline with complex
interdependent Deep Neural Networks. These networks are not fully verifiable,
making them unsuitable for safety-critical tasks.
In this work, we present a safety verification of an existing LiDAR based
classical obstacle detection algorithm. We establish strict bounds on the
capabilities of this obstacle detection algorithm. Given safety standards, such
bounds allow for determining LiDAR sensor properties that would reliably
satisfy the standards. Such analysis has as yet been unattainable for neural
network based perception systems. We provide a rigorous analysis of the
obstacle detection system with empirical results based on real-world sensor
data.
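To make the flavor of such an analysis concrete, the sketch below estimates how many LiDAR returns land on an obstacle of a given size at a given range, and from that the largest range at which a minimum number of returns can still be expected. This is a simplified geometric illustration of how sensor resolution bounds detectability, not the verification procedure of the paper; the function names, the flat sensor-facing target assumption, and the example resolutions are ours.

```python
import math

def expected_returns(width_m: float, height_m: float, range_m: float,
                     h_res_deg: float, v_res_deg: float) -> int:
    """Approximate number of LiDAR returns on a flat, sensor-facing target."""
    # Angular extent subtended by the target at the given range.
    h_extent_deg = math.degrees(2 * math.atan(width_m / (2 * range_m)))
    v_extent_deg = math.degrees(2 * math.atan(height_m / (2 * range_m)))
    # Beams per axis whose spacing fits inside that extent.
    return int(h_extent_deg / h_res_deg) * int(v_extent_deg / v_res_deg)

def max_range_with_min_returns(width_m: float, height_m: float,
                               h_res_deg: float, v_res_deg: float,
                               min_returns: int = 5,
                               step_m: float = 0.5) -> float:
    """Largest range (coarse search) at which the target still yields at
    least `min_returns` points -- one way a safety requirement could be
    translated into a sensor requirement."""
    r = step_m
    while expected_returns(width_m, height_m, r, h_res_deg, v_res_deg) >= min_returns:
        r += step_m
    # Return the last range that satisfied the constraint.
    return r - step_m

if __name__ == "__main__":
    # Hypothetical pedestrian-sized target with 0.2 deg / 0.33 deg resolution.
    print(max_range_with_min_returns(0.5, 1.7, h_res_deg=0.2, v_res_deg=0.33))
```

With a coarser sensor the computed range shrinks; read in the other direction, the same relation suggests the resolution a LiDAR would need to guarantee a given number of returns at the distances a safety standard cares about.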
Related papers
- Automatic AI controller that can drive with confidence: steering vehicle with uncertainty knowledge [3.131134048419781]
This research focuses on the development of a vehicle's lateral control system using a machine learning framework.
We employ a Bayesian Neural Network (BNN), a probabilistic learning model, to quantify the uncertainty of the steering prediction.
By establishing a confidence threshold, we can trigger manual intervention, ensuring that control is taken back from the algorithm when it operates outside safe parameters (a minimal sketch of such uncertainty-gated handover follows the related-papers list below).
arXiv Detail & Related papers (2024-04-24T23:22:37Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- A Safety-Adapted Loss for Pedestrian Detection in Automated Driving [13.676179470606844]
In safety-critical domains, errors by the object detector may endanger pedestrians and other vulnerable road users.
We propose a safety-aware loss variation that leverages the estimated per-pedestrian criticality scores during training.
arXiv Detail & Related papers (2024-02-05T13:16:38Z)
- STARNet: Sensor Trustworthiness and Anomaly Recognition via Approximated Likelihood Regret for Robust Edge Autonomy [0.5310810820034502]
Complex sensors such as LiDAR, RADAR, and event cameras have proliferated in autonomous robotics.
These sensors are vulnerable to diverse failure mechanisms that can interact in intricate ways with their operating environment.
This paper introduces STARNet, a Sensor Trustworthiness and Anomaly Recognition Network designed to detect untrustworthy sensor streams.
arXiv Detail & Related papers (2023-09-20T02:20:11Z)
- Unsupervised Adaptation from Repeated Traversals for Autonomous Driving [54.59577283226982]
Self-driving cars must generalize to the end-user's environment to operate reliably.
One potential solution is to leverage unlabeled data collected from the end-users' environments.
However, there is no reliable signal in the target domain to supervise the adaptation process.
We show that the simple additional assumption that the unlabeled data comes from repeated traversals of the same locations is sufficient to obtain a potent signal that allows us to perform iterative self-training of 3D object detectors on the target domain.
arXiv Detail & Related papers (2023-03-27T15:07:55Z)
- Safe Perception -- A Hierarchical Monitor Approach [0.0]
We propose a novel hierarchical monitoring approach for AI-based perception systems.
It can reliably detect missed detections while maintaining a very low false alarm rate.
arXiv Detail & Related papers (2022-08-01T13:09:24Z)
- A Certifiable Security Patch for Object Tracking in Self-Driving Systems via Historical Deviation Modeling [22.753164675538457]
We present the first systematic research on the security of object tracking in self-driving cars.
We prove that the mainstream Kalman filter (KF) based multi-object tracker (MOT) is unsafe even with a multi-sensor fusion mechanism enabled.
We propose a simple yet effective security patch for KF-based MOT, the core of which is an adaptive strategy that balances the KF's reliance on observations versus predictions.
arXiv Detail & Related papers (2022-07-18T12:30:24Z)
- Differentiable Control Barrier Functions for Vision-based End-to-End Autonomous Driving [100.57791628642624]
We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
arXiv Detail & Related papers (2022-03-04T16:14:33Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art deep-learning-based 3D object detectors have shown promising accuracy but tend to overfit to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces the resulting domain gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the decisions derived from sensory measurements are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
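As referenced in the BNN-based lateral control entry above, the following is a minimal sketch of uncertainty-gated handover, assuming the predictive distribution is approximated by repeated stochastic forward passes (e.g. Monte Carlo dropout). The model interface, threshold value, and function names are illustrative and not taken from that paper.

```python
import numpy as np

def predict_with_uncertainty(model, observation, n_samples: int = 30):
    """Mean and spread of a stochastic controller's steering output.

    `model(observation)` is assumed to be stochastic (e.g. dropout kept
    active at inference time), so repeated calls approximate the predictive
    distribution of a Bayesian neural network.
    """
    samples = np.array([model(observation) for _ in range(n_samples)])
    return samples.mean(), samples.std()

def control_step(model, observation, std_threshold: float = 0.05):
    """Apply the learned controller only when its confidence is acceptable."""
    steering, std = predict_with_uncertainty(model, observation)
    if std > std_threshold:
        # Uncertainty above the confidence threshold: hand control back to
        # the driver / fallback system instead of steering autonomously.
        return None, "handover"
    return steering, "autonomous"

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def dummy_model(obs: float) -> float:
        # Toy stand-in for a trained BNN: linear map plus noise.
        return 0.1 * obs + rng.normal(0.0, 0.01)

    print(control_step(dummy_model, observation=0.3))
```

The design choice is simply that the spread of the sampled predictions serves as the confidence signal; any other uncertainty estimate could be plugged into the same gate.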