Hindsight is 20/20: Leveraging Past Traversals to Aid 3D Perception
- URL: http://arxiv.org/abs/2203.11405v1
- Date: Tue, 22 Mar 2022 00:58:27 GMT
- Title: Hindsight is 20/20: Leveraging Past Traversals to Aid 3D Perception
- Authors: Yurong You, Katie Z Luo, Xiangyu Chen, Junan Chen, Wei-Lun Chao, Wen
Sun, Bharath Hariharan, Mark Campbell, Kilian Q. Weinberger
- Abstract summary: Small, far-away, or highly occluded objects are particularly challenging because there is limited information in the LiDAR point clouds for detecting them.
We propose a novel, end-to-end trainable Hindsight framework to extract contextual information from past data.
We show that this framework is compatible with most modern 3D detection architectures and can substantially improve their average precision on multiple autonomous driving datasets.
- Score: 59.2014692323323
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-driving cars must detect vehicles, pedestrians, and other traffic
participants accurately to operate safely. Small, far-away, or highly occluded
objects are particularly challenging because there is limited information in
the LiDAR point clouds for detecting them. To address this challenge, we
leverage valuable information from the past: in particular, data collected in
past traversals of the same scene. We posit that these past data, which are
typically discarded, provide rich contextual information for disambiguating the
above-mentioned challenging cases. To this end, we propose a novel, end-to-end
trainable Hindsight framework to extract this contextual information from past
traversals and store it in an easy-to-query data structure, which can then be
leveraged to aid future 3D object detection of the same scene. We show that
this framework is compatible with most modern 3D detection architectures and
can substantially improve their average precision on multiple autonomous
driving datasets, most notably by more than 300% on the challenging cases.
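The abstract's core idea, extracting context features from past traversals of a scene and storing them in an easy-to-query structure keyed by location, can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' actual Hindsight implementation: the voxel resolution, feature dimension, and the `TraversalFeatureBank` class and its running-mean aggregation are all assumptions made for clarity.

```python
import numpy as np

VOXEL_SIZE = 1.0  # meters; assumed quantization resolution, not from the paper


def voxel_key(xyz, voxel_size=VOXEL_SIZE):
    """Quantize a 3D point (in a shared world frame) to an integer voxel key."""
    return tuple(np.floor(np.asarray(xyz, dtype=float) / voxel_size).astype(int))


class TraversalFeatureBank:
    """Hypothetical easy-to-query store: voxel key -> aggregated context feature.

    Features from multiple past traversals of the same scene are aggregated
    per voxel (here with a running mean) and looked up at detection time.
    """

    def __init__(self, feature_dim=8):
        self.feature_dim = feature_dim
        self.features = {}  # voxel key -> running-mean feature vector
        self.counts = {}    # voxel key -> number of observations

    def add(self, xyz, feature):
        """Fold one observed feature from a past traversal into the bank."""
        k = voxel_key(xyz)
        n = self.counts.get(k, 0)
        prev = self.features.get(k, np.zeros(self.feature_dim))
        self.features[k] = (prev * n + feature) / (n + 1)
        self.counts[k] = n + 1

    def query(self, xyz):
        """Return the stored context feature, or zeros if the voxel is unseen."""
        return self.features.get(voxel_key(xyz), np.zeros(self.feature_dim))
```

In the paper the stored features are learned end-to-end with the detector; this sketch only shows the spatial-indexing and query pattern such a data structure implies.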
Related papers
- InScope: A New Real-world 3D Infrastructure-side Collaborative Perception Dataset for Open Traffic Scenarios [13.821143687548494]
This paper introduces a new 3D infrastructure-side collaborative perception dataset, abbreviated as InScope.
InScope encapsulates a 20-day capture duration with 303 tracking trajectories and 187,787 3D bounding boxes annotated by experts.
arXiv Detail & Related papers (2024-07-31T13:11:14Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Generalized Few-Shot 3D Object Detection of LiDAR Point Cloud for Autonomous Driving [91.39625612027386]
We propose a novel task, called generalized few-shot 3D object detection, where we have a large amount of training data for common (base) objects, but only a few data for rare (novel) classes.
Specifically, we analyze in-depth differences between images and point clouds, and then present a practical principle for the few-shot setting in the 3D LiDAR dataset.
To solve this task, we propose an incremental fine-tuning method to extend existing 3D detection models to recognize both common and rare objects.
arXiv Detail & Related papers (2023-02-08T07:11:36Z)
- Ithaca365: Dataset and Driving Perception under Repeated and Challenging Weather Conditions [0.0]
We present a new dataset to enable robust autonomous driving via a novel data collection process.
The dataset includes images and point clouds from cameras and LiDAR sensors, along with high-precision GPS/INS.
We demonstrate the uniqueness of this dataset by analyzing the performance of baselines in amodal segmentation of road and objects.
arXiv Detail & Related papers (2022-08-01T22:55:32Z)
- Weakly Supervised Training of Monocular 3D Object Detectors Using Wide Baseline Multi-view Traffic Camera Data [19.63193201107591]
7DoF prediction of vehicles at an intersection is an important task for assessing potential conflicts between road users.
We develop an approach using a weakly supervised method of fine-tuning 3D object detectors for traffic observation cameras.
Our method achieves vehicle 7DoF pose prediction accuracy on our dataset comparable to the top performing monocular 3D object detectors on autonomous vehicle datasets.
arXiv Detail & Related papers (2021-10-21T08:26:48Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- 3D Object Detection for Autonomous Driving: A Survey [14.772968858398043]
3D object detection serves as the core of such perception systems.
Despite existing efforts, 3D object detection on point clouds is still in its infancy.
Recent state-of-the-art detection methods with their pros and cons are presented.
arXiv Detail & Related papers (2021-06-21T03:17:20Z)
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fit to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is a step toward safer self-driving under unseen conditions and with limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.