Learning to Detect Mobile Objects from LiDAR Scans Without Labels
- URL: http://arxiv.org/abs/2203.15882v1
- Date: Tue, 29 Mar 2022 20:05:24 GMT
- Title: Learning to Detect Mobile Objects from LiDAR Scans Without Labels
- Authors: Yurong You, Katie Z Luo, Cheng Perng Phoo, Wei-Lun Chao, Wen Sun,
Bharath Hariharan, Mark Campbell, Kilian Q. Weinberger
- Abstract summary: Current 3D object detectors for autonomous driving are almost entirely trained on human-annotated data.
This paper proposes an alternative approach based on unlabeled data, which can be collected cheaply and in abundance almost everywhere on earth.
- Score: 60.49869345286879
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current 3D object detectors for autonomous driving are almost entirely
trained on human-annotated data. Although of high quality, the generation of
such data is laborious and costly, restricting them to a few specific locations
and object types. This paper proposes an alternative approach entirely based on
unlabeled data, which can be collected cheaply and in abundance almost
everywhere on earth. Our approach leverages several simple common-sense
heuristics to create an initial set of approximate seed labels. For example,
relevant traffic participants are generally not persistent across multiple
traversals of the same route, do not fly, and are never under ground. We
demonstrate that these seed labels are highly effective for bootstrapping a
surprisingly accurate detector through repeated self-training, without a single
human-annotated label.
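To make the heuristics above concrete, the sketch below shows one way approximate seed labels could be derived from repeated traversals: points that persist across traversals are discarded, points outside a plausible height band are discarded, and the remaining points are clustered into candidate boxes. This is a minimal illustration assuming world-aligned point clouds and ground-relative heights; the function names, thresholds, and the DBSCAN clustering step are assumptions of the sketch, not the authors' exact pipeline.

```python
# Illustrative sketch of common-sense seed labeling from repeated traversals.
# Thresholds, names, and the clustering step are assumptions, not the paper's
# exact implementation.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN


def persistence_score(points, other_traversals, radius=0.3):
    """Fraction of other traversals that contain a point near each query point.

    points:           (N, 3) LiDAR points from the current scan, in a world
                      frame shared across traversals (assumption).
    other_traversals: list of (M_i, 3) point clouds from earlier drives of the
                      same route, in the same world frame.
    """
    counts = np.zeros(len(points))
    for cloud in other_traversals:
        tree = cKDTree(cloud)
        # A point counts as "persistent" in this traversal if any point of the
        # traversal lies within `radius` of it.
        hits = tree.query_ball_point(points, r=radius)
        counts += np.array([len(h) > 0 for h in hits], dtype=float)
    return counts / max(len(other_traversals), 1)


def seed_labels(points, other_traversals,
                max_persistence=0.5, z_min=-0.5, z_max=4.0,
                eps=1.0, min_samples=10):
    """Cluster non-persistent, plausibly grounded points into seed boxes."""
    p = persistence_score(points, other_traversals)
    # Common-sense filters: mobile objects are not persistent across
    # traversals, do not fly, and are never under ground (z is assumed to be
    # height above the local ground plane in this sketch).
    keep = (p < max_persistence) & (points[:, 2] > z_min) & (points[:, 2] < z_max)
    candidates = points[keep]
    if len(candidates) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(candidates)
    boxes = []
    for k in set(labels) - {-1}:  # -1 marks DBSCAN noise points
        cluster = candidates[labels == k]
        # Axis-aligned bounding box as a crude seed label.
        boxes.append((cluster.min(axis=0), cluster.max(axis=0)))
    return boxes
```

The resulting seed boxes would then bootstrap a detector through the repeated self-training described in the abstract: train on the current labels, re-label the unlabeled scans with the detector's confident predictions, and repeat.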
Related papers
- MixSup: Mixed-grained Supervision for Label-efficient LiDAR-based 3D
Object Detection [59.1417156002086]
MixSup is a more practical paradigm that simultaneously utilizes massive, cheap coarse labels and a limited number of accurate labels for Mixed-grained Supervision.
MixSup achieves up to 97.31% of fully supervised performance, using cheap cluster annotations and only 10% box annotations.
arXiv Detail & Related papers (2024-01-29T17:05:19Z)
- Toward unlabeled multi-view 3D pedestrian detection by generalizable AI:
techniques and performance analysis [7.414308466976969]
Generalizable AI can be used to improve multi-view 3D pedestrian detection in unlabeled target scenes.
We investigate two approaches for automatically labeling target data: pseudo-labeling using a supervised detector and automatic labeling using an untrained detector.
We show that the automatic labeling approach based on an untrained detector yields better results than directly using the untrained detector or a detector trained on an existing labeled source dataset.
arXiv Detail & Related papers (2023-08-08T18:24:53Z)
- Robust Assignment of Labels for Active Learning with Sparse and Noisy
Annotations [0.17188280334580192]
Supervised classification algorithms are used to solve a growing number of real-life problems around the globe.
Unfortunately, acquiring good-quality annotations for many tasks is infeasible or too expensive in practice.
We propose two novel annotation unification algorithms that utilize unlabeled parts of the sample space.
arXiv Detail & Related papers (2023-07-25T19:40:41Z)
- Unsupervised Adaptation from Repeated Traversals for Autonomous Driving [54.59577283226982]
Self-driving cars must generalize to the end-user's environment to operate reliably.
One potential solution is to leverage unlabeled data collected from the end-users' environments.
However, there is no reliable signal in the target domain to supervise the adaptation process; the key additional assumption is that the unlabeled data comes from repeated traversals of the same routes.
We show that this simple additional assumption is sufficient to obtain a potent signal that allows us to perform iterative self-training of 3D object detectors on the target domain.
arXiv Detail & Related papers (2023-03-27T15:07:55Z)
- Generalized Few-Shot 3D Object Detection of LiDAR Point Cloud for
Autonomous Driving [91.39625612027386]
We propose a novel task, called generalized few-shot 3D object detection, where we have a large amount of training data for common (base) objects, but only a few samples for rare (novel) classes.
Specifically, we analyze the differences between images and point clouds in depth, and then present a practical principle for the few-shot setting in 3D LiDAR datasets.
To solve this task, we propose an incremental fine-tuning method to extend existing 3D detection models to recognize both common and rare objects.
arXiv Detail & Related papers (2023-02-08T07:11:36Z)
- Learning from Multiple Annotator Noisy Labels via Sample-wise Label
Fusion [17.427778867371153]
In some real-world applications, accurate labeling might not be viable.
Instead, several annotators provide multiple noisy labels for each data sample.
arXiv Detail & Related papers (2022-07-22T20:38:20Z)
- AutoGeoLabel: Automated Label Generation for Geospatial Machine Learning [69.47585818994959]
We evaluate a big data processing pipeline to auto-generate labels for remote sensing data.
We utilize the big geo-data platform IBM PAIRS to dynamically generate such labels in dense urban areas.
arXiv Detail & Related papers (2022-01-31T20:02:22Z)
- Adversarial Knowledge Transfer from Unlabeled Data [62.97253639100014]
We present a novel Adversarial Knowledge Transfer framework for transferring knowledge from internet-scale unlabeled data to improve the performance of a classifier.
An important novel aspect of our method is that the unlabeled source data can be of different classes from those of the labeled target data, and there is no need to define a separate pretext task.
arXiv Detail & Related papers (2020-08-13T08:04:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.