FLAVA: Find, Localize, Adjust and Verify to Annotate LiDAR-Based Point
Clouds
- URL: http://arxiv.org/abs/2011.10174v1
- Date: Fri, 20 Nov 2020 02:22:36 GMT
- Title: FLAVA: Find, Localize, Adjust and Verify to Annotate LiDAR-Based Point
Clouds
- Authors: Tai Wang, Conghui He, Zhe Wang, Jianping Shi, Dahua Lin
- Abstract summary: We propose FLAVA, a systematic approach to minimizing human interaction in the annotation process.
Specifically, we divide the annotation pipeline into four parts: find, localize, adjust and verify.
Our system also greatly reduces the amount of interaction by introducing a light-weight yet effective mechanism to propagate the results.
- Score: 93.3595555830426
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent years have witnessed the rapid progress of perception algorithms on
top of LiDAR, a widely adopted sensor for autonomous driving systems. These
LiDAR-based solutions are typically data hungry, requiring a large amount of
data to be labeled for training and evaluation. However, annotating this kind
of data is very challenging due to the sparsity and irregularity of point
clouds and the more complex interactions involved in this procedure. To tackle this
problem, we propose FLAVA, a systematic approach to minimizing human
interaction in the annotation process. Specifically, we divide the annotation
pipeline into four parts: find, localize, adjust and verify. In addition, we
carefully design the UI for different stages of the annotation procedure, thus
keeping the annotators focused on the aspects that are most important to each
stage. Furthermore, our system also greatly reduces the amount of interaction
by introducing a light-weight yet effective mechanism to propagate the
annotation results. Experimental results show that our method can remarkably
accelerate the procedure and improve the annotation quality.
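The four-stage pipeline and the propagation mechanism described in the abstract can be sketched roughly as follows. This is an illustrative sketch only: all names, data shapes, and stage logic are assumptions, since the paper describes an interactive annotation tool rather than this code.

```python
# Sketch of a FLAVA-style loop: find -> localize -> adjust -> verify,
# with verified boxes propagated to the next frame so later frames need
# only a light "adjust + verify" pass. Shapes and names are illustrative.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Box:
    x: float
    y: float
    z: float
    l: float
    w: float
    h: float
    verified: bool = False

def find(frame_points):
    # Stage 1: flag rough object locations (here: a trivial height filter).
    return [(px, py) for px, py, pz in frame_points if pz > 0.0]

def localize(seed):
    # Stage 2: fit an initial box around each flagged location.
    sx, sy = seed
    return Box(sx, sy, 0.5, 4.0, 2.0, 1.5)

def adjust(box, dx=0.0, dy=0.0):
    # Stage 3: the annotator nudges the box; only small deltas are entered.
    return replace(box, x=box.x + dx, y=box.y + dy)

def verify(box):
    # Stage 4: the annotator confirms the final box.
    return replace(box, verified=True)

def annotate_sequence(frames):
    # Propagation: verified boxes from frame t seed frame t+1, reducing
    # the amount of interaction per frame.
    annotations, carried = [], []
    for points in frames:
        boxes = carried or [localize(s) for s in find(points)]
        boxes = [verify(adjust(b)) for b in boxes]
        annotations.append(boxes)
        carried = boxes  # propagate to the next frame
    return annotations
```

In this sketch, the per-stage split mirrors the abstract's claim that the UI keeps annotators focused on one concern at a time, and the `carried` list stands in for the light-weight result-propagation mechanism.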
Related papers
- Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- Semi-supervised Open-World Object Detection [74.95267079505145]
We introduce a more realistic formulation, named semi-supervised open-world detection (SS-OWOD).
We demonstrate that the performance of the state-of-the-art OWOD detector dramatically deteriorates in the proposed SS-OWOD setting.
Our experiments on 4 datasets including MS COCO, PASCAL, Objects365 and DOTA demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-02-25T07:12:51Z)
- PointJEM: Self-supervised Point Cloud Understanding for Reducing Feature Redundancy via Joint Entropy Maximization [10.53900407467811]
We propose PointJEM, a self-supervised representation learning method applied to the point cloud field.
To reduce redundant information in the features, PointJEM maximizes the joint entropy between the different parts.
PointJEM achieves competitive performance in downstream tasks such as classification and segmentation.
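The joint-entropy idea above can be made concrete with a small sketch: discretize two feature "parts" and estimate their joint entropy, the quantity a PointJEM-style objective would maximize to reduce redundancy between parts. The function name and binning scheme are assumptions, not the paper's code.

```python
# Estimate H(A, B) = -sum p(a, b) log p(a, b) from paired discrete samples.
# Independent parts yield higher joint entropy than redundant (identical) ones.
import math
from collections import Counter

def joint_entropy(part_a, part_b):
    pairs = Counter(zip(part_a, part_b))
    total = sum(pairs.values())
    return -sum((c / total) * math.log(c / total) for c in pairs.values())
```

For example, two identical binary parts give H = ln 2, while two independent binary parts give H = ln 4, so maximizing joint entropy pushes the parts toward carrying non-redundant information.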
arXiv Detail & Related papers (2023-12-06T08:21:42Z)
- Refining the ONCE Benchmark with Hyperparameter Tuning [45.55545585587993]
This work focuses on the evaluation of semi-supervised learning approaches for point cloud data.
Data annotation is of paramount importance in the context of LiDAR applications.
We show that improvements from previous semi-supervised methods may not be as profound as previously thought.
arXiv Detail & Related papers (2023-11-10T13:39:07Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- An Efficient Semi-Automated Scheme for Infrastructure LiDAR Annotation [15.523875367380196]
We present an efficient semi-automated annotation tool that automatically annotates LiDAR sequences with tracking algorithms.
Our tool seamlessly integrates multi-object tracking (MOT), single-object tracking (SOT) and suitable trajectory post-processing techniques.
arXiv Detail & Related papers (2023-01-25T17:42:15Z)
- Learning Moving-Object Tracking with FMCW LiDAR [53.05551269151209]
We propose a learning-based moving-object tracking method utilizing our newly developed LiDAR sensor, Frequency Modulated Continuous Wave (FMCW) LiDAR.
Given the labels, we propose a contrastive learning framework, which pulls together the features from the same instance in embedding space and pushes apart the features from different instances to improve the tracking quality.
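The pull/push objective described in this summary can be sketched as an InfoNCE-style contrastive loss: same-instance features are pulled together, features from different instances pushed apart. The exact formulation in the paper may differ; the names and temperature value here are assumptions.

```python
# Contrastive loss sketch: high similarity to the positive (same instance)
# lowers the loss; high similarity to negatives (other instances) raises it.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))
```

When the anchor and positive are aligned and the negatives are dissimilar, the loss is near zero; an embedding that confuses instances pays a large penalty, which is the sense in which this objective improves tracking-style instance discrimination.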
arXiv Detail & Related papers (2022-03-02T09:11:36Z)
- Handling Missing Annotations in Supervised Learning Data [0.0]
Activity of Daily Living (ADL) recognition is an example of systems that exploit very large raw sensor data readings.
The generated datasets are so large that it is practically impossible for a human annotator to assign a definite label to every instance.
In this work, we propose and investigate three different paradigms to handle these gaps.
arXiv Detail & Related papers (2020-02-17T18:23:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.