HAL3D: Hierarchical Active Learning for Fine-Grained 3D Part Labeling
- URL: http://arxiv.org/abs/2301.10460v2
- Date: Mon, 1 Apr 2024 17:04:01 GMT
- Title: HAL3D: Hierarchical Active Learning for Fine-Grained 3D Part Labeling
- Authors: Fenggen Yu, Yiming Qian, Francisca Gil-Ureta, Brian Jackson, Eric Bennett, Hao Zhang
- Abstract summary: We present the first active learning tool for fine-grained 3D part labeling.
Our tool iteratively verifies or modifies part labels predicted by a deep neural network.
Our human-in-the-loop approach, coined HAL3D, achieves 100% accuracy (barring human errors) on any test set with pre-defined hierarchical part labels.
- Score: 16.74185233682209
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present the first active learning tool for fine-grained 3D part labeling, a problem which challenges even the most advanced deep learning (DL) methods due to the significant structural variations among the small and intricate parts. For the same reason, the necessary data annotation effort is tremendous, motivating approaches to minimize human involvement. Our labeling tool iteratively verifies or modifies part labels predicted by a deep neural network, with human feedback continually improving the network prediction. To effectively reduce human efforts, we develop two novel features in our tool, hierarchical and symmetry-aware active labeling. Our human-in-the-loop approach, coined HAL3D, achieves 100% accuracy (barring human errors) on any test set with pre-defined hierarchical part labels, with 80% time-saving over manual effort.
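The verify-or-modify loop described in the abstract can be sketched as follows. This is a minimal stand-in, not the authors' implementation: the function names (`predict_labels`, `human_verify`, `active_labeling_loop`) and the dict-based "model" are hypothetical, and the hierarchical and symmetry-aware features are not modeled. It does illustrate why accuracy is 100% barring human error: every label leaving the loop was either human-verified or human-corrected.

```python
# Minimal sketch of a human-in-the-loop active labeling loop
# (hypothetical stand-ins, not the HAL3D implementation).

def predict_labels(model, shapes):
    """Hypothetical network prediction: propose a label for each part."""
    return {s: model.get(s, "unknown") for s in shapes}

def human_verify(predicted, ground_truth):
    """Simulated human feedback: accept correct labels, fix wrong ones."""
    verified, corrections = {}, {}
    for shape, label in predicted.items():
        if label == ground_truth[shape]:
            verified[shape] = label                    # verified as-is
        else:
            corrections[shape] = ground_truth[shape]   # human fixes it
    return verified, corrections

def active_labeling_loop(model, shapes, ground_truth, max_rounds=10):
    """Iterate predict -> verify/modify -> fine-tune until all parts are labeled."""
    labeled = {}
    for _ in range(max_rounds):
        remaining = [s for s in shapes if s not in labeled]
        if not remaining:
            break
        verified, corrections = human_verify(
            predict_labels(model, remaining), ground_truth)
        labeled.update(verified)
        labeled.update(corrections)
        model.update(corrections)  # stand-in for fine-tuning on human feedback
    return labeled
```

The time savings come from the verification path: confirming a correct prediction is much cheaper than labeling from scratch, and the corrections feed back into the network so later rounds need fewer fixes.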
Related papers
- TrajSSL: Trajectory-Enhanced Semi-Supervised 3D Object Detection [59.498894868956306]
Pseudo-labeling approaches to semi-supervised learning adopt a teacher-student framework.
We leverage pre-trained motion-forecasting models to generate object trajectories on pseudo-labeled data.
Our approach improves pseudo-label quality in two distinct manners.
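The teacher-student pseudo-labeling setup the summary refers to can be sketched as follows; all names are hypothetical stand-ins (not TrajSSL's code): a teacher model labels unlabeled data, and only its confident predictions are kept as pseudo-labels for training the student.

```python
# Minimal sketch of confidence-filtered pseudo-labeling
# (hypothetical stand-ins, not TrajSSL's implementation).

def teacher_predict(teacher, x):
    """Hypothetical teacher: returns (label, confidence) for an input."""
    return teacher.get(x, (None, 0.0))

def make_pseudo_labels(teacher, unlabeled, threshold=0.9):
    """Keep only predictions the teacher is confident about."""
    pseudo = {}
    for x in unlabeled:
        label, conf = teacher_predict(teacher, x)
        if conf >= threshold:
            pseudo[x] = label   # treated as ground truth for the student
    return pseudo
```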
arXiv Detail & Related papers (2024-09-17T05:35:00Z)
- Towards Modality-agnostic Label-efficient Segmentation with Entropy-Regularized Distribution Alignment [62.73503467108322]
This topic is widely studied in 3D point cloud segmentation due to the difficulty of annotating point clouds densely.
Until recently, pseudo-labels have been widely employed to facilitate training with limited ground-truth labels.
Existing pseudo-labeling approaches could suffer heavily from the noises and variations in unlabelled data.
We propose a novel learning strategy to regularize the pseudo-labels generated for training, thus effectively narrowing the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2024-08-29T13:31:15Z)
- Hardness-Aware Scene Synthesis for Semi-Supervised 3D Object Detection [59.33188668341604]
3D object detection serves as the fundamental task of autonomous driving perception.
It is costly to obtain high-quality annotations for point cloud data.
We propose a hardness-aware scene synthesis (HASS) method to generate adaptive synthetic scenes.
arXiv Detail & Related papers (2024-05-27T17:59:23Z)
- You Only Need One Thing One Click: Self-Training for Weakly Supervised 3D Scene Understanding [107.06117227661204]
We propose "One Thing One Click", meaning that the annotator only needs to label one point per object.
We iteratively conduct the training and label propagation, facilitated by a graph propagation module.
Our model is also compatible with 3D instance segmentation when equipped with a point-clustering strategy.
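The idea of spreading one clicked label per object can be sketched with a simple graph traversal; the BFS below is a hypothetical stand-in for the paper's learned graph propagation module, assuming a precomputed similarity graph over points.

```python
from collections import deque

# Minimal sketch: one clicked (labeled) point per object, propagated
# over a similarity graph of points (hypothetical stand-in for the
# paper's learned graph propagation module).

def propagate_labels(graph, seeds):
    """Each point reachable from a seed inherits that seed's label."""
    labels = dict(seeds)          # point -> label, seeded by the clicks
    queue = deque(seeds)
    while queue:
        p = queue.popleft()
        for q in graph.get(p, []):
            if q not in labels:   # first label to reach a point wins
                labels[q] = labels[p]
                queue.append(q)
    return labels
```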
arXiv Detail & Related papers (2023-03-26T13:57:00Z)
- Towards Label-Efficient Incremental Learning: A Survey [42.603603392991715]
We study incremental learning, where a learner is required to adapt to an incoming stream of data with a varying distribution.
We identify three subdivisions, namely semi-, few-shot- and self-supervised learning to reduce labeling efforts.
arXiv Detail & Related papers (2023-02-01T10:24:55Z)
- One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation [78.36781565047656]
We propose "One Thing One Click," meaning that the annotator only needs to label one point per object.
We iteratively conduct the training and label propagation, facilitated by a graph propagation module.
Our results are also comparable to those of the fully supervised counterparts.
arXiv Detail & Related papers (2021-04-06T02:27:25Z)
- Self-supervised Human Activity Recognition by Learning to Predict Cross-Dimensional Motion [16.457778420360537]
We propose the use of self-supervised learning for human activity recognition with smartphone accelerometer data.
First, the representations of unlabeled input signals are learned by training a deep convolutional neural network to predict a segment of accelerometer values.
Then, we add a number of fully connected layers to the end of the frozen network and train only these added layers on labeled accelerometer signals to classify human activities.
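The transfer step above (frozen pretrained backbone, trainable added head) can be sketched as follows. Everything here is illustrative, not the paper's architecture: the "backbone" is a fixed projection with ReLU, the added head is a perceptron, and the toy data stands in for accelerometer windows.

```python
# Stdlib-only sketch of fine-tuning an added head on top of a frozen
# backbone (hypothetical stand-ins, not the paper's network).

def frozen_backbone(x, w_frozen):
    """Stand-in for the pretrained net: fixed projection + ReLU.
    w_frozen is never updated during fine-tuning."""
    return [max(0.0, sum(xi * wi for xi, wi in zip(x, col))) for col in w_frozen]

def train_head(feats, labels, dim, lr=0.5, epochs=100):
    """Train only the added classification head (a perceptron here)."""
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            pred = 1 if sum(fi * wi for fi, wi in zip(f, w)) + b > 0 else 0
            if y != pred:
                w = [wi + lr * (y - pred) * fi for wi, fi in zip(w, f)]
                b += lr * (y - pred)
    return w, b

# Toy "accelerometer windows": two easily separable activities.
X = [[0, 0, 0.1], [0.1, 0, 0], [0, 0.1, 0],
     [1, 1, 0.9], [0.9, 1, 1], [1, 0.9, 1]]
labels = [0, 0, 0, 1, 1, 1]
w_frozen = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]  # frozen, untouched
feats = [frozen_backbone(x, w_frozen) for x in X]
w, b = train_head(feats, labels, dim=len(w_frozen))
preds = [1 if sum(fi * wi for fi, wi in zip(f, w)) + b > 0 else 0 for f in feats]
```

Only `train_head` touches trainable parameters; the backbone weights pass through fine-tuning unchanged, which is the point of the frozen-feature transfer setup.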
arXiv Detail & Related papers (2020-10-21T02:14:31Z)
- Personalized Activity Recognition with Deep Triplet Embeddings [2.1320960069210475]
We present an approach to personalized activity recognition based on deep embeddings derived from a fully convolutional neural network.
We evaluate these methods on three publicly available inertial human activity recognition data sets.
arXiv Detail & Related papers (2020-01-15T19:17:02Z)
- SESS: Self-Ensembling Semi-Supervised 3D Object Detection [138.80825169240302]
We propose SESS, a self-ensembling semi-supervised 3D object detection framework. Specifically, we design a thorough perturbation scheme to enhance generalization of the network on unlabeled and new unseen data.
Our SESS achieves competitive performance compared to the state-of-the-art fully-supervised method by using only 50% labeled data.
arXiv Detail & Related papers (2019-12-26T08:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information (including all generated summaries) and is not responsible for any consequences of its use.