You Never Get a Second Chance To Make a Good First Impression: Seeding
Active Learning for 3D Semantic Segmentation
- URL: http://arxiv.org/abs/2304.11762v2
- Date: Tue, 19 Sep 2023 13:05:05 GMT
- Title: You Never Get a Second Chance To Make a Good First Impression: Seeding
Active Learning for 3D Semantic Segmentation
- Authors: Nermin Samet, Oriane Siméoni, Gilles Puy, Georgy Ponimatkin, Renaud
Marlet, Vincent Lepetit
- Abstract summary: We propose SeedAL, a method to seed active learning for efficient annotation of 3D point clouds for semantic segmentation.
Our experiments demonstrate the effectiveness of our approach compared to random seeding and existing methods.
- Score: 29.54515277318063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose SeedAL, a method to seed active learning for efficient annotation
of 3D point clouds for semantic segmentation. Active Learning (AL) iteratively
selects relevant data fractions to annotate within a given budget, but requires
a first fraction of the dataset (a 'seed') to be already annotated to estimate
the benefit of annotating other data fractions. We first show that the choice
of the seed can significantly affect the performance of many AL methods. We
then propose a method for automatically constructing a seed that will ensure
good performance for AL. Assuming that images of the point clouds are
available, which is common, our method relies on powerful unsupervised image
features to measure the diversity of the point clouds. It selects the point
clouds for the seed by optimizing the diversity under an annotation budget,
which can be done by solving a linear optimization problem. Our experiments
demonstrate the effectiveness of our approach compared to random seeding and
existing methods on both the S3DIS and SemanticKITTI datasets. Code is
available at https://github.com/nerminsamet/seedal.
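For illustration, the budgeted seed-selection step can be written as a small linear program. The sketch below is a simplification under assumed inputs (a scalar diversity score and an annotation cost per scene); the actual SeedAL formulation optimizes a pairwise diversity objective built from unsupervised image features and is available in the repository above.

```python
# Minimal sketch of budget-constrained seed selection as a linear program.
# Assumptions (not from the paper): each candidate scene i has a precomputed
# scalar diversity score s_i and an annotation cost c_i; we maximize the total
# score of selected scenes under the budget via an LP relaxation of the 0/1
# selection problem, then round greedily.
import numpy as np
from scipy.optimize import linprog

def select_seed(scores, costs, budget):
    """Return indices of scenes chosen for the initial AL seed."""
    scores = np.asarray(scores, dtype=float)
    costs = np.asarray(costs, dtype=float)
    n = len(scores)
    # linprog minimizes, so negate the scores to maximize them.
    res = linprog(
        c=-scores,
        A_ub=costs.reshape(1, n),
        b_ub=[budget],
        bounds=[(0.0, 1.0)] * n,
        method="highs",
    )
    # Greedy rounding of the fractional solution, respecting the budget.
    chosen, spent = [], 0.0
    for i in np.argsort(-res.x):
        if res.x[i] > 0 and spent + costs[i] <= budget:
            chosen.append(int(i))
            spent += costs[i]
    return chosen

# Toy usage: 4 scenes, budget of 220 "annotation units".
print(select_seed(scores=[0.9, 0.4, 0.7, 0.2], costs=[120, 80, 100, 60], budget=220))
```

The LP relaxation followed by greedy rounding keeps the sketch dependency-light; an integer solver would give the exact 0/1 selection.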
Related papers
- Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud
Semantic Segmentation via Decoupling Optimization [64.36097398869774]
Semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.
The existing SSL-based methods suffer from severe training bias due to class imbalance and long-tail distributions of the point cloud data.
We introduce a new decoupling optimization framework that disentangles feature representation learning and the classifier in an alternating optimization manner to shift the biased decision boundary effectively.
arXiv Detail & Related papers (2024-01-13T04:16:40Z)
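As a generic illustration of decoupling representation learning from the classifier under class imbalance, the sketch below re-fits only a toy linear classifier on frozen features with inverse-frequency class weights. This is a related, generic recipe (decoupled classifier re-training), not the framework proposed in the paper above.

```python
# Generic illustration of decoupled classifier re-training: the feature
# extractor is frozen, and only the classifier is re-fit with class-balanced
# weights to counter the long-tail bias. The inverse-frequency weighting and
# the toy linear classifier are assumptions for illustration, not the paper's
# actual framework.
import numpy as np

rng = np.random.default_rng(0)
num_classes, dim = 3, 16
# Pretend these are frozen per-point features with an imbalanced label set.
features = rng.normal(size=(1000, dim))
labels = rng.choice(num_classes, size=1000, p=[0.8, 0.15, 0.05])

# Class-balanced weights: inverse frequency, normalized to mean 1.
counts = np.bincount(labels, minlength=num_classes).astype(float)
class_weights = (1.0 / counts) * counts.sum() / num_classes

# Re-fit only the linear classifier with weighted softmax cross-entropy.
W = np.zeros((dim, num_classes))
b = np.zeros(num_classes)
lr = 0.1
for _ in range(200):
    logits = features @ W + b
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    onehot = np.eye(num_classes)[labels]
    sample_w = class_weights[labels][:, None]           # per-sample weight
    grad = (probs - onehot) * sample_w / len(labels)    # dL/dlogits
    W -= lr * features.T @ grad
    b -= lr * grad.sum(axis=0)

print("class-balanced weights:", np.round(class_weights, 2))
```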
- Multi-modality Affinity Inference for Weakly Supervised 3D Semantic Segmentation [47.81638388980828]
We propose a simple yet effective scene-level weakly supervised point cloud segmentation method with a newly introduced multi-modality point affinity inference module.
Our method outperforms the state-of-the-art by 4% to 6% mIoU on the ScanNet and S3DIS benchmarks.
arXiv Detail & Related papers (2023-12-27T14:01:35Z)
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method is highly competitive even with the fully supervised counterpart trained on 100% of the labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding [62.17020485045456]
It is commonly assumed in semi-supervised learning (SSL) that the unlabeled data are drawn from the same distribution as that of the labeled ones.
We propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data are prioritized.
arXiv Detail & Related papers (2022-05-02T16:09:17Z)
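A minimal way to picture the sample-weighting idea from the open-set SSL entry above is to down-weight unlabeled samples that look out-of-distribution before they enter the unsupervised loss. The confidence-based weight below is an illustrative assumption, not the weighting defined in the paper.

```python
# Sketch of weighting unlabeled samples in an SSL loss so that out-of-
# distribution (OOD) samples contribute less. Using max softmax probability
# as the "conducive-ness" weight is an assumption for illustration.
import numpy as np

def weighted_pseudo_label_loss(unlabeled_logits, threshold=0.7):
    """Cross-entropy against pseudo-labels, weighted by prediction confidence."""
    logits = unlabeled_logits - unlabeled_logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    confidence = probs.max(axis=1)          # proxy for in-distribution-ness
    pseudo = probs.argmax(axis=1)
    weights = np.where(confidence >= threshold, confidence, 0.0)
    ce = -np.log(probs[np.arange(len(pseudo)), pseudo] + 1e-12)
    # Weighted mean: low-confidence (possibly OOD) samples are ignored.
    return float((weights * ce).sum() / max(weights.sum(), 1e-12))

# Toy usage with random logits for 5 unlabeled samples and 4 classes.
rng = np.random.default_rng(1)
print(weighted_pseudo_label_loss(rng.normal(size=(5, 4)) * 3.0))
```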
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point cloud upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- Active Learning for Point Cloud Semantic Segmentation via Spatial-Structural Diversity Reasoning [38.756609521163604]
In this paper, we propose a novel active learning-based method to reduce the annotation cost of point cloud semantic segmentation.
Dubbed SSDR-AL, our method groups the original point clouds into superpoints and selects the most informative and representative ones for label acquisition.
To deploy SSDR-AL in a more practical scenario, we design a noise-aware iterative labeling scheme to confront the "noisy annotation" problem.
arXiv Detail & Related papers (2022-02-25T10:06:47Z)
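To give a rough picture of selecting "informative and representative" superpoints as in the SSDR-AL entry above, the sketch below greedily combines predictive entropy with feature-space distance to the already-selected set. Both criteria and their mix are illustrative assumptions, not the paper's spatial-structural diversity reasoning.

```python
# Greedy acquisition sketch: rank superpoints by predictive entropy
# (informativeness) plus distance to the already-selected set in feature
# space (diversity). The entropy/diversity mix is an illustrative assumption.
import numpy as np

def select_superpoints(probs, feats, k, alpha=0.5):
    """probs: (N, C) class probabilities; feats: (N, D) superpoint features."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    selected = []
    for _ in range(k):
        if selected:
            dists = np.linalg.norm(
                feats[:, None, :] - feats[selected][None, :, :], axis=-1
            ).min(axis=1)
        else:
            dists = np.ones(len(feats))
        score = alpha * entropy + (1 - alpha) * dists
        score[selected] = -np.inf       # never pick the same superpoint twice
        selected.append(int(score.argmax()))
    return selected

# Toy usage: 100 superpoints, 13 classes (e.g., S3DIS), 32-d features.
rng = np.random.default_rng(2)
p = rng.dirichlet(np.ones(13), size=100)
f = rng.normal(size=(100, 32))
print(select_superpoints(p, f, k=5))
```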
- Unsupervised Representation Learning for 3D Point Cloud Data [66.92077180228634]
We propose a simple yet effective approach for unsupervised point cloud learning.
In particular, we identify a very useful transformation which generates a good contrastive version of an original point cloud.
We conduct experiments on three downstream tasks which are 3D object classification, shape part segmentation and scene segmentation.
arXiv Detail & Related papers (2021-10-13T10:52:45Z)
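The unsupervised-representation entry above rests on generating a contrastive version of each point cloud and pulling the two views together in feature space. Below is a generic InfoNCE-style sketch using a random rigid transform as the second view and a tiny permutation-invariant encoder; both are assumptions, not the transformation the paper identifies.

```python
# Generic contrastive-learning sketch for point clouds: build a second view
# with a random rotation + jitter, embed both views, and apply an InfoNCE
# loss that matches each cloud to its own transformed version.
import numpy as np

rng = np.random.default_rng(3)

def random_view(cloud):
    """Random z-rotation plus small jitter as a cheap augmentation."""
    theta = rng.uniform(0, 2 * np.pi)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                    [np.sin(theta),  np.cos(theta), 0],
                    [0, 0, 1]])
    return cloud @ rot.T + rng.normal(scale=0.01, size=cloud.shape)

def encode(cloud, weights):
    """Tiny permutation-invariant encoder: shared linear map + max pooling."""
    feat = np.maximum(cloud @ weights, 0.0).max(axis=0)
    return feat / (np.linalg.norm(feat) + 1e-12)

def info_nce(z1, z2, temperature=0.1):
    """Match each embedding in z1 to its counterpart row in z2."""
    logits = (z1 @ z2.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

# Toy batch of 8 clouds with 256 points each.
clouds = [rng.normal(size=(256, 3)) for _ in range(8)]
W = rng.normal(size=(3, 64))
z1 = np.stack([encode(c, W) for c in clouds])
z2 = np.stack([encode(random_view(c), W) for c in clouds])
print("InfoNCE loss:", info_nce(z1, z2))
```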
- Learning Semantic Segmentation of Large-Scale Point Clouds with Random Sampling [52.464516118826765]
We introduce RandLA-Net, an efficient and lightweight neural architecture to infer per-point semantics for large-scale point clouds.
The key to our approach is to use random point sampling instead of more complex point selection approaches.
Our RandLA-Net can process 1 million points in a single pass up to 200x faster than existing approaches.
arXiv Detail & Related papers (2021-07-06T05:08:34Z)
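The efficiency argument in the RandLA-Net entry above comes from replacing learned or farthest point sampling with plain random sampling. The sketch below contrasts the two on a toy cloud; the FPS code is a naive reference implementation for illustration only, and absolute timings will vary by machine.

```python
# Minimal sketch contrasting random point sampling with a naive farthest
# point sampling (FPS) baseline on a toy point cloud.
import time
import numpy as np

def random_sample(points, k, rng):
    return points[rng.choice(len(points), size=k, replace=False)]

def farthest_point_sample(points, k):
    chosen = [0]
    dist = np.full(len(points), np.inf)
    for _ in range(k - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(dist.argmax()))
    return points[chosen]

rng = np.random.default_rng(4)
cloud = rng.normal(size=(20000, 3))   # toy cloud; LiDAR scans are much larger

t0 = time.perf_counter(); random_sample(cloud, 512, rng); t1 = time.perf_counter()
farthest_point_sample(cloud, 512);     t2 = time.perf_counter()
print(f"random: {t1 - t0:.4f}s  fps: {t2 - t1:.4f}s")
```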
This list is automatically generated from the titles and abstracts of the papers on this site.