ClickSeg: 3D Instance Segmentation with Click-Level Weak Annotations
- URL: http://arxiv.org/abs/2307.09732v1
- Date: Wed, 19 Jul 2023 02:49:44 GMT
- Title: ClickSeg: 3D Instance Segmentation with Click-Level Weak Annotations
- Authors: Leyao Liu, Tao Kong, Minzhao Zhu, Jiashuo Fan, Lu Fang
- Abstract summary: 3D instance segmentation methods often require fully-annotated dense labels for training.
We present ClickSeg, a novel click-level weakly supervised 3D instance segmentation method.
ClickSeg achieves $\sim$90% of the accuracy of the fully-supervised counterpart.
- Score: 29.231508413247457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D instance segmentation methods often require fully-annotated dense labels
for training, which are costly to obtain. In this paper, we present ClickSeg, a
novel click-level weakly supervised 3D instance segmentation method that
requires merely one annotated point per instance. This problem is very
challenging due to the extremely limited labels and has rarely been addressed
before. We first develop a baseline weakly-supervised training method in which
the model generates pseudo labels for the unlabeled data by itself. To exploit the
properties of the click-level annotation setting, we further propose a new training
framework. Instead of directly using the model's inference procedure, i.e., mean-shift
clustering, to generate the pseudo labels, we propose to use k-means with fixed
initial seeds: the annotated points. New similarity metrics are further
designed for clustering. Experiments on ScanNetV2 and S3DIS datasets show that
the proposed ClickSeg surpasses the previous best weakly supervised instance
segmentation result by a large margin (e.g., +9.4% mAP on ScanNetV2). Using
merely 0.02% of the supervision signals, ClickSeg achieves $\sim$90% of the accuracy
of the fully-supervised counterpart. Meanwhile, it also achieves
state-of-the-art semantic segmentation results among weakly supervised methods
that use the same annotation settings.
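The core pseudo-labeling idea described in the abstract, clustering the unlabeled points with k-means whose cluster count and initial seeds are fixed to the annotated click points, can be sketched as below. This is a minimal illustration rather than the authors' implementation: the NumPy feature array, the plain Euclidean distance (standing in for the paper's custom similarity metrics), and the function and variable names are all assumptions.

```python
import numpy as np

def pseudo_labels_kmeans(features, click_indices, num_iters=10):
    """Generate pseudo instance labels with k-means whose number of clusters
    and initial centroids are both fixed by the annotated click points.

    features:      (N, D) per-point features, e.g. embeddings from the network.
    click_indices: indices of the annotated points, one per instance.
    Returns an (N,) array of pseudo instance ids in [0, len(click_indices)).
    Euclidean distance is a placeholder for the similarity metrics designed
    in the paper.
    """
    centroids = features[click_indices].copy()  # fixed initial seeds: the annotated points
    for _ in range(num_iters):
        # Assignment step: each point goes to its nearest centroid.
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Update step: recompute centroids from the current assignments.
        for k in range(len(click_indices)):
            members = features[labels == k]
            if len(members) > 0:
                centroids[k] = members.mean(axis=0)
    return labels

# Toy usage: 1000 points with 16-D features and 5 annotated clicks (hypothetical indices).
feats = np.random.randn(1000, 16).astype(np.float32)
clicks = np.array([3, 120, 407, 655, 901])
pseudo = pseudo_labels_kmeans(feats, clicks)
```

Fixing the seeds to the annotated points ties each cluster to a labeled instance, which is what lets the clustering output serve directly as per-instance pseudo labels during training.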
Related papers
- Instance Consistency Regularization for Semi-Supervised 3D Instance Segmentation [50.51125319374404]
We propose a novel self-training network InsTeacher3D to explore and exploit pure instance knowledge from unlabeled data.
Experimental results on multiple large-scale datasets show that the InsTeacher3D significantly outperforms prior state-of-the-art semi-supervised approaches.
arXiv Detail & Related papers (2024-06-24T16:35:58Z)
- Weakly Supervised 3D Instance Segmentation without Instance-level Annotations [57.615325809883636]
3D semantic scene understanding tasks have achieved great success with the emergence of deep learning, but often require a huge amount of manually annotated training data.
We propose the first weakly-supervised 3D instance segmentation method that only requires categorical semantic labels as supervision.
By generating pseudo instance labels from categorical semantic labels, our designed approach can also assist existing methods for learning 3D instance segmentation at reduced annotation cost.
arXiv Detail & Related papers (2023-08-03T12:30:52Z)
- You Only Need One Thing One Click: Self-Training for Weakly Supervised 3D Scene Understanding [107.06117227661204]
We propose "One Thing One Click", meaning that the annotator only needs to label one point per object.
We iteratively conduct the training and label propagation, facilitated by a graph propagation module.
Our model is also compatible with 3D instance segmentation when equipped with a point-clustering strategy.
arXiv Detail & Related papers (2023-03-26T13:57:00Z)
- Collaborative Propagation on Multiple Instance Graphs for 3D Instance Segmentation with Single-point Supervision [63.429704654271475]
We propose a novel weakly supervised method RWSeg that only requires labeling one object with one point.
With these sparse weak labels, we introduce a unified framework with two branches to propagate semantic and instance information.
Specifically, we propose a Cross-graph Competing Random Walks (CRW) algorithm that encourages competition among different instance graphs.
arXiv Detail & Related papers (2022-08-10T02:14:39Z)
- One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation [78.36781565047656]
We propose "One Thing One Click," meaning that the annotator only needs to label one point per object.
We iteratively conduct the training and label propagation, facilitated by a graph propagation module.
Our results are also comparable to those of the fully supervised counterparts.
arXiv Detail & Related papers (2021-04-06T02:27:25Z)
- SegGroup: Seg-Level Supervision for 3D Instance and Semantic Segmentation [88.22349093672975]
We design a weakly supervised point cloud segmentation algorithm that only requires clicking on one point per instance to indicate its location for annotation.
With over-segmentation for pre-processing, we extend these location annotations into segments as seg-level labels.
We show that our seg-level supervised method (SegGroup) achieves comparable results with the fully annotated point-level supervised methods.
arXiv Detail & Related papers (2020-12-18T13:23:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.