Multi-to-Single Knowledge Distillation for Point Cloud Semantic
Segmentation
- URL: http://arxiv.org/abs/2304.14800v1
- Date: Fri, 28 Apr 2023 12:17:08 GMT
- Title: Multi-to-Single Knowledge Distillation for Point Cloud Semantic
Segmentation
- Authors: Shoumeng Qiu, Feng Jiang, Haiqiang Zhang, Xiangyang Xue and Jian Pu
- Abstract summary: We propose a novel multi-to-single knowledge distillation framework for the 3D point cloud semantic segmentation task.
Instead of fusing all the points of multi-scans directly, only the instances that belong to the previously defined hard classes are fused.
- Score: 41.02741249858771
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D point cloud semantic segmentation is one of the fundamental tasks for
environmental understanding. Although significant progress has been made in
recent years, the performance of classes with few examples or few points is
still far from satisfactory. In this paper, we propose a novel multi-to-single
knowledge distillation framework for the 3D point cloud semantic segmentation
task to boost the performance of those hard classes. Instead of fusing all the
points of multi-scans directly, only the instances that belong to the
previously defined hard classes are fused. To effectively and sufficiently
distill valuable knowledge from multi-scans, we leverage a multilevel
distillation framework, i.e., feature representation distillation, logit
distillation, and affinity distillation. We further develop a novel
instance-aware affinity distillation algorithm for capturing high-level
structural knowledge to enhance the distillation efficacy for hard classes.
Finally, we conduct experiments on the SemanticKITTI dataset, and the results
on both the validation and test sets demonstrate that our method yields
substantial improvements compared with the baseline method. The code is
available at https://github.com/skyshoumeng/M2SKD.
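The multilevel framework described in the abstract combines three distillation signals: feature representation, logit, and affinity distillation. The following is a minimal NumPy sketch of what such a combined loss could look like; the function names and the toy data are illustrative assumptions, not the authors' implementation (in particular, the instance-aware part of the affinity distillation is omitted here).

```python
import numpy as np

def softmax(x, t=1.0):
    """Temperature-softened softmax along the last axis."""
    z = x / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def logit_distill(student_logits, teacher_logits, t=2.0):
    """KL divergence between softened teacher and student class distributions."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return float(np.mean(np.sum(p * (np.log(p + 1e-9) - np.log(q + 1e-9)), axis=-1)))

def feature_distill(student_feat, teacher_feat):
    """Mean-squared error between per-point feature embeddings."""
    return float(np.mean((student_feat - teacher_feat) ** 2))

def affinity_distill(student_feat, teacher_feat):
    """Match the pairwise cosine-affinity matrices computed over the points."""
    def affinity(f):
        f = f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-9)
        return f @ f.T
    return float(np.mean((affinity(student_feat) - affinity(teacher_feat)) ** 2))

# Toy setup: N points, C classes, D-dimensional features.
rng = np.random.default_rng(0)
N, C, D = 6, 4, 8
s_logits, t_logits = rng.normal(size=(N, C)), rng.normal(size=(N, C))
s_feat, t_feat = rng.normal(size=(N, D)), rng.normal(size=(N, D))

total_loss = (logit_distill(s_logits, t_logits)
              + feature_distill(s_feat, t_feat)
              + affinity_distill(s_feat, t_feat))
```

Each term vanishes when the student matches the teacher exactly, so the total loss drives the student toward the multi-scan teacher's outputs, features, and point-to-point structure at the same time.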
Related papers
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method is highly competitive even with its fully supervised counterpart trained on 100% of the labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- FAKD: Feature Augmented Knowledge Distillation for Semantic Segmentation [17.294737459735675]
We explore data augmentations for knowledge distillation on semantic segmentation.
Inspired by the recent progress on semantic directions on feature-space, we propose to include augmentations in feature space for efficient distillation.
arXiv Detail & Related papers (2022-08-30T10:55:31Z)
- Point-to-Voxel Knowledge Distillation for LiDAR Semantic Segmentation [74.67594286008317]
This article addresses the problem of distilling knowledge from a large teacher model to a slim student network for LiDAR semantic segmentation.
We propose the Point-to-Voxel Knowledge Distillation (PVD), which transfers the hidden knowledge from both point level and voxel level.
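The voxel-level view that PVD distills from can be illustrated with a minimal voxel-pooling sketch: average the per-point logits inside each occupied voxel. The function names here are hypothetical, not the paper's API.

```python
import numpy as np

def voxel_indices(points, voxel_size=0.5):
    """Assign each 3D point to an integer voxel coordinate."""
    return np.floor(points / voxel_size).astype(np.int64)

def pool_to_voxels(points, point_logits, voxel_size=0.5):
    """Average per-point logits inside each occupied voxel.

    Returns voxel-level logits plus, for each point, the index of its
    voxel, so voxel outputs can be scattered back to the points."""
    keys = voxel_indices(points, voxel_size)
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = np.asarray(inv).reshape(-1)  # flatten for older/newer NumPy
    pooled = np.zeros((uniq.shape[0], point_logits.shape[1]))
    counts = np.zeros(uniq.shape[0])
    np.add.at(pooled, inv, point_logits)  # unbuffered scatter-add per voxel
    np.add.at(counts, inv, 1.0)
    return pooled / counts[:, None], inv

# Toy cloud: two points share a voxel, one sits alone.
pts = np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2], [3.0, 3.0, 3.0]])
logits = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
vox_logits, inv = pool_to_voxels(pts, logits, voxel_size=0.5)
```

A point-level distillation loss and a voxel-level one (computed on `vox_logits`) could then be summed, in the spirit of the two-level transfer the summary describes.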
arXiv Detail & Related papers (2022-06-05T05:28:32Z)
- Knowledge Distillation Meets Open-Set Semi-Supervised Learning [69.21139647218456]
We propose a novel method dedicated to distilling representational knowledge semantically from a pretrained teacher to a target student.
At the problem level, this establishes an interesting connection between knowledge distillation and open-set semi-supervised learning (SSL).
Our method significantly outperforms previous state-of-the-art knowledge distillation methods on both coarse object classification and fine face recognition tasks.
arXiv Detail & Related papers (2022-05-13T15:15:27Z)
- Uncertainty-aware Contrastive Distillation for Incremental Semantic Segmentation [46.14545656625703]
Catastrophic forgetting is the tendency of neural networks to fail to preserve knowledge acquired from old tasks when learning new ones.
We propose a novel distillation framework, Uncertainty-aware Contrastive Distillation.
Our results demonstrate the advantage of the proposed distillation technique, which can be used in synergy with previous IL approaches.
arXiv Detail & Related papers (2022-03-26T15:32:12Z)
- Unsupervised Representation Learning for 3D Point Cloud Data [66.92077180228634]
We propose a simple yet effective approach for unsupervised point cloud learning.
In particular, we identify a very useful transformation which generates a good contrastive version of an original point cloud.
We conduct experiments on three downstream tasks which are 3D object classification, shape part segmentation and scene segmentation.
arXiv Detail & Related papers (2021-10-13T10:52:45Z)
- Few-shot 3D Point Cloud Semantic Segmentation [138.80825169240302]
We propose a novel attention-aware multi-prototype transductive few-shot point cloud semantic segmentation method.
Our proposed method shows significant and consistent improvements compared to baselines in different few-shot point cloud semantic segmentation settings.
arXiv Detail & Related papers (2020-06-22T08:05:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.