Joint Learning for Scattered Point Cloud Understanding with Hierarchical Self-Distillation
- URL: http://arxiv.org/abs/2312.16902v2
- Date: Sun, 17 Nov 2024 03:12:06 GMT
- Title: Joint Learning for Scattered Point Cloud Understanding with Hierarchical Self-Distillation
- Authors: Kaiyue Zhou, Ming Dong, Peiyuan Zhi, Shengjin Wang
- Abstract summary: We propose an end-to-end architecture that compensates for and identifies partial point clouds on the fly.
Hierarchical self-distillation (HSD) can be applied to arbitrary hierarchy-based point cloud methods.
- Score: 34.26170741722835
- Abstract: Numerous point-cloud understanding techniques focus on whole entities and have succeeded in obtaining satisfactory results, but tolerate only limited sparsity. However, these methods are generally sensitive to incomplete point clouds that are scanned with flaws or large gaps. To address this issue, in this paper, we propose an end-to-end architecture that compensates for and identifies partial point clouds on the fly. First, we propose a cascaded solution that integrates both the upstream and downstream networks simultaneously, allowing the task-oriented downstream network to identify the points generated by the completion-oriented upstream network. These two streams complement each other, resulting in improved performance for both completion and downstream-dependent tasks. Second, to explicitly understand the pattern of the predicted points, we introduce hierarchical self-distillation (HSD), which can be applied to arbitrary hierarchy-based point cloud methods. HSD ensures that the deepest classifier, with a larger perceptual field and longer code length, provides additional regularization to the intermediate classifiers rather than simply aggregating multi-scale features, thereby maximizing the mutual information between teacher and students. We show the advantage of the self-distillation process in the hyperspaces based on the information bottleneck principle. On the classification task, our proposed method performs competitively on the synthetic dataset and achieves superior results on the challenging real-world benchmark compared to state-of-the-art models. Additional experiments also demonstrate the superior performance and generality of our framework on the part segmentation task.
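To make the HSD idea concrete, here is a minimal PyTorch sketch based only on the abstract (not the authors' released code): each hierarchy level carries its own classification head, every head is trained with cross-entropy, and the deepest head acts as the teacher whose softened predictions regularize the shallower heads through a KL term. The function name, the temperature, and the weighting factor `alpha` are assumptions.

```python
import torch
import torch.nn.functional as F

def hsd_loss(logits_per_level, labels, temperature=2.0, alpha=0.5):
    """Hierarchical self-distillation loss (illustrative sketch).

    logits_per_level: list of [B, C] tensors, ordered shallow -> deep.
    The deepest head (largest perceptual field) is the teacher; every
    intermediate head is a student regularized toward it.
    """
    teacher_logits = logits_per_level[-1]
    # Soften and detach the teacher so gradients do not flow into it.
    soft_targets = F.softmax(teacher_logits.detach() / temperature, dim=-1)

    # Supervised cross-entropy on every head, including the teacher.
    ce = sum(F.cross_entropy(l, labels) for l in logits_per_level)

    # KL term pulling each intermediate head toward the teacher's targets.
    kd = sum(
        F.kl_div(F.log_softmax(l / temperature, dim=-1),
                 soft_targets, reduction="batchmean") * temperature ** 2
        for l in logits_per_level[:-1]
    )
    return ce + alpha * kd

# Toy usage: three hierarchy levels, 4 samples, 10 classes.
logits = [torch.randn(4, 10, requires_grad=True) for _ in range(3)]
labels = torch.randint(0, 10, (4,))
loss = hsd_loss(logits, labels)
loss.backward()
```

Detaching the teacher logits keeps the regularization one-directional: the deepest classifier is shaped by the label signal alone, while the intermediate ones receive the extra distillation signal.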
Related papers
- Global Attention-Guided Dual-Domain Point Cloud Feature Learning for Classification and Segmentation [21.421806351869552]
We propose a Global Attention-guided Dual-domain Feature Learning network (GAD) for point cloud classification and segmentation.
We first devise the Contextual Position-enhanced Transformer (CPT) module, which is armed with an improved global attention mechanism.
Then, the Dual-domain K-nearest neighbor Feature Fusion (DKFF) module is cascaded to conduct effective feature aggregation; a generic sketch of global attention over point features follows.
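The abstract does not detail CPT's attention design; below is a plain scaled dot-product global attention layer over per-point features, included only to ground the term. All module and parameter names are illustrative, not GAD's actual implementation.

```python
import torch
import torch.nn as nn

class GlobalPointAttention(nn.Module):
    """Generic global self-attention over per-point features."""
    def __init__(self, dim=64):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.scale = dim ** -0.5

    def forward(self, x):                      # x: [B, N, C] point features
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Every point attends to every other point: [B, N, N] weights.
        attn = (q @ k.transpose(1, 2) * self.scale).softmax(dim=-1)
        return x + attn @ v                    # residual connection

# Toy usage: 2 clouds, 512 points, 64-dim features.
feats = torch.randn(2, 512, 64)
out = GlobalPointAttention()(feats)            # [2, 512, 64]
```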
arXiv Detail & Related papers (2024-07-12T05:19:19Z)
- PointJEM: Self-supervised Point Cloud Understanding for Reducing Feature Redundancy via Joint Entropy Maximization [10.53900407467811]
We propose PointJEM, a self-supervised representation learning method for point clouds.
To reduce redundant information in the learned features, PointJEM maximizes the joint entropy between different parts of the feature vector, as sketched below.
PointJEM achieves competitive performance in downstream tasks such as classification and segmentation.
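The summary does not spell out how the joint entropy is computed; the sketch below is one plausible, differentiable reading: split each embedding into parts, treat each softmaxed part as a discrete variable, and maximize the batch-estimated joint entropy of every pair of parts. The pairwise approximation and all names are assumptions, not PointJEM's actual formulation.

```python
import torch

def pairwise_joint_entropy_loss(features, num_parts=4, eps=1e-8):
    """Sketch of a joint-entropy-maximization regularizer.

    Splits each feature vector into `num_parts` chunks, treats each chunk
    (after softmax) as a distribution over discrete states, and maximizes
    the joint entropy of every pair of chunks, estimated over the batch.
    Returns a value to *minimize* (negative entropy).
    """
    B, D = features.shape
    parts = features.view(B, num_parts, D // num_parts).softmax(dim=-1)
    loss = features.new_zeros(())
    for i in range(num_parts):
        for j in range(i + 1, num_parts):
            # Batch-averaged joint distribution of parts i and j: [S, S].
            joint = torch.einsum('bs,bt->st', parts[:, i], parts[:, j]) / B
            entropy = -(joint * (joint + eps).log()).sum()
            loss = loss - entropy  # maximizing entropy reduces redundancy
    return loss

# Toy usage: batch of 8 embeddings, 32 dims split into 4 parts of 8 states.
feats = torch.randn(8, 32, requires_grad=True)
loss = pairwise_joint_entropy_loss(feats)
loss.backward()
```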
arXiv Detail & Related papers (2023-12-06T08:21:42Z)
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif).
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z)
- Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud Analysis [74.00441177577295]
Point cloud analysis incurs substantial computational overhead, limiting its application on mobile or edge devices.
This paper explores feature distillation for lightweight point cloud models.
We propose bidirectional knowledge reconfiguration to distill informative contextual knowledge from the teacher to the student.
arXiv Detail & Related papers (2023-10-08T11:32:50Z)
- A Deep Dive into Deep Cluster [0.2578242050187029]
DeepCluster is a simple and scalable method for unsupervised pretraining of visual representations.
We show that DeepCluster's convergence and performance depend on the interplay between the quality of the randomly initialized convolutional filters and the selected number of clusters.
arXiv Detail & Related papers (2022-07-24T22:55:09Z)
- Point-to-Voxel Knowledge Distillation for LiDAR Semantic Segmentation [74.67594286008317]
This article addresses the problem of distilling knowledge from a large teacher model to a slim student network for LiDAR semantic segmentation.
We propose Point-to-Voxel Knowledge Distillation (PVD), which transfers hidden knowledge at both the point level and the voxel level (see the sketch below).
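As a rough illustration of the point- and voxel-level idea (PVD's full method also distills structural affinities, which this sketch omits), per-point class distributions are matched directly, and a second term matches distributions after pooling points into voxels. The voxel size, temperature, and helper names are assumptions.

```python
import torch
import torch.nn.functional as F

def point_voxel_kd_loss(student_logits, teacher_logits, coords,
                        voxel_size=0.2, temperature=1.0):
    """Sketch of point- and voxel-level distillation (illustrative only).

    student_logits, teacher_logits: [N, C] per-point class logits.
    coords: [N, 3] point coordinates used to group points into voxels.
    """
    t = temperature
    teacher_p = F.softmax(teacher_logits.detach() / t, dim=-1)

    # Point-level term: match every point's class distribution.
    point_kd = F.kl_div(F.log_softmax(student_logits / t, dim=-1),
                        teacher_p, reduction="batchmean")

    # Voxel-level term: average logits of points sharing a voxel, then match.
    voxel_idx = torch.unique(
        (coords / voxel_size).floor().long(), dim=0, return_inverse=True)[1]
    num_voxels = int(voxel_idx.max()) + 1

    def voxel_mean(x):
        out = x.new_zeros(num_voxels, x.shape[1])
        out.index_add_(0, voxel_idx, x)
        counts = x.new_zeros(num_voxels).index_add_(
            0, voxel_idx, x.new_ones(x.shape[0]))
        return out / counts.clamp(min=1).unsqueeze(1)

    voxel_kd = F.kl_div(
        F.log_softmax(voxel_mean(student_logits) / t, dim=-1),
        F.softmax(voxel_mean(teacher_logits.detach()) / t, dim=-1),
        reduction="batchmean")
    return point_kd + voxel_kd

# Toy usage: 1000 points, 19 classes (a typical LiDAR segmentation setting).
coords = torch.rand(1000, 3) * 10
s = torch.randn(1000, 19, requires_grad=True)
tch = torch.randn(1000, 19)
loss = point_voxel_kd_loss(s, tch, coords)
loss.backward()
```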
arXiv Detail & Related papers (2022-06-05T05:28:32Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that simultaneously achieves self-supervised and magnification-flexible point cloud upsampling.
Experimental results demonstrate that our self-supervised scheme achieves competitive or even better performance than supervised state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- Continual Coarse-to-Fine Domain Adaptation in Semantic Segmentation [22.366638308792734]
Deep neural networks are typically trained in a single shot for a specific task and data distribution.
In real-world settings, however, both the task and the domain of application can change.
We introduce the novel task of coarse-to-fine learning of semantic segmentation architectures in the presence of domain shift.
arXiv Detail & Related papers (2022-01-18T13:31:19Z)
- SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z)
- Cascaded Refinement Network for Point Cloud Completion with Self-supervision [74.80746431691938]
We introduce a two-branch network for shape completion.
The first branch is a cascaded shape-completion sub-network that synthesizes complete objects.
The second branch is an auto-encoder that reconstructs the original partial input; a minimal skeleton of this two-branch design is sketched below.
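A minimal skeleton of such a two-branch design, assuming a PointNet-style shared encoder and fully connected decoders (the paper's actual architecture is cascaded and more elaborate, so every layer choice here is an assumption):

```python
import torch
import torch.nn as nn

class TwoBranchCompletion(nn.Module):
    """Illustrative two-branch completion skeleton: a shared encoder feeds
    (1) a completion head that synthesizes a full shape and (2) an
    auto-encoder head that reconstructs the partial input as a
    self-supervised signal."""
    def __init__(self, latent_dim=256, n_complete=2048, n_partial=1024):
        super().__init__()
        self.encoder = nn.Sequential(  # per-point MLP followed by max-pool
            nn.Conv1d(3, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1),
        )
        self.complete_head = nn.Linear(latent_dim, n_complete * 3)
        self.reconstruct_head = nn.Linear(latent_dim, n_partial * 3)

    def forward(self, partial):               # partial: [B, N, 3]
        feat = self.encoder(partial.transpose(1, 2)).max(dim=-1).values
        complete = self.complete_head(feat).view(len(partial), -1, 3)
        recon = self.reconstruct_head(feat).view(len(partial), -1, 3)
        return complete, recon

# Toy usage: the reconstruction branch supervises itself on the input.
net = TwoBranchCompletion()
partial = torch.rand(2, 1024, 3)
complete, recon = net(partial)
self_loss = ((recon - partial) ** 2).mean()  # stand-in for a set-level loss
self_loss.backward()
```

In practice a set-level loss such as Chamfer distance would replace the MSE stand-in used here, since point clouds are unordered.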
arXiv Detail & Related papers (2020-10-17T04:56:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.