Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud
Analysis
- URL: http://arxiv.org/abs/2310.05125v1
- Date: Sun, 8 Oct 2023 11:32:50 GMT
- Title: Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud
Analysis
- Authors: Peipei Li, Xing Cui, Yibo Hu, Man Zhang, Ting Yao, Tao Mei
- Abstract summary: Point cloud analysis faces computational system overhead, limiting its application on mobile or edge devices.
This paper explores feature distillation for lightweight point cloud models.
We propose bidirectional knowledge reconfiguration to distill informative contextual knowledge from the teacher to the student.
- Score: 74.00441177577295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud analysis faces computational system overhead, limiting its
application on mobile or edge devices. Directly employing small models may
result in a significant drop in performance since it is difficult for a small
model to adequately capture local structure and global shape information
simultaneously, which are essential clues for point cloud analysis. This paper
explores feature distillation for lightweight point cloud models. To mitigate
the semantic gap between the lightweight student and the cumbersome teacher, we
propose bidirectional knowledge reconfiguration (BKR) to distill informative
contextual knowledge from the teacher to the student. Specifically, a top-down
knowledge reconfiguration and a bottom-up knowledge reconfiguration are
developed to inherit diverse local structure information and consistent global
shape knowledge from the teacher, respectively. However, due to the farthest
point sampling in most point cloud models, the intermediate features between
teacher and student are misaligned, deteriorating the feature distillation
performance. To eliminate it, we propose a feature mover's distance (FMD) loss
based on optimal transportation, which can measure the distance between
unordered point cloud features effectively. Extensive experiments conducted on
shape classification, part segmentation, and semantic segmentation benchmarks
demonstrate the universality and superiority of our method.
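The feature mover's distance described above can be illustrated with a minimal sketch. The paper's FMD is framed as an optimal-transport distance between unordered sets of intermediate point features; under the simplifying assumptions of uniform weights and equal set sizes, optimal transport reduces to a linear assignment problem, which SciPy can solve exactly. The function name and the squared-Euclidean cost are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def feature_movers_distance(teacher_feats: np.ndarray,
                            student_feats: np.ndarray) -> float:
    """Hypothetical sketch of an FMD-style loss: optimal-transport
    distance between two unordered sets of point features.

    teacher_feats: (N, D) array of teacher intermediate features.
    student_feats: (N, D) array of student intermediate features.
    """
    # Pairwise squared-Euclidean cost between the two feature sets.
    diff = teacher_feats[:, None, :] - student_feats[None, :, :]
    cost = np.sum(diff ** 2, axis=-1)  # shape (N, N)
    # With uniform weights and equal cardinalities, optimal transport
    # collapses to an optimal one-to-one matching (linear assignment).
    rows, cols = linear_sum_assignment(cost)
    return float(cost[rows, cols].mean())
```

Because the matching is optimized rather than index-based, the distance is invariant to the feature ordering, which is the property needed when farthest point sampling permutes or misaligns points between teacher and student.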
Related papers
- Unsupervised Non-Rigid Point Cloud Matching through Large Vision Models [1.3030624795284795]
We propose a learning-based framework for non-rigid point cloud matching.
Key insight is to incorporate semantic features derived from large vision models (LVMs)
Our framework effectively leverages the structural information contained in the semantic features to address ambiguities arising from self-similarities among local geometries.
arXiv Detail & Related papers (2024-08-16T07:02:19Z) - Mitigating Prior Shape Bias in Point Clouds via Differentiable Center Learning [19.986150101882217]
We introduce a novel solution called the Differentiable Center Sampling Network (DCS-Net)
It tackles the information leakage problem by incorporating both global feature reconstruction and local feature reconstruction as non-trivial proxy tasks.
Experimental results demonstrate that our method enhances the expressive capacity of existing point cloud models.
arXiv Detail & Related papers (2024-02-03T08:58:23Z) - PointMoment: Mixed-Moment-based Self-Supervised Representation Learning
for 3D Point Clouds [11.980787751027872]
We propose PointMoment, a novel framework for point cloud self-supervised representation learning.
Our framework does not require any special techniques such as asymmetric network architectures, gradient stopping, etc.
arXiv Detail & Related papers (2023-12-06T08:49:55Z) - Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif)
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z) - Point2Vec for Self-Supervised Representation Learning on Point Clouds [66.53955515020053]
We extend data2vec to the point cloud domain and report encouraging results on several downstream tasks.
We propose point2vec, which unleashes the full potential of data2vec-like pre-training on point clouds.
arXiv Detail & Related papers (2023-03-29T10:08:29Z) - Distillation with Contrast is All You Need for Self-Supervised Point
Cloud Representation Learning [53.90317574898643]
We propose a simple and general framework for self-supervised point cloud representation learning.
Inspired by how human beings understand the world, we utilize knowledge distillation to learn both global shape information and the relationship between global shape and local structures.
Our method achieves the state-of-the-art performance on linear classification and multiple other downstream tasks.
arXiv Detail & Related papers (2022-02-09T02:51:59Z) - DRINet: A Dual-Representation Iterative Learning Network for Point Cloud
Segmentation [45.768040873409824]
DRINet serves as the basic network structure for dual-representation learning.
Our network achieves state-of-the-art results for point cloud classification and segmentation tasks.
For large-scale outdoor scenarios, our method outperforms state-of-the-art methods with a real-time inference speed of 62ms per frame.
arXiv Detail & Related papers (2021-08-09T13:23:54Z) - Point Discriminative Learning for Unsupervised Representation Learning
on 3D Point Clouds [54.31515001741987]
We propose a point discriminative learning method for unsupervised representation learning on 3D point clouds.
We achieve this by imposing a novel point discrimination loss on the middle level and global level point features.
Our method learns powerful representations and achieves new state-of-the-art performance.
arXiv Detail & Related papers (2021-08-04T15:11:48Z) - SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine
Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z) - Airborne LiDAR Point Cloud Classification with Graph Attention
Convolution Neural Network [5.69168146446103]
We present a graph attention convolution neural network (GACNN) that can be directly applied to the classification of unstructured 3D point clouds obtained by airborne LiDAR.
Based on the proposed graph attention convolution module, we further design an end-to-end encoder-decoder network, named GACNN, to capture multiscale features of the point clouds.
Experiments on the ISPRS 3D labeling dataset show that the proposed model achieves a new state-of-the-art performance in terms of average F1 score (71.5%) and a satisfying overall accuracy (83.2%).
arXiv Detail & Related papers (2020-04-20T05:12:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.