PointJEM: Self-supervised Point Cloud Understanding for Reducing Feature
Redundancy via Joint Entropy Maximization
- URL: http://arxiv.org/abs/2312.03339v1
- Date: Wed, 6 Dec 2023 08:21:42 GMT
- Title: PointJEM: Self-supervised Point Cloud Understanding for Reducing Feature
Redundancy via Joint Entropy Maximization
- Authors: Xin Cao, Huan Xia, Xinxin Han, Yifan Wang, Kang Li, and Linzhi Su
- Abstract summary: We propose PointJEM, a self-supervised representation learning method applied to the point cloud field.
To reduce redundant information in the features, PointJEM maximizes the joint entropy between the different parts.
PointJEM achieves competitive performance in downstream tasks such as classification and segmentation.
- Score: 10.53900407467811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most deep learning-based point cloud processing methods are supervised and
require large amounts of labeled data. However, manual labeling of point cloud
data is laborious and time-consuming. Self-supervised representation learning
can address the aforementioned issue by learning robust and generalized
representations from unlabeled datasets. Nevertheless, the embedded features
obtained by representation learning usually contain redundant information, and
most current methods reduce feature redundancy by linear correlation
constraints. In this paper, we propose PointJEM, a self-supervised
representation learning method applied to the point cloud field. PointJEM
comprises an embedding scheme and a loss function based on joint entropy. The
embedding scheme divides the embedding vector into different parts, so that
each part can learn a distinctive feature. To reduce redundant information in the
features, PointJEM maximizes the joint entropy between the different parts,
thereby rendering the learned feature variables pairwise independent. To
validate the effectiveness of our method, we conducted experiments on multiple
datasets. The results demonstrate that our method can significantly reduce
feature redundancy beyond linear correlation. Furthermore, PointJEM achieves
competitive performance in downstream tasks such as classification and
segmentation.
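The joint-entropy idea described in the abstract can be sketched in code. The following is a minimal illustrative implementation, not the paper's actual loss: the part size, the softmax normalization of each part, and the batch-based joint-distribution estimate are all assumptions made for the sketch.

```python
import numpy as np

def joint_entropy_loss(z, num_parts=4):
    """Illustrative joint-entropy objective (assumed form, not PointJEM's exact loss).

    z: (batch, dim) embeddings, dim divisible by num_parts.
    Each part is softmax-normalized into a per-sample categorical distribution;
    the pairwise joint distribution of two parts is estimated over the batch,
    and the loss is the negative sum of pairwise joint entropies, so minimizing
    the loss maximizes joint entropy and pushes parts toward independence.
    """
    b, d = z.shape
    assert d % num_parts == 0, "embedding dim must split evenly into parts"
    parts = z.reshape(b, num_parts, d // num_parts)

    # Softmax within each part -> categorical distribution per sample.
    e = np.exp(parts - parts.max(axis=-1, keepdims=True))
    p = e / e.sum(axis=-1, keepdims=True)          # (batch, num_parts, part_dim)

    loss = 0.0
    for i in range(num_parts):
        for j in range(i + 1, num_parts):
            # Batch-averaged joint distribution of parts i and j.
            joint = np.einsum('bm,bn->mn', p[:, i], p[:, j]) / b
            h = -(joint * np.log(joint + 1e-12)).sum()  # joint entropy H(Z_i, Z_j)
            loss -= h                                   # maximize entropy = minimize -H
    return loss
```

Since joint entropy is non-negative and bounded above by the log of the joint support size, this sketch's loss is bounded in [-K(K-1)/2 · log(m²), 0] for K parts of dimension m.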
Related papers
- Point Cloud Understanding via Attention-Driven Contrastive Learning [64.65145700121442]
Transformer-based models have advanced point cloud understanding by leveraging self-attention mechanisms.
PointACL is an attention-driven contrastive learning framework designed to address these limitations.
Our method employs an attention-driven dynamic masking strategy that guides the model to focus on under-attended regions.
arXiv Detail & Related papers (2024-11-22T05:41:00Z)
- Gradient Boosting Mapping for Dimensionality Reduction and Feature Extraction [2.778647101651566]
A fundamental problem in supervised learning is to find a good set of features or distance measures.
We propose a supervised dimensionality reduction method, where the outputs of weak learners define the embedding.
We show that the embedding coordinates provide better features for the supervised learning task.
arXiv Detail & Related papers (2024-05-14T10:23:57Z)
- PointMoment: Mixed-Moment-based Self-Supervised Representation Learning for 3D Point Clouds [11.980787751027872]
We propose PointMoment, a novel framework for point cloud self-supervised representation learning.
Our framework does not require any special techniques such as asymmetric network architectures, gradient stopping, etc.
arXiv Detail & Related papers (2023-12-06T08:49:55Z)
- Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud Analysis [74.00441177577295]
Point cloud analysis incurs heavy computational overhead, limiting its application on mobile or edge devices.
This paper explores feature distillation for lightweight point cloud models.
We propose bidirectional knowledge reconfiguration to distill informative contextual knowledge from the teacher to the student.
arXiv Detail & Related papers (2023-10-08T11:32:50Z)
- Few-Shot Point Cloud Semantic Segmentation via Contrastive Self-Supervision and Multi-Resolution Attention [6.350163959194903]
We propose a contrastive self-supervision framework for few-shot learning pretraining.
Specifically, we implement a novel contrastive learning approach with a learnable augmentor for a 3D point cloud.
We develop a multi-resolution attention module using both the nearest and farthest points to extract the local and global point information more effectively.
arXiv Detail & Related papers (2023-02-21T07:59:31Z)
- PointSmile: Point Self-supervised Learning via Curriculum Mutual Information [33.74200235365997]
We propose a reconstruction-free self-supervised learning paradigm by maximizing curriculum mutual information (CMI) across replicas of point cloud objects.
PointSmile is designed to imitate human curriculum learning, starting with an easy curriculum and gradually increasing the difficulty of that curriculum.
We demonstrate the effectiveness and robustness of PointSmile in downstream tasks including object classification and segmentation.
arXiv Detail & Related papers (2023-01-30T09:18:54Z)
- Joint Data and Feature Augmentation for Self-Supervised Representation Learning on Point Clouds [4.723757543677507]
We propose a fusion contrastive learning framework to combine data augmentations in Euclidean space and feature augmentations in feature space.
We conduct extensive object classification experiments and object part segmentation experiments to validate the transferability of the proposed framework.
Experimental results demonstrate that the proposed framework is effective to learn the point cloud representation in a self-supervised manner.
arXiv Detail & Related papers (2022-11-02T14:58:03Z)
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
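The random class-elimination step mentioned above can be sketched roughly as follows. All names here are hypothetical and the paper's actual mechanism may differ; this only illustrates the idea of hiding one class's supervision per iteration so the model cannot rely on class co-occurrence cues.

```python
import numpy as np

IGNORE_INDEX = 255  # common "ignore" label in segmentation pipelines (assumed convention)

def drop_random_class(labels, num_classes, rng):
    """Illustrative sketch: pick one class per training iteration and mark its
    pixels as 'ignore' so no loss is computed on them, weakening learned
    dependencies between co-occurring classes."""
    dropped = int(rng.integers(num_classes))   # class to eliminate this iteration
    out = labels.copy()
    out[out == dropped] = IGNORE_INDEX
    return out, dropped
```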
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Unsupervised Representation Learning for 3D Point Cloud Data [66.92077180228634]
We propose a simple yet effective approach for unsupervised point cloud learning.
In particular, we identify a very useful transformation which generates a good contrastive version of an original point cloud.
We conduct experiments on three downstream tasks which are 3D object classification, shape part segmentation and scene segmentation.
arXiv Detail & Related papers (2021-10-13T10:52:45Z)
- SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.