D-Net: Learning for Distinctive Point Clouds by Self-Attentive Point
Searching and Learnable Feature Fusion
- URL: http://arxiv.org/abs/2305.05842v1
- Date: Wed, 10 May 2023 02:19:00 GMT
- Title: D-Net: Learning for Distinctive Point Clouds by Self-Attentive Point
Searching and Learnable Feature Fusion
- Authors: Xinhai Liu, Zhizhong Han, Sanghuk Lee, Yan-Pei Cao, Yu-Shen Liu
- Abstract summary: We propose D-Net to learn for distinctive point clouds based on a self-attentive point searching and a learnable feature fusion.
To generate a compact feature representation for each distinctive point set, a stacked self-gated convolution is proposed to extract the distinctive features.
The results show that the learned distinction distribution of a point cloud is highly consistent among objects of the same class and differs from that of objects in other classes.
- Score: 48.57170130169045
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning and selecting important points on a point cloud is crucial for point
cloud understanding in various applications. Most early methods selected important
points on 3D shapes by analyzing the intrinsic geometric properties of each individual
shape, which fails to capture the importance of points that distinguishes a shape
from objects of other classes, i.e., the distinction of points. To address this
problem, we propose D-Net (Distinctive Network) to learn for distinctive point
clouds based on self-attentive point searching and learnable feature fusion.
Specifically, in the self-attentive point searching, we first learn a distinction
score for each point to reveal the distinction distribution of the point cloud.
After ranking the learned distinction scores, we group a point cloud into a highly
distinctive point set and a less distinctive one to enrich the fine-grained point
cloud structure. To generate a compact feature representation for each distinctive
point set, a stacked self-gated convolution is proposed to extract the distinctive
features. Finally, we further introduce a learnable feature fusion mechanism to
aggregate multiple distinctive features into a global point cloud representation in
a channel-wise aggregation manner. The results show that the learned distinction
distribution of a point cloud is highly consistent among objects of the same class
and differs from that of objects in other classes. Extensive experiments on public
datasets, including ModelNet and the ShapeNet part dataset, demonstrate the ability
to learn for distinctive point clouds, which helps to achieve state-of-the-art
performance in several shape understanding applications.
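The abstract outlines three components: per-point distinction scoring, ranking-based splitting into highly and less distinctive point sets, and learnable channel-wise fusion. The PyTorch sketch below illustrates those ideas only; the class names, layer widths, subset size, and the sigmoid-gated fusion form are assumptions made for illustration, not the authors' released implementation, and the stacked self-gated convolution is omitted.

```python
import torch
import torch.nn as nn


class SelfAttentivePointSearch(nn.Module):
    """Score each point, rank the scores, and split the cloud into a highly
    distinctive subset and a less distinctive one (subset size is an assumption)."""

    def __init__(self, feat_dim: int = 64, num_high: int = 256):
        super().__init__()
        self.num_high = num_high
        # Small shared MLP that predicts a per-point distinction score.
        self.score_mlp = nn.Sequential(
            nn.Linear(feat_dim, 32),
            nn.ReLU(inplace=True),
            nn.Linear(32, 1),
        )

    def forward(self, feats: torch.Tensor):
        # feats: (B, N, C) per-point features from any point-wise encoder.
        scores = self.score_mlp(feats).squeeze(-1)             # (B, N)
        order = torch.argsort(scores, dim=1, descending=True)  # rank by distinction
        high_idx = order[:, : self.num_high]                   # most distinctive points
        low_idx = order[:, self.num_high:]                     # remaining points
        high = torch.gather(
            feats, 1, high_idx.unsqueeze(-1).expand(-1, -1, feats.size(-1)))
        low = torch.gather(
            feats, 1, low_idx.unsqueeze(-1).expand(-1, -1, feats.size(-1)))
        return high, low, scores


class ChannelWiseFusion(nn.Module):
    """Learnable channel-wise gate that fuses two set descriptors into one
    global representation; the sigmoid-gated form here is an assumption."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.gate = nn.Parameter(torch.zeros(feat_dim))  # one weight per channel

    def forward(self, high_desc: torch.Tensor, low_desc: torch.Tensor):
        # high_desc, low_desc: (B, C) pooled descriptors of each point set.
        w = torch.sigmoid(self.gate)                 # (C,) gate in (0, 1)
        return w * high_desc + (1.0 - w) * low_desc  # (B, C) global feature


if __name__ == "__main__":
    B, N, C = 2, 1024, 64
    feats = torch.randn(B, N, C)
    searcher = SelfAttentivePointSearch(feat_dim=C, num_high=256)
    fusion = ChannelWiseFusion(feat_dim=C)
    high, low, _ = searcher(feats)
    # Max-pool each set into a descriptor, then fuse channel-wise.
    global_feat = fusion(high.max(dim=1).values, low.max(dim=1).values)
    print(global_feat.shape)  # torch.Size([2, 64])
```

In the paper, the per-set descriptors would come from the stacked self-gated convolution rather than the plain max pooling used in this sketch.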
Related papers
- Cross-Modal Information-Guided Network using Contrastive Learning for Point Cloud Registration [17.420425069785946]
We present a novel Cross-Modal Information-Guided Network (CMIGNet) for point cloud registration.
We first incorporate the projected images from the point clouds and fuse the cross-modal features using the attention mechanism.
We employ two contrastive learning strategies, namely overlapping contrastive learning and cross-modal contrastive learning.
arXiv Detail & Related papers (2023-11-02T12:56:47Z)
- Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud Analysis [74.00441177577295]
Point cloud analysis faces computational system overhead, limiting its application on mobile or edge devices.
This paper explores feature distillation for lightweight point cloud models.
We propose bidirectional knowledge reconfiguration to distill informative contextual knowledge from the teacher to the student.
arXiv Detail & Related papers (2023-10-08T11:32:50Z) - Clustering based Point Cloud Representation Learning for 3D Analysis [80.88995099442374]
We propose a clustering based supervised learning scheme for point cloud analysis.
Unlike the current de facto scene-wise training paradigm, our algorithm conducts within-class clustering in the point embedding space.
Our algorithm shows notable improvements on famous point cloud segmentation datasets.
arXiv Detail & Related papers (2023-07-27T03:42:12Z)
- Self-Supervised Feature Learning from Partial Point Clouds via Pose Disentanglement [35.404285596482175]
We propose a novel self-supervised framework to learn informative representations from partial point clouds.
We leverage partial point clouds scanned by LiDAR that contain both content and pose attributes.
Our method not only outperforms existing self-supervised methods, but also shows a better generalizability across synthetic and real-world datasets.
arXiv Detail & Related papers (2022-01-09T14:12:50Z)
- Unsupervised Representation Learning for 3D Point Cloud Data [66.92077180228634]
We propose a simple yet effective approach for unsupervised point cloud learning.
In particular, we identify a very useful transformation which generates a good contrastive version of an original point cloud.
We conduct experiments on three downstream tasks which are 3D object classification, shape part segmentation and scene segmentation.
arXiv Detail & Related papers (2021-10-13T10:52:45Z)
- One Point is All You Need: Directional Attention Point for Feature Learning [51.44837108615402]
We present a novel attention-based mechanism for learning enhanced point features for tasks such as point cloud classification and segmentation.
We show that our attention mechanism can be easily incorporated into state-of-the-art point cloud classification and segmentation networks.
arXiv Detail & Related papers (2020-12-11T11:45:39Z)
- Multi-scale Receptive Fields Graph Attention Network for Point Cloud Classification [35.88116404702807]
The proposed MRFGAT architecture is tested on ModelNet10 and ModelNet40 datasets.
Results show it achieves state-of-the-art performance in shape classification tasks.
arXiv Detail & Related papers (2020-09-28T13:01:28Z)
- SSN: Shape Signature Networks for Multi-class Object Detection from Point Clouds [96.51884187479585]
We propose a novel 3D shape signature to explore the shape information from point clouds.
By incorporating symmetry, convex hull, and Chebyshev fitting operations, the proposed shape signature is not only compact and effective but also robust to noise.
Experiments show that the proposed method performs remarkably better than existing methods on two large-scale datasets.
arXiv Detail & Related papers (2020-04-06T16:01:41Z)
- SK-Net: Deep Learning on Point Cloud via End-to-end Discovery of Spatial Keypoints [7.223394571022494]
This paper presents an end-to-end framework, SK-Net, to jointly optimize the inference of spatial keypoint with the learning of feature representation of a point cloud.
Our proposed method performs better than or comparable with the state-of-the-art approaches in point cloud tasks.
arXiv Detail & Related papers (2020-03-31T08:15:40Z)