Unsupervised Non-Rigid Point Cloud Matching through Large Vision Models
- URL: http://arxiv.org/abs/2408.08568v1
- Date: Fri, 16 Aug 2024 07:02:19 GMT
- Title: Unsupervised Non-Rigid Point Cloud Matching through Large Vision Models
- Authors: Zhangquan Chen, Puhua Jiang, Ruqi Huang
- Abstract summary: We propose a learning-based framework for non-rigid point cloud matching.
The key insight is to incorporate semantic features derived from large vision models (LVMs).
Our framework effectively leverages the structural information contained in the semantic features to address ambiguities arising from self-similarities among local geometries.
- Score: 1.3030624795284795
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a novel learning-based framework for non-rigid point cloud matching, which can be trained purely on point clouds without any correspondence annotation and also extends naturally to partial-to-full matching. Our key insight is to incorporate semantic features derived from large vision models (LVMs) into geometry-based shape feature learning. Our framework effectively leverages the structural information contained in the semantic features to address ambiguities arising from self-similarities among local geometries. Furthermore, our framework inherits the strong generalizability and robustness to partial observations of LVMs, leading to improvements on the corresponding point cloud matching tasks. To achieve the above, we propose a pixel-to-point feature aggregation module, a local and global attention network, as well as a geometrical similarity loss function. Experimental results show that our method achieves state-of-the-art results in matching non-rigid point clouds in both near-isometric and heterogeneous shape collections, as well as on more realistic partial and noisy data.
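The abstract names a pixel-to-point feature aggregation module without detailing it. The following is a minimal, hypothetical sketch of such an aggregation step, assuming LVM feature maps have already been extracted for several rendered views and that each point's projected pixel and per-view visibility are known; all names, tensor shapes, and the simple view-averaging scheme are assumptions, not the paper's exact design.

```python
# Illustrative sketch only: the paper's pixel-to-point feature aggregation module
# is not reproduced here, so the function name, tensor shapes, and the simple
# visibility-weighted averaging over views are assumptions.
import torch

def aggregate_pixel_features(pixel_feats, pix_coords, visibility):
    """Pool per-pixel LVM features back onto the 3D points.

    pixel_feats: (V, C, H, W) LVM feature maps for V rendered views
    pix_coords:  (V, N, 2)    integer (col, row) pixel of each of N points per view
    visibility:  (V, N)       1.0 if the point is visible in that view, else 0.0
    returns:     (N, C)       per-point semantic features
    """
    V, C, H, W = pixel_feats.shape
    N = pix_coords.shape[1]
    accum = torch.zeros(N, C)
    counts = torch.zeros(N, 1)
    for v in range(V):
        cols, rows = pix_coords[v, :, 0], pix_coords[v, :, 1]
        feats = pixel_feats[v, :, rows, cols].T   # (N, C): gather features at projected pixels
        mask = visibility[v].unsqueeze(1)         # (N, 1)
        accum += feats * mask
        counts += mask
    return accum / counts.clamp(min=1.0)          # average over the views that see each point
```

According to the abstract, such per-point semantic features are then combined with geometry-based features through a local and global attention network and supervised with a geometrical similarity loss; the sketch only covers lifting pixel features onto points.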
Related papers
- Self-supervised 3D Point Cloud Completion via Multi-view Adversarial Learning [61.14132533712537]
We propose MAL-SPC, a framework that effectively leverages both object-level and category-specific geometric similarities to complete missing structures.
Our MAL-SPC does not require any 3D complete supervision and only necessitates a single partial point cloud for each object.
arXiv Detail & Related papers (2024-07-13T06:53:39Z) - Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud Analysis [74.00441177577295]
Point cloud analysis faces heavy computational overhead, limiting its application on mobile or edge devices.
This paper explores feature distillation for lightweight point cloud models.
We propose bidirectional knowledge reconfiguration to distill informative contextual knowledge from the teacher to the student.
arXiv Detail & Related papers (2023-10-08T11:32:50Z) - Variational Relational Point Completion Network for Robust 3D Classification [59.80993960827833]
Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details.
This paper proposes a variational framework, the Variational Relational point Completion Network (VRCNet), with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2023-04-18T17:03:20Z) - Self-Supervised Learning for Multimodal Non-Rigid 3D Shape Matching [15.050801537501462]
We introduce a self-supervised multimodal learning strategy that combines mesh-based functional map regularisation with a contrastive loss that couples mesh and point cloud data.
Our shape matching approach allows us to obtain intramodal correspondences for triangle meshes, complete point clouds, and partially observed point clouds.
We demonstrate that our method achieves state-of-the-art results on several challenging benchmark datasets.
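As a rough illustration of how a contrastive loss can couple mesh and point cloud data, the following is a minimal symmetric InfoNCE-style sketch. The function name, temperature, and the assumption that the two feature sets are already in row-wise correspondence are hypothetical; the authors' actual loss and their functional map regularisation term are not reproduced here.

```python
# Minimal sketch of a contrastive loss that couples mesh and point cloud features.
# Assumes row i of mesh_feats and pc_feats describe the same surface point; the
# symmetric InfoNCE form and temperature are illustrative, not the authors' exact loss.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(mesh_feats, pc_feats, temperature=0.07):
    """mesh_feats, pc_feats: (N, C) per-point features from the two modalities."""
    mesh_feats = F.normalize(mesh_feats, dim=1)
    pc_feats = F.normalize(pc_feats, dim=1)
    logits = mesh_feats @ pc_feats.T / temperature   # (N, N) scaled cosine similarities
    targets = torch.arange(mesh_feats.shape[0])      # the matching row is the positive
    # pull corresponding mesh/point features together in both directions
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))
```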
arXiv Detail & Related papers (2023-03-20T09:47:02Z) - Self-Supervised Feature Learning from Partial Point Clouds via Pose Disentanglement [35.404285596482175]
We propose a novel self-supervised framework to learn informative representations from partial point clouds.
We leverage partial point clouds scanned by LiDAR that contain both content and pose attributes.
Our method not only outperforms existing self-supervised methods, but also shows a better generalizability across synthetic and real-world datasets.
arXiv Detail & Related papers (2022-01-09T14:12:50Z) - Semantic Segmentation for Real Point Cloud Scenes via Bilateral Augmentation and Adaptive Fusion [38.05362492645094]
Real point cloud scenes can intuitively capture complex surroundings in the real world, but, due to the raw nature of 3D data, they are very challenging for machine perception.
We concentrate on the essential visual task, semantic segmentation, for large-scale point cloud data collected in reality.
By comparing with state-of-the-art networks on three different benchmarks, we demonstrate the effectiveness of our network.
arXiv Detail & Related papers (2021-03-12T04:13:20Z) - 3D Object Classification on Partial Point Clouds: A Practical Perspective [91.81377258830703]
A point cloud is a popular shape representation adopted in 3D object classification.
This paper introduces a practical setting for classifying partial point clouds of object instances under arbitrary poses.
A novel algorithm that works in an alignment-classification manner is proposed.
arXiv Detail & Related papers (2020-12-18T04:00:56Z) - SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z) - Mapping in a cycle: Sinkhorn regularized unsupervised learning for point cloud shapes [47.49826669394906]
We propose an unsupervised learning framework for finding dense correspondences between point cloud shapes.
In order to learn discriminative pointwise features from point cloud data, we incorporate a regularization term based on Sinkhorn normalization into the formulation.
We show that the learned pointwise features can be leveraged by supervised methods to improve the part segmentation performance.
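For context, Sinkhorn normalization turns a pointwise similarity matrix into an approximately doubly-stochastic soft correspondence by alternating row and column normalisation. The sketch below is a generic log-domain implementation with illustrative iteration count and temperature, not the paper's exact regularization term.

```python
# Generic log-domain Sinkhorn normalization: alternating row/column normalization
# pushes a similarity matrix towards a doubly-stochastic soft matching. Iteration
# count and temperature are illustrative choices, not the paper's settings.
import torch

def sinkhorn(similarity, n_iters=20, temperature=0.1):
    """similarity: (N, M) pointwise feature similarities; returns a soft correspondence matrix."""
    log_p = similarity / temperature
    for _ in range(n_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)  # normalize rows
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)  # normalize columns
    return log_p.exp()
```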
arXiv Detail & Related papers (2020-07-19T05:21:33Z) - Shape-Oriented Convolution Neural Network for Point Cloud Analysis [59.405388577930616]
A point cloud is a principal data structure adopted for encoding 3D geometric information.
A shape-oriented message passing scheme dubbed ShapeConv is proposed to focus on representation learning of the underlying shape formed by each local neighborhood of points.
arXiv Detail & Related papers (2020-04-20T16:11:51Z)