SNAKE: Shape-aware Neural 3D Keypoint Field
- URL: http://arxiv.org/abs/2206.01724v1
- Date: Fri, 3 Jun 2022 17:58:43 GMT
- Title: SNAKE: Shape-aware Neural 3D Keypoint Field
- Authors: Chengliang Zhong, Peixing You, Xiaoxue Chen, Hao Zhao, Fuchun Sun,
Guyue Zhou, Xiaodong Mu, Chuang Gan, Wenbing Huang
- Abstract summary: Detecting 3D keypoints from point clouds is important for shape reconstruction.
This work investigates the dual question: can shape reconstruction benefit 3D keypoint detection?
We propose a novel unsupervised paradigm named SNAKE, which is short for shape-aware neural 3D keypoint field.
- Score: 62.91169625183118
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting 3D keypoints from point clouds is important for shape
reconstruction, while this work investigates the dual question: can shape
reconstruction benefit 3D keypoint detection? Existing methods either seek
salient features according to statistics of different orders or learn to
predict keypoints that are invariant to transformation. Nevertheless, the idea
of incorporating shape reconstruction into 3D keypoint detection is
under-explored. We argue that this is restricted by former problem
formulations. To this end, a novel unsupervised paradigm named SNAKE is
proposed, which is short for shape-aware neural 3D keypoint field. Similar to
recent coordinate-based radiance or distance fields, our network takes 3D
coordinates as inputs and predicts implicit shape indicators and keypoint
saliency simultaneously, thus naturally entangling 3D keypoint detection and
shape reconstruction. We achieve superior performance on various public
benchmarks, including standalone object datasets ModelNet40, KeypointNet, SMPL
meshes and scene-level datasets 3DMatch and Redwood. Intrinsic shape awareness
brings several advantages as follows. (1) SNAKE generates 3D keypoints
consistent with human semantic annotation, even without such supervision. (2)
SNAKE outperforms counterparts in terms of repeatability, especially when the
input point clouds are down-sampled. (3) The generated keypoints allow accurate
geometric registration, notably in a zero-shot setting. Code is available at
https://github.com/zhongcl-thu/SNAKE
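To ground the formulation, here is a minimal PyTorch sketch of such a two-headed coordinate field: one head predicts an implicit shape indicator (occupancy) and the other keypoint saliency. This is not the authors' architecture (see the repository above for that); the `KeypointField` module, the global shape code, and all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class KeypointField(nn.Module):
    """Toy coordinate-based field in the spirit of SNAKE: a shared MLP maps
    a 3D query coordinate, conditioned on a global shape code, to an
    implicit shape indicator (occupancy) and a keypoint saliency score."""
    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.occupancy_head = nn.Linear(hidden, 1)  # shape indicator
        self.saliency_head = nn.Linear(hidden, 1)   # keypoint saliency

    def forward(self, xyz, shape_code):
        # xyz: (B, N, 3) query coordinates; shape_code: (B, code_dim)
        code = shape_code.unsqueeze(1).expand(-1, xyz.shape[1], -1)
        h = self.backbone(torch.cat([xyz, code], dim=-1))
        occ = torch.sigmoid(self.occupancy_head(h))  # in [0, 1]
        sal = torch.sigmoid(self.saliency_head(h))   # in [0, 1]
        return occ, sal

# Usage: query both fields at the same random coordinates.
field = KeypointField()
occ, sal = field(torch.rand(2, 1024, 3), torch.randn(2, 128))
print(occ.shape, sal.shape)  # torch.Size([2, 1024, 1]) twice
```

Because both heads share one backbone over the same query coordinates, keypoint detection and shape reconstruction are entangled by construction, which is the paper's stated goal.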
Related papers
- Back to 3D: Few-Shot 3D Keypoint Detection with Back-Projected 2D Features [64.39691149255717]
Keypoint detection on 3D shapes requires semantic and geometric awareness while demanding high localization accuracy.
We employ a keypoint candidate optimization module which aims to match the average observed distribution of keypoints on the shape.
The resulting approach achieves a new state of the art for few-shot keypoint detection on the KeyPointNet dataset.
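As a rough illustration of the back-projection step named in the title (not the paper's full pipeline), the sketch below lifts a 2D feature map onto 3D points with a hypothetical pinhole intrinsic matrix `K` and bilinear sampling; the function name and all shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def backproject_features(points, feat_map, K):
    """Attach 2D features to 3D points: project each point into the image
    with pinhole intrinsics K, then bilinearly sample the feature map.
    points: (N, 3) in camera coordinates; feat_map: (C, H, W); K: (3, 3)."""
    uvw = points @ K.T                      # (N, 3) homogeneous pixels
    uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)
    H, W = feat_map.shape[1:]
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1)
    grid = grid.view(1, 1, -1, 2)
    sampled = F.grid_sample(feat_map.unsqueeze(0), grid, align_corners=True)
    return sampled.view(feat_map.shape[0], -1).T  # (N, C)

K = torch.tensor([[500., 0., 128.], [0., 500., 128.], [0., 0., 1.]])
pts = torch.randn(2048, 3) * 0.3 + torch.tensor([0., 0., 3.])  # in front of camera
feats = backproject_features(pts, torch.randn(64, 256, 256), K)
print(feats.shape)  # torch.Size([2048, 64])
```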
arXiv Detail & Related papers (2023-11-29T21:58:41Z)
- 3D Implicit Transporter for Temporally Consistent Keypoint Discovery [45.152790256675964]
Keypoint-based representation has proven advantageous in various visual and robotic tasks.
The Transporter method, originally introduced for 2D data, reconstructs the target frame from the source frame to incorporate both spatial and temporal information.
We propose the first 3D version of the Transporter, which leverages hybrid 3D representation, cross attention, and implicit reconstruction.
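The transport operation itself is easiest to see in code. The sketch below applies the original 2D Transporter equation (Kulkarni et al., 2019) naively to dense 3D voxel grids; the paper's hybrid implicit representation, cross attention, and implicit reconstruction are not reproduced here.

```python
import torch

def transport(feat_src, feat_tgt, heat_src, heat_tgt):
    """Transporter-style feature transport on 3D voxel grids: suppress
    source features at both frames' keypoint locations, then paste in
    target features at the target keypoints.
    feat_*: (B, C, D, H, W); heat_*: (B, 1, D, H, W) heatmaps in [0, 1]."""
    suppressed = feat_src * (1 - heat_src) * (1 - heat_tgt)
    return suppressed + heat_tgt * feat_tgt

B, C, D = 1, 8, 16
f_s, f_t = torch.randn(B, C, D, D, D), torch.randn(B, C, D, D, D)
h_s, h_t = torch.rand(B, 1, D, D, D), torch.rand(B, 1, D, D, D)
print(transport(f_s, f_t, h_s, h_t).shape)  # torch.Size([1, 8, 16, 16, 16])
```

Reconstructing the target frame through this bottleneck forces the heatmaps to land on parts that actually move, which is what makes the discovered keypoints temporally consistent.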
arXiv Detail & Related papers (2023-09-10T17:59:48Z)
- Self-supervised Learning of Rotation-invariant 3D Point Set Features using Transformer and its Self-distillation [3.1652399282742536]
This paper proposes a novel self-supervised learning framework for acquiring accurate and rotation-invariant 3D point set features at the object level.
We employ a self-attention mechanism to refine the tokens and aggregate them into an expressive rotation-invariant feature per 3D point set.
Our proposed algorithm learns rotation-invariant 3D point set features that are more accurate than those learned by existing algorithms.
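As a heavily hedged illustration of the general recipe (self-attention over rotation-invariant tokens), the sketch below builds hand-made invariant tokens from sorted within-patch pairwise distances and refines them with attention. The paper's learned tokenizer and self-distillation objective are not reproduced; `invariant_tokens` and `InvariantEncoder` are made-up names.

```python
import torch
import torch.nn as nn

def invariant_tokens(points, n_tokens=32, k=16):
    """Split a point set into n_tokens random patches and describe each by
    its sorted within-patch pairwise distances, which are unchanged by any
    rigid rotation of the cloud. points: (N, 3) -> (n_tokens, k*k)."""
    idx = torch.randperm(points.shape[0])[: n_tokens * k].view(n_tokens, k)
    patches = points[idx]                    # (n_tokens, k, 3)
    d = torch.cdist(patches, patches)        # (n_tokens, k, k)
    return d.flatten(1).sort(dim=-1).values

class InvariantEncoder(nn.Module):
    def __init__(self, token_dim=256, width=128):
        super().__init__()
        self.embed = nn.Linear(token_dim, width)
        self.attn = nn.MultiheadAttention(width, num_heads=4, batch_first=True)

    def forward(self, tokens):               # tokens: (B, T, token_dim)
        x = self.embed(tokens)
        x, _ = self.attn(x, x, x)            # refine tokens against each other
        return x.mean(dim=1)                 # one feature per point set

pts = torch.randn(2048, 3)
tok = invariant_tokens(pts).unsqueeze(0)     # (1, 32, 256)
print(InvariantEncoder()(tok).shape)         # torch.Size([1, 128])
```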
arXiv Detail & Related papers (2023-08-09T06:03:07Z)
- Sampling is Matter: Point-guided 3D Human Mesh Reconstruction [0.0]
This paper presents a simple yet powerful method for 3D human mesh reconstruction from a single RGB image.
Experimental results on benchmark datasets show that the proposed method efficiently improves the performance of 3D human mesh reconstruction.
arXiv Detail & Related papers (2023-04-19T08:45:26Z)
- KeypointDeformer: Unsupervised 3D Keypoint Discovery for Shape Control [64.46042014759671]
KeypointDeformer is an unsupervised method for shape control through automatically discovered 3D keypoints.
Our approach produces intuitive and semantically consistent control of shape deformations.
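KeypointDeformer itself routes keypoint edits through a learned cage; as a generic stand-in that conveys the interaction model (drag a keypoint, the surface follows), the sketch below propagates keypoint displacements to nearby vertices with Gaussian distance weights. The function and its bandwidth are assumptions, not the paper's method.

```python
import torch

def deform_by_keypoints(verts, kps, kps_moved, sigma=0.2):
    """Move vertices by blending keypoint displacements with Gaussian
    distance weights. verts: (V, 3); kps, kps_moved: (K, 3)."""
    disp = kps_moved - kps                                          # (K, 3)
    w = torch.exp(-torch.cdist(verts, kps) ** 2 / (2 * sigma ** 2))  # (V, K)
    w = w / w.sum(dim=1, keepdim=True).clamp(min=1e-8)
    return verts + w @ disp

verts = torch.rand(5000, 3)
kps = torch.rand(8, 3)
edited = kps.clone()
edited[0] += torch.tensor([0.1, 0.0, 0.0])  # drag one keypoint along x
print(deform_by_keypoints(verts, kps, edited).shape)  # torch.Size([5000, 3])
```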
arXiv Detail & Related papers (2021-04-22T17:59:08Z)
- ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, named a point geometry image (PGI).
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
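The reversibility claim is easy to demonstrate with a toy version of the representation. ParaNet learns which point maps to which pixel end-to-end; the sketch below simply writes the first H*W points into an image in their given order, with normalized XYZ stored as RGB, which already round-trips losslessly. Function names are illustrative.

```python
import numpy as np

def points_to_pgi(points, H=32, W=32):
    """Toy point geometry image: store each point's normalized (x, y, z)
    as the RGB value of one pixel. points: (N, 3) with N >= H*W."""
    assert points.shape[0] >= H * W
    pts = points[: H * W]
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    colors = (pts - lo) / (hi - lo + 1e-8)  # normalize XYZ to [0, 1]
    return colors.reshape(H, W, 3), lo, hi

def pgi_to_points(pgi, lo, hi):
    # Reading the pixels back recovers the points exactly (reversible).
    return pgi.reshape(-1, 3) * (hi - lo + 1e-8) + lo

cloud = np.random.rand(1024, 3).astype(np.float32)
pgi, lo, hi = points_to_pgi(cloud)
restored = pgi_to_points(pgi, lo, hi)
print(np.allclose(restored, cloud, atol=1e-5))  # True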
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
- D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features [51.04841465193678]
We leverage a 3D fully convolutional network for 3D point clouds.
We propose a novel and practical learning mechanism that densely predicts both a detection score and a description feature for each 3D point.
Our method achieves state-of-the-art results in both indoor and outdoor scenarios.
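The joint dense output format can be sketched as follows. D3Feat's actual backbone is a KPConv fully convolutional network and its detection score has a specific saliency formulation; neither is reproduced in this per-point-MLP stand-in, whose names and sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseDetectDescribe(nn.Module):
    """For every 3D point, output an L2-normalized descriptor and a scalar
    detection score, in the spirit of D3Feat's dense joint prediction."""
    def __init__(self, desc_dim=32, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.desc_head = nn.Linear(hidden, desc_dim)
        self.score_head = nn.Linear(hidden, 1)

    def forward(self, pts):                              # pts: (B, N, 3)
        h = self.backbone(pts)
        desc = F.normalize(self.desc_head(h), dim=-1)    # unit descriptors
        score = torch.sigmoid(self.score_head(h))        # detection score
        return desc, score

net = DenseDetectDescribe()
desc, score = net(torch.rand(1, 4096, 3))
# Keep the 256 highest-scoring points as keypoints.
topk = score.squeeze(-1).topk(256, dim=1).indices
print(desc.shape, score.shape, topk.shape)
```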
arXiv Detail & Related papers (2020-03-06T12:51:09Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction in ShapeNet, and obtain significantly more accurate 3D human reconstructions.
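The central query mechanism, sampling a learned voxel feature grid at continuous coordinates and decoding to occupancy, can be sketched in a few lines. Real IF-Nets sample a pyramid of grid resolutions; this toy uses a single grid, and the `ImplicitDecoder` name and layer sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitDecoder(nn.Module):
    """IF-Net-style query: trilinearly sample a learned voxel feature grid
    at continuous query points, then decode the features to occupancy."""
    def __init__(self, feat_ch=16, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_ch + 3, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, feat_grid, queries):
        # feat_grid: (B, C, D, H, W); queries: (B, N, 3) in [-1, 1].
        grid = queries.view(queries.shape[0], 1, 1, -1, 3)
        feats = F.grid_sample(feat_grid, grid, align_corners=True)
        feats = feats.view(feat_grid.shape[0], feat_grid.shape[1], -1)
        feats = feats.permute(0, 2, 1)                   # (B, N, C)
        occ = self.mlp(torch.cat([feats, queries], dim=-1))
        return torch.sigmoid(occ)                        # (B, N, 1)

dec = ImplicitDecoder()
occ = dec(torch.randn(1, 16, 32, 32, 32), torch.rand(1, 500, 3) * 2 - 1)
print(occ.shape)  # torch.Size([1, 500, 1])
```

Because the decoder is queried at arbitrary coordinates rather than fixed voxels, the output is continuous and naturally completes missing or sparse regions of the input.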
arXiv Detail & Related papers (2020-03-03T11:14:29Z)