Unsupervised Learning of Category-Specific Symmetric 3D Keypoints from
Point Sets
- URL: http://arxiv.org/abs/2003.07619v3
- Date: Wed, 6 Jan 2021 09:56:02 GMT
- Title: Unsupervised Learning of Category-Specific Symmetric 3D Keypoints from
Point Sets
- Authors: Clara Fernandez-Labrador, Ajad Chhatkuli, Danda Pani Paudel, Jose J.
Guerrero, Cédric Demonceaux and Luc Van Gool
- Abstract summary: This paper aims at learning category-specific 3D keypoints, in an unsupervised manner, using a collection of misaligned 3D point clouds of objects from an unknown category.
To the best of our knowledge, this is the first work on learning such keypoints directly from 3D point clouds.
- Score: 71.84892018102465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic discovery of category-specific 3D keypoints from a collection of
objects of some category is a challenging problem. One reason is that not all
objects in a category necessarily have the same semantic parts. The difficulty increases further when objects are represented by 3D point clouds,
with variations in shape and unknown coordinate frames. We define keypoints as category-specific if they meaningfully represent an object's shape and their correspondences can be established simply by their order across all objects. This
paper aims at learning category-specific 3D keypoints, in an unsupervised
manner, using a collection of misaligned 3D point clouds of objects from an
unknown category. To do so, we model the shapes defined by the keypoints within a category using symmetric linear basis shapes, without assuming the plane of symmetry to be known. The symmetry prior allows us to learn stable keypoints that remain reliable under larger misalignments. To the best of our
knowledge, this is the first work on learning such keypoints directly from 3D
point clouds. Using categories from four benchmark datasets, we demonstrate the
quality of our learned keypoints by quantitative and qualitative evaluations.
Our experiments also show that the keypoints discovered by our method are
geometrically and semantically consistent.
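As a rough illustration of the symmetric linear basis-shape model described in the abstract, the sketch below reconstructs a keypoint set as a linear combination of basis shapes that are each mirror-symmetric about a plane. The names, the pairing convention for mirrored keypoints, and the fixed symmetry plane are all illustrative assumptions, not the paper's implementation (the paper learns the symmetry plane rather than assuming it).

```python
import numpy as np

def reflection_matrix(normal):
    """Householder reflection about a plane through the origin with the given normal."""
    n = normal / np.linalg.norm(normal)
    return np.eye(3) - 2.0 * np.outer(n, n)

def symmetrize(basis, normal):
    """Make a basis shape (K x 3 keypoints) mirror-symmetric about the plane.

    Illustrative assumption: keypoints are ordered so that consecutive pairs
    (0,1), (2,3), ... mirror each other across the plane.
    """
    R = reflection_matrix(normal)
    K = basis.shape[0]
    perm = np.arange(K).reshape(-1, 2)[:, ::-1].ravel()  # swap (0,1),(2,3),...
    reflected = basis[perm] @ R.T
    return 0.5 * (basis + reflected)  # average the shape with its mirror image

def reconstruct(coeffs, bases):
    """Instance shape as a linear combination of (symmetric) basis shapes."""
    return np.tensordot(coeffs, bases, axes=1)  # -> (K, 3)

rng = np.random.default_rng(0)
normal = np.array([1.0, 0.0, 0.0])  # assumed (not learned) symmetry plane x = 0
bases = np.stack([symmetrize(rng.normal(size=(8, 3)), normal) for _ in range(4)])
shape = reconstruct(np.array([1.0, 0.3, -0.2, 0.5]), bases)

# Because reconstruction is linear, the instance shape inherits the mirror
# symmetry of the bases: reflecting and re-pairing the keypoints is a no-op.
perm = np.arange(8).reshape(-1, 2)[:, ::-1].ravel()
assert np.allclose(shape[perm] @ reflection_matrix(normal), shape)
```

Since reflection and permutation are linear, any linear combination of symmetric bases stays symmetric, which is what makes the symmetry prior cheap to enforce on the low-rank shape model.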
Related papers
- SelfGeo: Self-supervised and Geodesic-consistent Estimation of Keypoints on Deformable Shapes [19.730602733938216]
"SelfGeo" is a self-supervised method that computes persistent 3D keypoints of non-rigid objects from arbitrary PCDs without the need for human annotations.
Our main contribution is to enforce that keypoints deform along with the shape while keeping constant geodesic distances among them.
We show experimentally that the use of geodesic distances has a clear advantage in challenging dynamic scenes.
arXiv Detail & Related papers (2024-08-05T08:00:30Z) - SNAKE: Shape-aware Neural 3D Keypoint Field [62.91169625183118]
Detecting 3D keypoints from point clouds is important for shape reconstruction.
This work investigates the dual question: can shape reconstruction benefit 3D keypoint detection?
We propose a novel unsupervised paradigm named SNAKE, which is short for shape-aware neural 3D keypoint field.
arXiv Detail & Related papers (2022-06-03T17:58:43Z) - Unsupervised Learning of 3D Semantic Keypoints with Mutual
Reconstruction [11.164069907549756]
3D semantic keypoints are category-level, semantically consistent points on 3D objects.
We present an unsupervised method to generate consistent semantic keypoints from point clouds explicitly.
To the best of our knowledge, the proposed method is the first to mine 3D semantically consistent keypoints from a mutual reconstruction view.
arXiv Detail & Related papers (2022-03-19T01:49:21Z) - End-to-End Learning of Multi-category 3D Pose and Shape Estimation [128.881857704338]
We propose an end-to-end method that simultaneously detects 2D keypoints from an image and lifts them to 3D.
The proposed method learns both 2D detection and 3D lifting only from 2D keypoint annotations.
In addition to being end-to-end in image-to-3D learning, our method also handles objects from multiple categories using a single neural network.
arXiv Detail & Related papers (2021-12-19T17:10:40Z) - KeypointDeformer: Unsupervised 3D Keypoint Discovery for Shape Control [64.46042014759671]
KeypointDeformer is an unsupervised method for shape control through automatically discovered 3D keypoints.
Our approach produces intuitive and semantically consistent control of shape deformations.
arXiv Detail & Related papers (2021-04-22T17:59:08Z) - Canonical 3D Deformer Maps: Unifying parametric and non-parametric
methods for dense weakly-supervised category reconstruction [79.98689027127855]
We propose a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects.
Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings.
It achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds.
arXiv Detail & Related papers (2020-08-28T15:44:05Z) - Fine-Grained 3D Shape Classification with Hierarchical Part-View
Attentions [70.0171362989609]
We propose a novel fine-grained 3D shape classification method named FG3D-Net to capture the fine-grained local details of 3D shapes from multiple rendered views.
Our results under the fine-grained 3D shape dataset show that our method outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2020-05-26T06:53:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.