Learning Canonical Shape Space for Category-Level 6D Object Pose and
Size Estimation
- URL: http://arxiv.org/abs/2001.09322v3
- Date: Sun, 21 Nov 2021 08:38:17 GMT
- Title: Learning Canonical Shape Space for Category-Level 6D Object Pose and
Size Estimation
- Authors: Dengsheng Chen and Jun Li and Zheng Wang and Kai Xu
- Abstract summary: We learn canonical shape space (CASS), a unified representation for a large variety of instances of a certain object category.
We train a variational auto-encoder (VAE) for generating 3D point clouds in the canonical space from an RGBD image.
The VAE is trained in a cross-category fashion, exploiting publicly available large 3D shape repositories.
- Score: 21.7030393344051
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel approach to category-level 6D object pose and size
estimation. To tackle intra-class shape variations, we learn canonical shape
space (CASS), a unified representation for a large variety of instances of a
certain object category. In particular, CASS is modeled as the latent space of
a deep generative model of canonical 3D shapes with normalized pose. We train a
variational auto-encoder (VAE) for generating 3D point clouds in the canonical
space from an RGBD image. The VAE is trained in a cross-category fashion,
exploiting publicly available large 3D shape repositories. Since the 3D
point cloud is generated in normalized pose (with actual size), the encoder of
the VAE learns view-factorized RGBD embedding. It maps an RGBD image in
arbitrary view into a pose-independent 3D shape representation. Object pose is
then estimated by contrasting it with a pose-dependent feature of the input
RGBD image, extracted with a separate deep neural network. We integrate the
learning of CASS and pose and size estimation into an end-to-end trainable
network, achieving state-of-the-art performance.
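In the paper, pose is recovered by contrasting the view-factorized (pose-independent) embedding with a learned pose-dependent feature of the same RGBD input. The geometric intuition behind that step, i.e. aligning an observed shape to its canonical-pose counterpart to read off rotation, translation, and size, can be sketched with the classical closed-form Umeyama similarity alignment. This is an illustrative numpy analogue under assumed point correspondences, not the paper's learned pipeline; the function and toy data below are assumptions for illustration only.

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Closed-form similarity transform (Umeyama, 1991) such that
    dst_i ~ s * (R @ src_i) + t for corresponding rows of src and dst."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)           # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                         # guard against reflections
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)    # mean squared deviation of src
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Toy stand-in for a decoded canonical-pose shape, plus an "observed"
# copy of it in an arbitrary pose and size.
rng = np.random.default_rng(0)
canonical = rng.standard_normal((100, 3))
angle = 0.7
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
s_true, t_true = 1.5, np.array([0.2, -0.1, 0.4])
observed = s_true * canonical @ R_true.T + t_true

s_est, R_est, t_est = umeyama_alignment(canonical, observed)
```

Note that in the paper's setting the canonical shape is generated with actual size, so the scale factor would be close to 1; a free scale is included here for generality.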
Related papers
- Unsupervised Learning of Category-Level 3D Pose from Object-Centric Videos [15.532504015622159]
Category-level 3D pose estimation is a fundamentally important problem in computer vision and robotics.
We tackle the problem of learning to estimate the category-level 3D pose only from casually taken object-centric videos.
arXiv Detail & Related papers (2024-07-05T09:43:05Z)
- Self-Supervised Geometric Correspondence for Category-Level 6D Object Pose Estimation in the Wild [47.80637472803838]
We introduce a self-supervised learning approach trained directly on large-scale real-world object videos for category-level 6D pose estimation in the wild.
Our framework reconstructs the canonical 3D shape of an object category and learns dense correspondences between input images and the canonical shape via surface embedding.
Surprisingly, our method, without any human annotations or simulators, can achieve on-par or even better performance than previous supervised or semi-supervised methods on in-the-wild images.
arXiv Detail & Related papers (2022-10-13T17:19:22Z)
- Generative Category-Level Shape and Pose Estimation with Semantic Primitives [27.692997522812615]
We propose a novel framework for category-level object shape and pose estimation from a single RGB-D image.
To handle the intra-category variation, we adopt a semantic primitive representation that encodes diverse shapes into a unified latent space.
We show that the proposed method achieves state-of-the-art pose estimation performance and better generalization on real-world datasets.
arXiv Detail & Related papers (2022-10-03T17:51:54Z)
- ConDor: Self-Supervised Canonicalization of 3D Pose for Partial Shapes [55.689763519293464]
ConDor is a self-supervised method that learns to canonicalize the 3D orientation and position for full and partial 3D point clouds.
During inference, our method takes an unseen full or partial 3D point cloud at an arbitrary pose and outputs an equivariant canonical pose.
arXiv Detail & Related papers (2022-01-19T18:57:21Z)
- Learning Canonical 3D Object Representation for Fine-Grained Recognition [77.33501114409036]
We propose a novel framework for fine-grained object recognition that learns to recover object variation in 3D space from a single image.
We represent an object as a composition of 3D shape and its appearance, while eliminating the effect of camera viewpoint.
By incorporating 3D shape and appearance jointly in a deep representation, our method learns the discriminative representation of the object.
arXiv Detail & Related papers (2021-08-10T12:19:34Z)
- Sparse Pose Trajectory Completion [87.31270669154452]
We propose a method that learns even from a dataset where objects appear only in sparsely sampled views.
This is achieved with a cross-modal pose trajectory transfer mechanism.
Our method is evaluated on the Pix3D and ShapeNet datasets.
arXiv Detail & Related papers (2021-05-01T00:07:21Z)
- Weakly Supervised Learning of Multi-Object 3D Scene Decompositions Using Deep Shape Priors [69.02332607843569]
PriSMONet is a novel approach for learning Multi-Object 3D scene decomposition and representations from single images.
A recurrent encoder regresses a latent representation of 3D shape, pose and texture of each object from an input RGB image.
We evaluate the accuracy of our model in inferring 3D scene layout, demonstrate its generative capabilities, assess its generalization to real images, and point out benefits of the learned representation.
arXiv Detail & Related papers (2020-10-08T14:49:23Z)
- Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction [79.98689027127855]
We propose a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects.
Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings.
It achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds.
arXiv Detail & Related papers (2020-08-28T15:44:05Z)
- Shape Prior Deformation for Categorical 6D Object Pose and Size Estimation [62.618227434286]
We present a novel learning approach to recover the 6D poses and sizes of unseen object instances from an RGB-D image.
We propose a deep network to reconstruct the 3D object model by explicitly modeling the deformation from a pre-learned categorical shape prior.
arXiv Detail & Related papers (2020-07-16T16:45:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.