Unsupervised Pose-Aware Part Decomposition for 3D Articulated Objects
- URL: http://arxiv.org/abs/2110.04411v1
- Date: Fri, 8 Oct 2021 23:53:56 GMT
- Title: Unsupervised Pose-Aware Part Decomposition for 3D Articulated Objects
- Authors: Yuki Kawana, Yusuke Mukuta, Tatsuya Harada
- Abstract summary: We propose PPD (unsupervised Pose-aware Part Decomposition) to address a novel setting that explicitly targets man-made articulated objects with mechanical joints.
We show that category-common prior learning for both part shapes and poses facilitates the unsupervised learning of (1) part decomposition with non-primitive-based implicit representation, and (2) part pose as joint parameters under single-frame shape supervision.
- Score: 68.73163598790255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Articulated objects exist widely in the real world. However, previous 3D
generative methods for unsupervised part decomposition are unsuitable for such
objects, because they assume a spatially fixed part location, resulting in
inconsistent part parsing. In this paper, we propose PPD (unsupervised
Pose-aware Part Decomposition) to address a novel setting that explicitly
targets man-made articulated objects with mechanical joints, considering the
part poses. We show that category-common prior learning for both part shapes
and poses facilitates the unsupervised learning of (1) part decomposition with
non-primitive-based implicit representation, and (2) part pose as joint
parameters under single-frame shape supervision. We evaluate our method on
synthetic and real datasets, and we show that it outperforms previous works in
consistent part parsing of articulated objects while achieving part pose
estimation performance comparable to that of the supervised baseline.
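The abstract's notion of "part pose as joint parameters" can be illustrated with a minimal sketch: a revolute joint is parameterized by an axis, a pivot, and an angle, and a posed part can be queried by mapping query points back into the part's canonical frame before evaluating its implicit (occupancy) function. This is a generic illustration under assumed conventions, not the paper's implementation; the names `revolute_transform` and `posed_occupancy` are hypothetical.

```python
import numpy as np

def revolute_transform(points, axis, pivot, angle):
    """Rotate points about a joint axis through `pivot` by `angle`,
    using Rodrigues' rotation formula."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    return (points - pivot) @ R.T + pivot

def posed_occupancy(points, canonical_occ, axis, pivot, angle):
    """Evaluate a part's canonical implicit function at posed-space
    queries by applying the inverse joint transform first."""
    canonical_points = revolute_transform(points, axis, pivot, -angle)
    return canonical_occ(canonical_points)

# Toy canonical part: a small sphere centered at (2, 0, 0).
part = lambda p: (np.linalg.norm(p - np.array([2.0, 0.0, 0.0]),
                                 axis=-1) <= 0.5).astype(float)

# After a 90-degree rotation about the z-axis, the part occupies
# the region around (0, 2, 0), so a query there is inside it.
axis, pivot = np.array([0.0, 0.0, 1.0]), np.zeros(3)
inside = posed_occupancy(np.array([[0.0, 2.0, 0.0]]),
                         part, axis, pivot, np.pi / 2)
```

Estimating `axis`, `pivot`, and `angle` per part from a single-frame shape, without labels, is the core difficulty the paper's category-common prior learning addresses.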
Related papers
- OP-Align: Object-level and Part-level Alignment for Self-supervised Category-level Articulated Object Pose Estimation [7.022004731560844]
Category-level articulated object pose estimation focuses on the pose estimation of unknown articulated objects within known categories.
We propose a novel self-supervised approach that leverages a single-frame point cloud to solve this task.
Our model consistently generates reconstruction with a canonical pose and joint state for the entire input object.
arXiv Detail & Related papers (2024-08-29T14:10:14Z)
- Articulate your NeRF: Unsupervised articulated object modeling via conditional view synthesis [24.007950839144918]
We propose an unsupervised method to learn the pose and part-segmentation of articulated objects with rigid parts.
Our method learns the geometry and appearance of object parts by using an implicit model from the first observation.
arXiv Detail & Related papers (2024-06-24T13:13:31Z)
- Self-Supervised Category-Level Articulated Object Pose Estimation with Part-Level SE(3) Equivariance [33.10167928198986]
Category-level articulated object pose estimation aims to estimate a hierarchy of articulation-aware object poses of an unseen articulated object from a known category.
We present a novel self-supervised strategy that solves this problem without any human labels.
arXiv Detail & Related papers (2023-02-28T03:02:11Z)
- What's in your hands? 3D Reconstruction of Generic Objects in Hands [49.12461675219253]
Our work aims to reconstruct hand-held objects given a single RGB image.
In contrast to prior works that typically assume known 3D templates and reduce the problem to 3D pose estimation, our work reconstructs generic hand-held objects without knowing their 3D templates.
arXiv Detail & Related papers (2022-04-14T17:59:02Z)
- Watch It Move: Unsupervised Discovery of 3D Joints for Re-Posing of Articulated Objects [73.23249640099516]
We learn both the appearance and the structure of previously unseen articulated objects by observing them move from multiple views.
Our insight is that adjacent parts that move relative to each other must be connected by a joint.
We show that our method works for different structures, from quadrupeds, to single-arm robots, to humans.
arXiv Detail & Related papers (2021-12-21T16:37:48Z)
- Unsupervised Part Discovery from Contrastive Reconstruction [90.88501867321573]
The goal of self-supervised visual representation learning is to learn strong, transferable image representations.
We propose an unsupervised approach to object part discovery and segmentation.
Our method yields semantic parts consistent across fine-grained but visually distinct categories.
arXiv Detail & Related papers (2021-11-11T17:59:42Z)
- Kinematic-Structure-Preserved Representation for Unsupervised 3D Human Pose Estimation [58.72192168935338]
Generalizability of human pose estimation models developed using supervision on large-scale in-studio datasets remains questionable.
We propose a novel kinematic-structure-preserved unsupervised 3D pose estimation framework, which is not restrained by any paired or unpaired weak supervisions.
Our proposed model employs three consecutive differentiable transformations: forward kinematics, camera projection, and spatial-map transformation.
arXiv Detail & Related papers (2020-06-24T23:56:33Z)
- Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering [53.16864661460889]
Recent works have succeeded with regression-based methods that estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.