A-SDF: Learning Disentangled Signed Distance Functions for Articulated
Shape Representation
- URL: http://arxiv.org/abs/2104.07645v1
- Date: Thu, 15 Apr 2021 17:53:54 GMT
- Title: A-SDF: Learning Disentangled Signed Distance Functions for Articulated
Shape Representation
- Authors: Jiteng Mu, Weichao Qiu, Adam Kortylewski, Alan Yuille, Nuno
Vasconcelos, Xiaolong Wang
- Abstract summary: We introduce Articulated Signed Distance Functions (A-SDF) to represent articulated shapes with a disentangled latent space.
We demonstrate that our model generalizes well to out-of-distribution and unseen data, e.g., partial point clouds and real-world depth images.
- Score: 62.517760545209065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has made significant progress on using implicit
functions as a continuous representation for 3D rigid object shape
reconstruction. However, much less effort has been devoted to modeling general
articulated objects. Compared to rigid objects, articulated objects have
higher degrees of freedom, which makes it hard to generalize to unseen shapes.
To deal with the large shape variance, we introduce Articulated Signed
Distance Functions (A-SDF) to represent articulated shapes with a disentangled
latent space, where we have separate codes for encoding shape and
articulation. We assume no prior knowledge of part geometry, articulation
status, joint type, joint axis, or joint location. With this disentangled
continuous representation, we demonstrate that we can control the articulation
input and animate unseen instances with unseen joint angles. Furthermore, we
propose a Test-Time Adaptation inference algorithm to adjust our model during
inference. We demonstrate that our model generalizes well to
out-of-distribution and unseen data, e.g., partial point clouds and real-world
depth images.
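To make the disentangled design concrete, the following is a minimal sketch of how an A-SDF-style decoder and test-time adaptation loop could be organized: a network maps a 3D query point plus separate shape and articulation codes to a signed distance, and the codes are optimized at inference time against observed surface points. The class names, dimensions, architecture depth, and loss here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a disentangled SDF decoder with test-time adaptation.
# All names, dimensions, and optimization details are assumptions.
import torch
import torch.nn as nn

class ASDFDecoder(nn.Module):
    def __init__(self, shape_dim=128, artic_dim=8, hidden=256):
        super().__init__()
        # Input: 3D query point + separate shape and articulation codes.
        self.net = nn.Sequential(
            nn.Linear(3 + shape_dim + artic_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted signed distance
        )

    def forward(self, xyz, shape_code, artic_code):
        # xyz: (N, 3); the two codes are broadcast to every query point.
        codes = torch.cat([shape_code, artic_code], dim=-1)
        codes = codes.expand(xyz.shape[0], -1)
        return self.net(torch.cat([xyz, codes], dim=-1)).squeeze(-1)

def test_time_adapt(decoder, surface_pts, steps=200, lr=1e-3):
    """Fit latent codes to observed (partial) surface points, whose SDF is ~0.

    Only the codes are optimized here; the paper's Test-Time Adaptation also
    adjusts the model itself, which this sketch omits for brevity.
    """
    shape_code = torch.zeros(1, 128, requires_grad=True)
    artic_code = torch.zeros(1, 8, requires_grad=True)
    opt = torch.optim.Adam([shape_code, artic_code], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = decoder(surface_pts, shape_code, artic_code)
        loss = pred.abs().mean()  # observed surface points should have SDF = 0
        loss.backward()
        opt.step()
    return shape_code.detach(), artic_code.detach()
```

Because shape and articulation are separate inputs, animating an unseen instance at an unseen joint angle amounts to holding shape_code fixed and varying artic_code, which is the controllability the abstract describes.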
Related papers
- REACTO: Reconstructing Articulated Objects from a Single Video [64.89760223391573]
We propose a novel deformation model that enhances the rigidity of each part while maintaining flexible deformation of the joints.
Our method outperforms previous works in producing higher-fidelity 3D reconstructions of general articulated objects.
arXiv Detail & Related papers (2024-04-17T08:01:55Z)
- Latent Partition Implicit with Surface Codes for 3D Representation [54.966603013209685]
We introduce a novel implicit representation that represents a single 3D shape as a set of parts in the latent space.
We name our method Latent Partition Implicit (LPI) because it casts global shape modeling into multiple local part modeling problems.
arXiv Detail & Related papers (2022-07-18T14:24:46Z)
- Deep Active Latent Surfaces for Medical Geometries [51.82897666576424]
Shape priors have long been known to be effective when reconstructing 3D shapes from noisy or incomplete data.
In this paper, we advocate a hybrid approach representing shapes as 3D meshes with a separate latent vector at each vertex.
For inference, the latent vectors are updated independently while imposing spatial regularization constraints.
We show that this gives us both flexibility and generalization capabilities, which we demonstrate on several medical image processing tasks.
arXiv Detail & Related papers (2022-06-21T10:33:32Z)
- Implicit Shape Completion via Adversarial Shape Priors [46.48590354256945]
We present a novel neural implicit shape method for partial point cloud completion.
We combine a conditional DeepSDF architecture with learned, adversarial shape priors.
We train a PointNet++ discriminator that pushes the generator to produce plausible, globally consistent reconstructions.
arXiv Detail & Related papers (2022-04-21T12:49:59Z)
- GIFS: Neural Implicit Function for General Shape Representation [23.91110763447458]
General Implicit Function for 3D Shape (GIFS) is a novel method to represent general shapes.
Instead of dividing 3D space into predefined inside-outside regions, GIFS encodes whether two points are separated by any surface (a minimal sketch of this pair classification appears after this list).
Experiments on ShapeNet show that GIFS outperforms previous state-of-the-art methods in terms of reconstruction quality, rendering efficiency, and visual fidelity.
arXiv Detail & Related papers (2022-04-14T17:29:20Z)
- SPAMs: Structured Implicit Parametric Models [30.19414242608965]
We learn Structured-implicit PArametric Models (SPAMs) as a deformable object representation that structurally decomposes non-rigid object motion into part-based, disentangled representations of shape and pose.
Experiments demonstrate that our part-aware shape and pose understanding leads to state-of-the-art performance in reconstruction and tracking of depth sequences of complex deforming object motion.
arXiv Detail & Related papers (2022-01-20T12:33:46Z)
- Disentangled Implicit Shape and Pose Learning for Scalable 6D Pose Estimation [44.8872454995923]
We present a novel approach for scalable 6D pose estimation, based on self-supervised learning on synthetic data of multiple objects using a single autoencoder.
We test our method on two multi-object benchmarks with real data, T-LESS and NOCS REAL275, and show that it outperforms existing RGB-based methods in terms of pose estimation accuracy and generalization.
arXiv Detail & Related papers (2021-07-27T01:55:30Z)
- Neural Articulated Radiance Field [90.91714894044253]
We present Neural Articulated Radiance Field (NARF), a novel deformable 3D representation for articulated objects learned from images.
Experiments show that the proposed method is efficient and can generalize well to novel poses.
arXiv Detail & Related papers (2021-04-07T13:23:14Z)
- Deep Implicit Templates for 3D Shape Representation [70.9789507686618]
We propose a new 3D shape representation that supports explicit correspondence reasoning in deep implicit functions (DIFs).
Our key idea is to formulate DIFs as conditional deformations of a template implicit function.
We show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
arXiv Detail & Related papers (2020-11-30T06:01:49Z)
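As flagged in the GIFS entry above, here is a minimal, illustrative sketch (not the GIFS authors' code) of the pair-classification idea: a small network takes two query points plus a shape code and predicts whether any surface separates them, avoiding a global inside/outside labeling. The class name, dimensions, and architecture are assumptions.

```python
# Illustrative sketch of the GIFS idea: classify whether two points are
# separated by a surface, instead of labeling points as inside/outside.
# Architecture and dimensions are assumptions for exposition only.
import torch
import torch.nn as nn

class PairSeparationNet(nn.Module):
    def __init__(self, shape_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6 + shape_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit: is this pair separated by a surface?
        )

    def forward(self, p, q, shape_code):
        # p, q: (N, 3) point pairs; shape_code: (1, shape_dim), shared per shape.
        # A full implementation would also symmetrize the prediction in p and q.
        feats = torch.cat([p, q, shape_code.expand(p.shape[0], -1)], dim=-1)
        return torch.sigmoid(self.net(feats)).squeeze(-1)  # P(separated)
```

Because the label depends only on whether a surface crosses the segment between p and q, this formulation can represent open and multi-layer shapes that inside/outside labels cannot, which matches the motivation given in the GIFS abstract.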
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.