Learning Part Motion of Articulated Objects Using Spatially Continuous
Neural Implicit Representations
- URL: http://arxiv.org/abs/2311.12407v1
- Date: Tue, 21 Nov 2023 07:54:40 GMT
- Title: Learning Part Motion of Articulated Objects Using Spatially Continuous
Neural Implicit Representations
- Authors: Yushi Du, Ruihai Wu, Yan Shen, Hao Dong
- Abstract summary: We introduce a novel framework that disentangles the part motion of articulated objects by predicting the transformation matrix of points on the part surface.
Our proposed framework is generic to different kinds of joint motions in that the transformation matrix can model diverse kinds of joint motions in the space.
- Score: 8.130629735939895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Articulated objects (e.g., doors and drawers) exist everywhere in our life.
Different from rigid objects, articulated objects have higher degrees of
freedom and are rich in geometries, semantics, and part functions. Modeling
different kinds of parts and articulations with neural networks plays an
essential role in articulated object understanding and manipulation, and will
further benefit 3D vision and robotics communities. To model articulated
objects, most previous works directly encode articulated objects into feature
representations, without specific designs for parts, articulations and part
motions. In this paper, we introduce a novel framework that explicitly
disentangles the part motion of articulated objects by predicting the
transformation matrix of points on the part surface, using spatially continuous
neural implicit representations to model the part motion smoothly in the space.
More importantly, while many methods can only model a certain kind of joint
motion (such as revolute motion in a clockwise direction), our proposed
framework generalizes to different kinds of joint motions, since the
transformation matrix can model diverse joint motions in space. Quantitative and
qualitative results of experiments over diverse categories of articulated
objects demonstrate the effectiveness of our proposed framework.
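The paper's network architecture is not given here, but its central idea, that a single 4x4 transformation matrix applied to points on a part surface can represent diverse joint motions (revolute and prismatic alike), can be sketched with plain homogeneous transforms. This is an illustrative sketch, not the authors' code; all function names are invented for the example.

```python
import numpy as np

def revolute_transform(axis, pivot, angle):
    """4x4 homogeneous transform: rotate by `angle` (radians) about a
    joint axis through `pivot`, built via Rodrigues' rotation formula."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = pivot - R @ pivot  # translation keeps the pivot fixed
    return T

def prismatic_transform(direction, distance):
    """4x4 homogeneous transform: slide by `distance` along `direction`
    (e.g., a drawer), with no rotation."""
    direction = np.asarray(direction, float) / np.linalg.norm(direction)
    T = np.eye(4)
    T[:3, 3] = distance * direction
    return T

def apply_transform(T, points):
    """Apply a 4x4 transform to an (N, 3) array of surface points."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]

# A door hinged along the z-axis at the origin: rotate a surface point
# on the door by 90 degrees.
door_point = np.array([[1.0, 0.0, 0.0]])
T_door = revolute_transform([0, 0, 1], np.zeros(3), np.pi / 2)
moved = apply_transform(T_door, door_point)  # ~[0, 1, 0]

# A drawer sliding 0.5 units along the x-axis: same matrix formalism.
T_drawer = prismatic_transform([1, 0, 0], 0.5)
slid = apply_transform(T_drawer, door_point)  # [1.5, 0, 0]
```

Because both motion types reduce to the same matrix representation, a network that predicts per-point transformation matrices need not commit to one joint type in advance, which is the generality the abstract emphasizes.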
Related papers
- Unsupervised Dynamics Prediction with Object-Centric Kinematics [22.119612406160073]
We propose Object-Centric Kinematics (OCK), a framework for dynamics prediction leveraging object-centric representations.
OCK consists of low-level structured states of objects' position, velocity, and acceleration.
Our model demonstrates superior performance when handling objects and backgrounds in complex scenes characterized by a wide range of object attributes and dynamic movements.
arXiv Detail & Related papers (2024-04-29T04:47:23Z)
- REACTO: Reconstructing Articulated Objects from a Single Video [64.89760223391573]
We propose a novel deformation model that enhances the rigidity of each part while maintaining flexible deformation of the joints.
Our method outperforms previous works in producing higher-fidelity 3D reconstructions of general articulated objects.
arXiv Detail & Related papers (2024-04-17T08:01:55Z)
- Implicit Modeling of Non-rigid Objects with Cross-Category Signals [28.956412015920936]
MODIF is a multi-object deep implicit function that jointly learns the deformation fields and instance-specific latent codes for multiple objects at once.
We show that MODIF can proficiently learn the shape representation of each organ and their relations to others, to the point that shapes missing from unseen instances can be consistently recovered.
arXiv Detail & Related papers (2023-12-15T22:34:17Z)
- GAMMA: Generalizable Articulation Modeling and Manipulation for Articulated Objects [53.965581080954905]
We propose a novel framework of Generalizable Articulation Modeling and Manipulation for Articulated Objects (GAMMA).
GAMMA learns both articulation modeling and grasp pose affordance from diverse articulated objects with different categories.
Results show that GAMMA significantly outperforms SOTA articulation modeling and manipulation algorithms in unseen and cross-category articulated objects.
arXiv Detail & Related papers (2023-09-28T08:57:14Z)
- ROAM: Robust and Object-Aware Motion Generation Using Neural Pose Descriptors [73.26004792375556]
This paper shows that robustness and generalisation to novel scene objects in 3D object-aware character synthesis can be achieved by training a motion model with as few as one reference object.
We leverage an implicit feature representation trained on object-only datasets, which encodes an SE(3)-equivariant descriptor field around the object.
We demonstrate substantial improvements in 3D virtual character motion and interaction quality and robustness to scenarios with unseen objects.
arXiv Detail & Related papers (2023-08-24T17:59:51Z)
- Unsupervised Kinematic Motion Detection for Part-segmented 3D Shape Collections [14.899075941080541]
We present an unsupervised approach for discovering articulated motions in a part-segmented 3D shape collection.
Our approach is based on a concept we call category closure: any valid articulation of an object's parts should keep the object in the same semantic category.
We evaluate our approach by using it to re-discover part motions from the PartNet-Mobility dataset.
arXiv Detail & Related papers (2022-06-17T00:50:36Z)
- MoDi: Unconditional Motion Synthesis from Diverse Data [51.676055380546494]
We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
arXiv Detail & Related papers (2022-06-16T09:06:25Z)
- Self-supervised Neural Articulated Shape and Appearance Models [18.99030452836038]
We propose a novel approach for learning a representation of the geometry, appearance, and motion of a class of articulated objects.
Our representation learns shape, appearance, and articulation codes that enable independent control of these semantic dimensions.
arXiv Detail & Related papers (2022-05-17T17:50:47Z)
- SPAMs: Structured Implicit Parametric Models [30.19414242608965]
We learn Structured-implicit PArametric Models (SPAMs) as a deformable object representation that structurally decomposes non-rigid object motion into part-based disentangled representations of shape and pose.
Experiments demonstrate that our part-aware shape and pose understanding lead to state-of-the-art performance in reconstruction and tracking of depth sequences of complex deforming object motion.
arXiv Detail & Related papers (2022-01-20T12:33:46Z)
- Watch It Move: Unsupervised Discovery of 3D Joints for Re-Posing of Articulated Objects [73.23249640099516]
We learn both the appearance and the structure of previously unseen articulated objects by observing them move from multiple views.
Our insight is that adjacent parts that move relative to each other must be connected by a joint.
We show that our method works for different structures, from quadrupeds, to single-arm robots, to humans.
arXiv Detail & Related papers (2021-12-21T16:37:48Z)
- Hierarchical Relational Inference [80.00374471991246]
We propose a novel approach to physical reasoning that models objects as hierarchies of parts that may locally behave separately, but also act more globally as a single whole.
Unlike prior approaches, our method learns in an unsupervised fashion directly from raw visual images.
It explicitly distinguishes multiple levels of abstraction and improves over a strong baseline at modeling synthetic and real-world videos.
arXiv Detail & Related papers (2020-10-07T20:19:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.