Building Rearticulable Models for Arbitrary 3D Objects from 4D Point
Clouds
- URL: http://arxiv.org/abs/2306.00979v1
- Date: Thu, 1 Jun 2023 17:59:21 GMT
- Title: Building Rearticulable Models for Arbitrary 3D Objects from 4D Point
Clouds
- Authors: Shaowei Liu, Saurabh Gupta, Shenlong Wang
- Abstract summary: We build rearticulable models for arbitrary everyday man-made objects containing an arbitrary number of parts.
Our method identifies the distinct object parts, what parts are connected to what other parts, and the properties of the joints connecting each part pair.
- Score: 28.330364666426345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We build rearticulable models for arbitrary everyday man-made objects
containing an arbitrary number of parts that are connected together in
arbitrary ways via 1 degree-of-freedom joints. Given point cloud videos of such
everyday objects, our method identifies the distinct object parts, what parts
are connected to what other parts, and the properties of the joints connecting
each part pair. We do this by jointly optimizing the part segmentation,
transformation, and kinematics using a novel energy minimization framework. Our
inferred animatable models enable retargeting to novel poses with sparse
point-correspondence guidance. We test our method on a new articulating robot
dataset, on the Sapien dataset with common daily objects, and on
real-world scans. Experiments show that our method outperforms two leading
prior works on various metrics.
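To make the joint optimization concrete, below is a minimal NumPy sketch of an energy in this spirit: a data term asking each part's rigid motion to explain the points softly assigned to it, plus a kinematic term asking each part's motion relative to its parent to be a 1-DoF rotation about a fixed pivot and axis. The variable names, the assumption of known point correspondences, and the specific revolute penalty are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def rearticulation_energy(x0, xt, seg_logits, R, t, parent,
                          j_pivot, j_axis, lam=1.0):
    """Toy energy over part segmentation, per-part motion, and kinematics.

    x0, xt: (N, 3) canonical and observed points (correspondences assumed
    known here for simplicity); seg_logits: (N, P) soft part assignments;
    R: (P, 3, 3), t: (P, 3) per-part rigid motion for this frame;
    parent[p]: kinematic parent of part p (-1 for the root);
    j_pivot, j_axis: (P, 3) candidate revolute-joint pivot and axis of each
    part, expressed in its parent's frame.
    """
    w = softmax(seg_logits, axis=1)                       # (N, P)
    # Data term: each point should be explained by the motion of its part.
    pred = np.einsum('pij,nj->npi', R, x0) + t[None]      # (N, P, 3)
    data = (w * np.sum((pred - xt[:, None, :]) ** 2, axis=2)).sum()
    # Kinematic term: each part's motion relative to its parent should be a
    # 1-DoF rotation, i.e. it should fix the joint pivot and the joint axis.
    kin = 0.0
    for p, q in enumerate(parent):
        if q < 0:
            continue
        R_rel = R[q].T @ R[p]
        t_rel = R[q].T @ (t[p] - t[q])
        kin += np.sum((R_rel @ j_pivot[p] + t_rel - j_pivot[p]) ** 2)
        kin += np.sum((R_rel @ j_axis[p] - j_axis[p]) ** 2)
    return data + lam * kin
```

In the paper this kind of objective is minimized jointly over the segmentation, the per-frame part transforms, and the kinematic structure; the sketch above only evaluates it for fixed values of all three.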
Related papers
- PickScan: Object discovery and reconstruction from handheld interactions [99.99566882133179]
We develop an interaction-guided and class-agnostic method to reconstruct 3D representations of scenes.
Our main contribution is a novel approach to detecting user-object interactions and extracting the masks of manipulated objects.
Compared to Co-Fusion, the only comparable interaction-based and class-agnostic baseline, our method reduces chamfer distance by 73%.
arXiv Detail & Related papers (2024-11-17T23:09:08Z)
- Hierarchically Structured Neural Bones for Reconstructing Animatable Objects from Casual Videos [37.455535904703204]
We propose a new framework for creating and manipulating 3D models of arbitrary objects using casually captured videos.
Our core ingredient is a novel deformation hierarchy model, which captures the motion of objects with tree-structured bones.
Our framework offers several clear advantages: (1) users can obtain animatable 3D models of arbitrary objects at improved quality from their casual videos, (2) users can manipulate 3D models intuitively at minimal cost, and (3) users can interactively add or delete control points as necessary.
arXiv Detail & Related papers (2024-08-01T07:42:45Z)
- Articulate your NeRF: Unsupervised articulated object modeling via conditional view synthesis [24.007950839144918]
We propose an unsupervised method to learn the pose and part-segmentation of articulated objects with rigid parts.
Our method learns the geometry and appearance of object parts by using an implicit model from the first observation.
arXiv Detail & Related papers (2024-06-24T13:13:31Z)
- CAGE: Controllable Articulation GEneration [14.002289666443529]
We leverage the interplay between part shape, connectivity, and motion using a denoising diffusion-based method.
Our method takes an object category label and a part connectivity graph as input and generates an object's geometry and motion parameters.
Our experiments show that our method outperforms the state-of-the-art in articulated object generation.
arXiv Detail & Related papers (2023-12-15T07:04:27Z)
- ROAM: Robust and Object-Aware Motion Generation Using Neural Pose Descriptors [73.26004792375556]
This paper shows that robustness and generalisation to novel scene objects in 3D object-aware character synthesis can be achieved by training a motion model with as few as one reference object.
We leverage an implicit feature representation trained on object-only datasets, which encodes an SE(3)-equivariant descriptor field around the object.
We demonstrate substantial improvements in 3D virtual character motion and interaction quality and robustness to scenarios with unseen objects.
arXiv Detail & Related papers (2023-08-24T17:59:51Z)
- CA$^2$T-Net: Category-Agnostic 3D Articulation Transfer from Single Image [41.70960551470232]
We present a neural network approach to transfer the motion from a single image of an articulated object to a rest-state (i.e., unarticulated) 3D model.
Our network learns to predict the object's pose, part segmentation, and corresponding motion parameters to reproduce the articulation shown in the input image.
arXiv Detail & Related papers (2023-01-05T18:57:12Z)
- MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare [84.80956484848505]
MegaPose is a method to estimate the 6D pose of novel objects, that is, objects unseen during training.
First, we present a 6D pose refiner based on a render&compare strategy which can be applied to novel objects.
Second, we introduce a novel approach for coarse pose estimation which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner.
arXiv Detail & Related papers (2022-12-13T19:30:03Z)
- 3DMODT: Attention-Guided Affinities for Joint Detection & Tracking in 3D Point Clouds [95.54285993019843]
We propose a method for joint detection and tracking of multiple objects in 3D point clouds.
Our model exploits temporal information, employing multiple frames to detect objects and track them within a single network.
arXiv Detail & Related papers (2022-11-01T20:59:38Z)
- Unsupervised Kinematic Motion Detection for Part-segmented 3D Shape Collections [14.899075941080541]
We present an unsupervised approach for discovering articulated motions in a part-segmented 3D shape collection.
Our approach is based on a concept we call category closure: any valid articulation of an object's parts should keep the object in the same semantic category.
We evaluate our approach by using it to re-discover part motions from the PartNet-Mobility dataset.
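As a toy illustration of the category-closure test (a hedged sketch; `articulate` and `classify` are hypothetical stand-ins, not the paper's actual components), a candidate part motion can be scored by how often articulating the shape leaves the predicted category unchanged:

```python
def category_closure_score(shape, articulate, classify, angles):
    """Fraction of articulation angles that preserve the predicted category.

    `shape` is any 3D shape representation, `articulate(shape, angle)` applies
    a candidate part motion, and `classify(shape)` returns a category label;
    all three are placeholders for whatever components a real system uses.
    """
    base = classify(shape)
    kept = sum(classify(articulate(shape, a)) == base for a in angles)
    return kept / len(angles)
```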
arXiv Detail & Related papers (2022-06-17T00:50:36Z)
- Watch It Move: Unsupervised Discovery of 3D Joints for Re-Posing of Articulated Objects [73.23249640099516]
We learn both the appearance and the structure of previously unseen articulated objects by observing them move from multiple views.
Our insight is that adjacent parts that move relative to each other must be connected by a joint.
We show that our method works for different structures, from quadrupeds, to single-arm robots, to humans.
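To make that insight concrete, here is a small NumPy sketch (an illustration, not the paper's code): if the motion of one part relative to an adjacent part is a rotation, the joint's axis and a point on it can be read off the relative rigid transform.

```python
import numpy as np

def estimate_revolute_joint(R_rel, t_rel):
    """Recover the axis and a pivot point of a revolute joint from the rigid
    motion (R_rel, t_rel) of one part expressed in an adjacent part's frame.

    Assumes the relative motion is (close to) a pure rotation with non-zero
    angle; with no relative rotation there is no fixed point to recover.
    """
    # The rotation axis is the eigenvector of R_rel with eigenvalue 1.
    eigvals, eigvecs = np.linalg.eig(R_rel)
    axis = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    axis /= np.linalg.norm(axis)
    # Points fixed by the motion satisfy (I - R_rel) c = t_rel; the system is
    # rank-deficient along the axis, so least squares returns one such point.
    pivot, *_ = np.linalg.lstsq(np.eye(3) - R_rel, t_rel, rcond=None)
    return axis, pivot
```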
arXiv Detail & Related papers (2021-12-21T16:37:48Z)
- MultiBodySync: Multi-Body Segmentation and Motion Estimation via 3D Scan Synchronization [61.015704878681795]
We present a novel, end-to-end trainable multi-body motion segmentation and rigid registration framework for 3D point clouds.
The two non-trivial challenges posed by this multi-scan, multi-body setting are (i) guaranteeing correspondence and segmentation consistency across multiple input point clouds, and (ii) obtaining robust motion-based rigid-body segmentation applicable to novel object categories.
arXiv Detail & Related papers (2021-01-17T06:36:28Z)