Unsupervised Kinematic Motion Detection for Part-segmented 3D Shape Collections
- URL: http://arxiv.org/abs/2206.08497v1
- Date: Fri, 17 Jun 2022 00:50:36 GMT
- Title: Unsupervised Kinematic Motion Detection for Part-segmented 3D Shape Collections
- Authors: Xianghao Xu, Yifan Ruan, Srinath Sridhar, Daniel Ritchie
- Abstract summary: We present an unsupervised approach for discovering articulated motions in a part-segmented 3D shape collection.
Our approach is based on a concept we call category closure: any valid articulation of an object's parts should keep the object in the same semantic category.
We evaluate our approach by using it to re-discover part motions from the PartNet-Mobility dataset.
- Score: 14.899075941080541
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: 3D models of manufactured objects are important for populating virtual worlds
and for synthetic data generation for vision and robotics. To be most useful,
such objects should be articulated: their parts should move when interacted
with. While articulated object datasets exist, creating them is
labor-intensive. Learning-based prediction of part motions can help, but all
existing methods require annotated training data. In this paper, we present an
unsupervised approach for discovering articulated motions in a part-segmented
3D shape collection. Our approach is based on a concept we call category
closure: any valid articulation of an object's parts should keep the object in
the same semantic category (e.g. a chair stays a chair). We operationalize this
concept with an algorithm that optimizes a shape's part motion parameters such
that it can transform into other shapes in the collection. We evaluate our
approach by using it to re-discover part motions from the PartNet-Mobility
dataset. For almost all shape categories, our method's predicted motion
parameters have low error with respect to ground truth annotations,
outperforming two supervised motion prediction methods.
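The abstract describes optimizing a shape's part motion parameters so that articulating the shape can transform it into other shapes in the collection. The following is a minimal sketch of that idea, not the authors' implementation: all shapes, part names, and the grid-search optimizer are illustrative assumptions. It articulates one part about a hypothetical hinge and searches for the angle that minimizes chamfer distance to a target shape.

```python
# Minimal sketch of the "category closure" idea (hypothetical shapes and
# parameters): optimize a part's articulation angle so that the articulated
# source shape matches another shape in the collection.
import numpy as np

def chamfer(a, b):
    """Symmetric chamfer distance between point sets a (N,3) and b (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def articulate(part, axis, origin, angle):
    """Rotate a part's points by `angle` around a hinge axis through `origin`
    (Rodrigues' rotation formula)."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    return (part - origin) @ R.T + origin

def fit_angle(moving_part, static_part, target, axis, origin,
              angles=np.linspace(0.0, np.pi / 2, 90)):
    """Grid-search the articulation angle whose resulting shape is closest
    to the target shape (the 'closure' objective)."""
    def loss(t):
        moved = np.vstack([static_part,
                           articulate(moving_part, axis, origin, t)])
        return chamfer(moved, target)
    return min(angles, key=loss)
```

In this toy setup, recovering a known articulation amounts to rotating the moving part by some ground-truth angle to form the target and checking that `fit_angle` finds an angle close to it; the actual method optimizes full motion parameters (axis, origin, type, and range) across a shape collection rather than a single angle by grid search.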
Related papers
- CAGE: Controllable Articulation GEneration [14.002289666443529]
We leverage the interplay between part shape, connectivity, and motion using a denoising diffusion-based method.
Our method takes an object category label and a part connectivity graph as input and generates an object's geometry and motion parameters.
Our experiments show that our method outperforms the state-of-the-art in articulated object generation.
arXiv Detail & Related papers (2023-12-15T07:04:27Z)
- ROAM: Robust and Object-Aware Motion Generation Using Neural Pose Descriptors [73.26004792375556]
This paper shows that robustness and generalisation to novel scene objects in 3D object-aware character synthesis can be achieved by training a motion model with as few as one reference object.
We leverage an implicit feature representation trained on object-only datasets, which encodes an SE(3)-equivariant descriptor field around the object.
We demonstrate substantial improvements in 3D virtual character motion and interaction quality and robustness to scenarios with unseen objects.
arXiv Detail & Related papers (2023-08-24T17:59:51Z)
- PARIS: Part-level Reconstruction and Motion Analysis for Articulated Objects [17.191728053966873]
We address the task of simultaneous part-level reconstruction and motion parameter estimation for articulated objects.
We present PARIS: a self-supervised, end-to-end architecture that learns part-level implicit shape and appearance models.
Our method generalizes better across object categories, and outperforms baselines and prior work that are given 3D point clouds as input.
arXiv Detail & Related papers (2023-08-14T18:18:00Z)
- Building Rearticulable Models for Arbitrary 3D Objects from 4D Point Clouds [28.330364666426345]
We build rearticulable models for arbitrary everyday man-made objects containing an arbitrary number of parts.
Our method identifies the distinct object parts, what parts are connected to what other parts, and the properties of the joints connecting each part pair.
arXiv Detail & Related papers (2023-06-01T17:59:21Z)
- ShapeShift: Superquadric-based Object Pose Estimation for Robotic Grasping [85.38689479346276]
Current techniques heavily rely on a reference 3D object, limiting their generalizability and making it expensive to expand to new object categories.
This paper proposes ShapeShift, a superquadric-based framework for object pose estimation that predicts the object's pose relative to a primitive shape which is fitted to the object.
arXiv Detail & Related papers (2023-04-10T20:55:41Z)
- Semi-Weakly Supervised Object Kinematic Motion Prediction [56.282759127180306]
Given a 3D object, kinematic motion prediction aims to identify the mobile parts as well as the corresponding motion parameters.
We propose a graph neural network to learn the map between hierarchical part-level segmentation and mobile parts parameters.
The network predictions yield a large-scale set of 3D objects with pseudo-labeled mobility information.
arXiv Detail & Related papers (2023-03-31T02:37:36Z)
- Segmenting Moving Objects via an Object-Centric Layered Representation [100.26138772664811]
We introduce an object-centric segmentation model with a depth-ordered layer representation.
We introduce a scalable pipeline for generating synthetic training data with multiple objects.
We evaluate the model on standard video segmentation benchmarks.
arXiv Detail & Related papers (2022-07-05T17:59:43Z)
- Discovering Objects that Can Move [55.743225595012966]
We study the problem of object discovery -- separating objects from the background without manual labels.
Existing approaches utilize appearance cues, such as color, texture, and location, to group pixels into object-like regions.
We choose to focus on dynamic objects -- entities that can move independently in the world.
arXiv Detail & Related papers (2022-03-18T21:13:56Z)
- DyStaB: Unsupervised Object Segmentation via Dynamic-Static Bootstrapping [72.84991726271024]
We describe an unsupervised method to detect and segment portions of images of live scenes that are seen moving as a coherent whole.
Our method first partitions the motion field by minimizing the mutual information between segments.
It uses the segments to learn object models that can be used for detection in a static image.
arXiv Detail & Related papers (2020-08-16T22:05:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.