Neural Shape Mating: Self-Supervised Object Assembly with Adversarial
Shape Priors
- URL: http://arxiv.org/abs/2205.14886v1
- Date: Mon, 30 May 2022 06:58:01 GMT
- Title: Neural Shape Mating: Self-Supervised Object Assembly with Adversarial
Shape Priors
- Authors: Yun-Chun Chen, Haoda Li, Dylan Turpin, Alec Jacobson, Animesh Garg
- Abstract summary: We introduce a novel task, pairwise 3D geometric shape mating, and propose Neural Shape Mating (NSM) to tackle this problem.
Given the point clouds of two object parts of an unknown category, NSM learns to reason about the fit of the two parts and predict a pair of 3D poses that tightly mate them together.
We present a self-supervised data collection pipeline that generates pairwise shape mating data with ground truth by randomly cutting an object mesh into two parts.
- Score: 45.187868277839314
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning to autonomously assemble shapes is a crucial skill for many robotic
applications. While the majority of existing part assembly methods focus on
correctly posing semantic parts to recreate a whole object, we interpret
assembly more literally: as mating geometric parts together to achieve a snug
fit. By focusing on shape alignment rather than semantic cues, we can achieve
across-category generalization. In this paper, we introduce a novel task,
pairwise 3D geometric shape mating, and propose Neural Shape Mating (NSM) to
tackle this problem. Given the point clouds of two object parts of an unknown
category, NSM learns to reason about the fit of the two parts and predict a
pair of 3D poses that tightly mate them together. We couple the training of NSM
with an implicit shape reconstruction task to make NSM more robust to imperfect
point cloud observations. To train NSM, we present a self-supervised data
collection pipeline that generates pairwise shape mating data with ground truth
by randomly cutting an object mesh into two parts, resulting in a dataset that
consists of 200K shape mating pairs from numerous object meshes with diverse
cut types. We train NSM on the collected dataset and compare it with several
point cloud registration methods and one part assembly baseline. Extensive
experimental results and ablation studies under various settings demonstrate
the effectiveness of the proposed algorithm. Additional material is available
at: https://neural-shape-mating.github.io/
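The self-supervised data pipeline described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration in NumPy: it cuts a sampled point cloud with a single random plane (the paper cuts the mesh itself with diverse cut types) and scrambles each part with a random rigid pose, which then serves as the free ground-truth label. All function names are illustrative, not from the NSM codebase.

```python
import numpy as np

def random_plane_cut(points, rng):
    """Split a point set into two parts with a random plane through the centroid.
    (Simplified stand-in: NSM cuts the mesh itself with diverse cut types.)"""
    normal = rng.normal(size=3)
    normal /= np.linalg.norm(normal)
    side = (points - points.mean(axis=0)) @ normal
    return points[side >= 0], points[side < 0]

def random_pose(rng):
    """Sample a random rotation (QR of a Gaussian matrix) and translation."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:  # flip a column to ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    return q, rng.uniform(-0.5, 0.5, size=3)

rng = np.random.default_rng(0)
cloud = rng.uniform(-1.0, 1.0, size=(2048, 3))  # stand-in for mesh-sampled points

# Cut the cloud into two parts, then scramble one with a random SE(3) pose.
part_a, part_b = random_plane_cut(cloud, rng)
Ra, ta = random_pose(rng)
scrambled_a = part_a @ Ra.T + ta

# The sampled pose is the free ground-truth label: undoing it mates the
# scrambled part back into the whole, which is what NSM learns to predict.
restored_a = (scrambled_a - ta) @ Ra
```

Repeating this over many meshes and many random cuts yields mating pairs with exact pose supervision at no labeling cost, which is how the paper's 200K-pair dataset is assembled in spirit.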
Related papers
- Self-supervised 3D Point Cloud Completion via Multi-view Adversarial Learning [61.14132533712537]
We propose MAL-SPC, a framework that effectively leverages both object-level and category-specific geometric similarities to complete missing structures.
Our MAL-SPC does not require any 3D complete supervision and only necessitates a single partial point cloud for each object.
arXiv Detail & Related papers (2024-07-13T06:53:39Z)
- Neural Implicit Representation for Building Digital Twins of Unknown Articulated Objects [42.32306418464438]
We address the problem of building digital twins of unknown articulated objects from two RGBD scans of the object at different articulation states.
Our method first reconstructs object-level shape at each state, then recovers the underlying articulation model.
It also handles more than one movable part and does not rely on any object shape or structure priors.
arXiv Detail & Related papers (2024-04-01T19:23:00Z)
- Building Rearticulable Models for Arbitrary 3D Objects from 4D Point Clouds [28.330364666426345]
We build rearticulable models for arbitrary everyday man-made objects containing an arbitrary number of parts.
Our method identifies the distinct object parts, what parts are connected to what other parts, and the properties of the joints connecting each part pair.
arXiv Detail & Related papers (2023-06-01T17:59:21Z)
- NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go [109.88509362837475]
We present NeuroMorph, a new neural network architecture that takes two 3D shapes as input.
NeuroMorph produces a smooth interpolation and point-to-point correspondences between them.
It works well for a large variety of input shapes, including non-isometric pairs from different object categories.
arXiv Detail & Related papers (2021-06-17T12:25:44Z)
- Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud [50.56461318879761]
We propose Geometry-Disentangled Attention Network (GDANet) for 3D point cloud processing.
GDANet disentangles point clouds into the contour and flat parts of 3D objects, denoted respectively by sharp and gentle variation components.
Experiments on 3D object classification and segmentation benchmarks demonstrate that GDANet achieves state-of-the-art results with fewer parameters.
arXiv Detail & Related papers (2020-12-20T13:35:00Z)
- Refinement of Predicted Missing Parts Enhance Point Cloud Completion [62.997667081978825]
Point cloud completion is the task of predicting complete geometry from partial observations using a point set representation for a 3D shape.
Previous approaches propose neural networks to directly estimate the whole point cloud through encoder-decoder models fed by the incomplete point set.
This paper proposes an end-to-end neural network architecture that focuses on computing the missing geometry and merging the known input and the predicted point cloud.
arXiv Detail & Related papers (2020-10-08T22:01:23Z)
- Generative 3D Part Assembly via Dynamic Graph Learning [34.108515032411695]
Part assembly is a challenging yet crucial task in 3D computer vision and robotics.
We propose an assembly-oriented dynamic graph learning framework that leverages an iterative graph neural network as a backbone.
arXiv Detail & Related papers (2020-06-14T04:26:42Z)
- Learning 3D Human Shape and Pose from Dense Body Parts [117.46290013548533]
We propose a Decompose-and-aggregate Network (DaNet) to learn 3D human shape and pose from dense correspondences of body parts.
Messages from local streams are aggregated to enhance robust prediction of rotation-based poses.
Our method is validated on both indoor and real-world datasets including Human3.6M, UP3D, COCO, and 3DPW.
arXiv Detail & Related papers (2019-12-31T15:09:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.