COALESCE: Component Assembly by Learning to Synthesize Connections
- URL: http://arxiv.org/abs/2008.01936v2
- Date: Sun, 8 Nov 2020 08:11:55 GMT
- Title: COALESCE: Component Assembly by Learning to Synthesize Connections
- Authors: Kangxue Yin, Zhiqin Chen, Siddhartha Chaudhuri, Matthew Fisher,
Vladimir G. Kim, Hao Zhang
- Abstract summary: We introduce COALESCE, the first data-driven framework for component-based shape assembly.
We use a joint synthesis step, which is learned from data, to fill the gap and arrive at a natural and plausible part joint.
We demonstrate that our method significantly outperforms prior approaches including baseline deep models for 3D shape synthesis.
- Score: 45.120186220205994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce COALESCE, the first data-driven framework for component-based
shape assembly which employs deep learning to synthesize part connections. To
handle geometric and topological mismatches between parts, we remove the
mismatched portions via erosion, and rely on a joint synthesis step, which is
learned from data, to fill the gap and arrive at a natural and plausible part
joint. Given a set of input parts extracted from different objects, COALESCE
automatically aligns them and synthesizes plausible joints to connect the parts
into a coherent 3D object represented by a mesh. The joint synthesis network,
designed to focus on joint regions, reconstructs the surface between the parts
by predicting an implicit shape representation that agrees with existing parts,
while generating a smooth and topologically meaningful connection. We employ
test-time optimization to further ensure that the synthesized joint region
closely aligns with the input parts to create realistic component assemblies
from diverse input parts. We demonstrate that our method significantly
outperforms prior approaches including baseline deep models for 3D shape
synthesis, as well as state-of-the-art methods for shape completion.
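To make the pipeline above concrete, the following is a minimal, hypothetical Python sketch (not the authors' code) of the three stages it describes: eroding part geometry near the joint, evaluating a joint field (a simple distance-based stand-in for the learned joint-synthesis network), and a toy test-time optimization that nudges a translation so the synthesized joint agrees with the input parts. All function names, parameters, and the random example parts are assumptions made purely for illustration.

```python
# Hypothetical, simplified illustration of the three stages described in the abstract:
# (1) erode part geometry near the joint, (2) synthesize the joint region
# (here faked with a distance field instead of the learned implicit network),
# (3) test-time optimization to align the synthesized joint with the input parts.
import numpy as np

def erode_near_joint(points, joint_center, radius=0.15):
    """Drop the possibly mismatched geometry within `radius` of the joint."""
    keep = np.linalg.norm(points - joint_center, axis=1) > radius
    return points[keep]

def joint_field(query, parts, offset):
    """Stand-in for the learned joint-synthesis network: an occupancy-like value
    per query point, high near the (shifted) retained part surfaces."""
    pts = np.concatenate(parts, axis=0) + offset
    d = np.min(np.linalg.norm(query[:, None, :] - pts[None, :, :], axis=2), axis=1)
    return np.exp(-d)

def test_time_align(parts, joint_center, steps=50, lr=0.05):
    """Toy test-time optimization: nudge a translation so the synthesized field
    agrees with the part points closest to the joint."""
    boundary = np.concatenate(
        [p[np.argsort(np.linalg.norm(p - joint_center, axis=1))[:32]] for p in parts])
    offset = np.zeros(3)
    for _ in range(steps):
        base = -joint_field(boundary, parts, offset).mean()
        grad = np.zeros(3)
        for k in range(3):  # finite-difference gradient of the agreement loss
            e = np.zeros(3); e[k] = 1e-3
            grad[k] = (-joint_field(boundary, parts, offset + e).mean() - base) / 1e-3
        offset -= lr * grad
    return offset

rng = np.random.default_rng(0)
part_a = rng.normal(loc=[-0.5, 0.0, 0.0], scale=0.2, size=(256, 3))  # e.g. a chair leg
part_b = rng.normal(loc=[+0.5, 0.0, 0.0], scale=0.2, size=(256, 3))  # e.g. a seat edge
joint_center = np.zeros(3)
parts = [erode_near_joint(p, joint_center) for p in (part_a, part_b)]
print("test-time alignment offset:", test_time_align(parts, joint_center))
```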
Related papers
- ShapeMatcher: Self-Supervised Joint Shape Canonicalization, Segmentation, Retrieval and Deformation [47.94499636697971]
We present ShapeMatcher, a unified self-supervised learning framework for joint shape canonicalization, segmentation, retrieval and deformation.
The key insight of ShapeMatcher is the simultaneous training of four highly associated processes: canonicalization, segmentation, retrieval, and deformation.
arXiv Detail & Related papers (2023-11-18T15:44:57Z)
- Geometrically Consistent Partial Shape Matching [50.29468769172704]
Finding correspondences between 3D shapes is a crucial problem in computer vision and graphics.
An often neglected but essential property of shape matching is geometric consistency.
We propose a novel integer linear programming formulation for partial shape matching; a toy sketch of the assignment core appears below.
arXiv Detail & Related papers (2023-09-10T12:21:42Z)
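As a hedged illustration of what an integer-linear-programming matching core can look like (only the binary assignment part; the paper's geometric-consistency constraints are not reproduced), the toy example below matches two small descriptor sets with scipy.optimize.milp (SciPy >= 1.9). All data and thresholds are invented.

```python
# Toy ILP for partial matching: binary variable x[i, j] selects a correspondence
# between vertex i of shape A and vertex j of shape B. This omits geometric-consistency
# constraints and only shows the assignment core.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

rng = np.random.default_rng(1)
desc_a = rng.normal(size=(5, 16))    # per-vertex descriptors on shape A
desc_b = rng.normal(size=(7, 16))    # per-vertex descriptors on shape B
cost = np.linalg.norm(desc_a[:, None] - desc_b[None, :], axis=2)  # (5, 7) matching costs

n, m = cost.shape
c = cost.ravel()                                     # minimize total matching cost
row_sum = np.zeros((n, n * m))                       # each A-vertex matched at most once
col_sum = np.zeros((m, n * m))                       # each B-vertex matched at most once
for i in range(n):
    row_sum[i, i * m:(i + 1) * m] = 1.0
for j in range(m):
    col_sum[j, j::m] = 1.0
constraints = [
    LinearConstraint(row_sum, 0, 1),
    LinearConstraint(col_sum, 0, 1),
    LinearConstraint(np.ones((1, n * m)), 4, n),     # require at least 4 matches
]
res = milp(c=c, constraints=constraints,
           integrality=np.ones(n * m), bounds=Bounds(0, 1))
matches = np.argwhere(res.x.reshape(n, m) > 0.5)
print("matched pairs (A index, B index):", matches.tolist())
```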
- Building Rearticulable Models for Arbitrary 3D Objects from 4D Point Clouds [28.330364666426345]
We build rearticulable models for arbitrary everyday man-made objects containing an arbitrary number of parts.
Our method identifies the distinct object parts, which parts are connected to which other parts, and the properties of the joints connecting each part pair (a toy example of recovering one such joint property is sketched below).
arXiv Detail & Related papers (2023-06-01T17:59:21Z)
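Purely as an illustrative aside on "properties of the joints": the snippet below recovers a revolute joint's axis and angle from a part's relative rotation between two observed frames. This is standard rigid-motion math, not the paper's method; the poses are made up.

```python
# Hypothetical example: the relative rotation of a part between two frames, expressed
# as a rotation vector, gives the revolute joint axis (direction) and the turned angle.
import numpy as np
from scipy.spatial.transform import Rotation

# made-up part orientations (relative to the parent part) at two observed frames
r_t0 = Rotation.from_euler("xyz", [10, 0, 0], degrees=True)
r_t1 = Rotation.from_euler("xyz", [55, 0, 0], degrees=True)

rel = r_t1 * r_t0.inv()            # relative rotation between the two frames
rotvec = rel.as_rotvec()           # axis * angle (radians)
angle = np.linalg.norm(rotvec)
axis = rotvec / angle
print(f"joint axis ~ {axis.round(3)}, rotated by {np.degrees(angle):.1f} degrees")
```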
- Attention-based Part Assembly for 3D Volumetric Shape Modeling [0.0]
We propose a VoxAttention network architecture for attention-based part assembly.
Experimental results show that our method outperforms most state-of-the-art methods for the part relation-aware 3D shape modeling task.
arXiv Detail & Related papers (2023-04-17T16:53:27Z)
- Category-Level Multi-Part Multi-Joint 3D Shape Assembly [36.74814134087434]
We propose a hierarchical graph learning approach composed of two levels of graph representation learning.
The part graph takes part geometries as input to build the desired shape structure.
The joint-level graph uses part-joint information and focuses on matching and aligning joints; a minimal data-structure sketch of such a two-level representation follows below.
arXiv Detail & Related papers (2023-03-10T19:02:26Z)
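As a loose sketch of a two-level part/joint representation (invented data structures and features; not the paper's graph networks), one naive message-passing step on the part level might look like this:

```python
# Hypothetical two-level representation: a part graph (nodes = parts) and a joint
# graph (nodes = candidate joints linking part pairs), plus one naive feature-averaging
# step on the part level. Invented data, for illustration only.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PartNode:
    name: str
    feat: np.ndarray                      # e.g. a geometry descriptor

@dataclass
class JointNode:
    part_a: str
    part_b: str
    feat: np.ndarray                      # e.g. a descriptor of the joint region

@dataclass
class TwoLevelGraph:
    parts: dict = field(default_factory=dict)
    joints: list = field(default_factory=list)

    def add_part(self, name, feat):
        self.parts[name] = PartNode(name, np.asarray(feat, dtype=float))

    def add_joint(self, a, b, feat):
        self.joints.append(JointNode(a, b, np.asarray(feat, dtype=float)))

    def part_message_passing(self):
        """One naive step: average each part's feature with its joint neighbours."""
        new = {}
        for name, node in self.parts.items():
            nbrs = [self.parts[j.part_b if j.part_a == name else j.part_a].feat
                    for j in self.joints if name in (j.part_a, j.part_b)]
            new[name] = np.mean([node.feat] + nbrs, axis=0)
        for name, feat in new.items():
            self.parts[name].feat = feat

g = TwoLevelGraph()
g.add_part("seat", [1.0, 0.0]); g.add_part("leg", [0.0, 1.0]); g.add_part("back", [0.5, 0.5])
g.add_joint("seat", "leg", [0.2, 0.8]); g.add_joint("seat", "back", [0.9, 0.1])
g.part_message_passing()
print({k: v.feat.round(2).tolist() for k, v in g.parts.items()})
```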
- ANISE: Assembly-based Neural Implicit Surface rEconstruction [12.745433575962842]
We present ANISE, a method that reconstructs a 3D shape from partial observations (images or sparse point clouds).
The shape is formulated as an assembly of neural implicit functions, each representing a different part instance.
We demonstrate that, when performing reconstruction by decoding part representations into implicit functions, our method achieves state-of-the-art part-aware reconstruction results from both images and sparse point clouds (a minimal sketch of the implicit-assembly idea follows below).
arXiv Detail & Related papers (2022-05-27T00:01:40Z)
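A minimal sketch of the "assembly of implicit functions" idea: analytic signed distance functions stand in for the learned per-part neural implicits, and the assembled shape is their union. Everything here is a simplification for illustration.

```python
# Minimal sketch: a shape assembled as the union of per-part implicit functions.
# Analytic SDFs stand in for the learned per-part neural implicits.
import numpy as np

def sphere_sdf(p, center, r):
    return np.linalg.norm(p - center, axis=-1) - r

def box_sdf(p, center, half):
    q = np.abs(p - center) - half
    return (np.linalg.norm(np.maximum(q, 0.0), axis=-1)
            + np.minimum(q.max(axis=-1), 0.0))

# each "part" is one implicit function; the full shape is their union (min of SDFs)
parts = [
    lambda p: sphere_sdf(p, np.array([0.0, 0.4, 0.0]), 0.3),   # e.g. a knob
    lambda p: box_sdf(p, np.array([0.0, 0.0, 0.0]),            # e.g. a base plate
                      np.array([0.5, 0.1, 0.5])),
]

def assembled_sdf(p):
    return np.min(np.stack([f(p) for f in parts], axis=0), axis=0)

# occupancy on a coarse grid (inside where the assembled SDF is negative)
lin = np.linspace(-0.7, 0.7, 24)
grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1).reshape(-1, 3)
occupancy = assembled_sdf(grid) < 0.0
print("occupied cells:", int(occupancy.sum()), "of", occupancy.size)
```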
- The Shape Part Slot Machine: Contact-based Reasoning for Generating 3D Shapes from Parts [33.924785333723115]
We present a new method for assembling novel 3D shapes from existing parts by performing contact-based reasoning.
Our method represents each shape as a graph of "slots," where each slot is a region of contact between two shape parts.
We demonstrate that our method generates shapes that outperform those of existing modeling-by-assembly approaches in terms of quality, diversity, and structural complexity (a rough sketch of the contact-slot idea follows below).
arXiv Detail & Related papers (2021-12-01T15:54:54Z)
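A rough sketch of the contact-based "slot" representation, with slots detected by a simple proximity threshold between part point clouds; the threshold, part names, and data below are invented for the example.

```python
# Rough sketch: represent a shape as a graph whose nodes are parts and whose
# edges ("slots") are contact regions between part pairs, detected here by a
# simple proximity threshold between part point clouds.
import numpy as np

def contact_slot(points_a, points_b, eps=0.05):
    """Return the points of part A lying within eps of part B (the slot region),
    or None if the two parts are not in contact."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    mask = d.min(axis=1) < eps
    return points_a[mask] if mask.any() else None

def build_slot_graph(parts, eps=0.05):
    slots = {}
    names = list(parts)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            region = contact_slot(parts[a], parts[b], eps)
            if region is not None:
                slots[(a, b)] = region        # edge (a, b) with its contact region
    return slots

rng = np.random.default_rng(2)
parts = {
    "seat": rng.uniform([-0.5, 0.0, -0.5], [0.5, 0.05, 0.5], size=(400, 3)),
    "leg":  rng.uniform([0.35, -0.6, 0.35], [0.45, 0.02, 0.45], size=(200, 3)),
    "back": rng.uniform([-0.5, 0.0, 0.45], [0.5, 0.8, 0.5], size=(300, 3)),
}
slot_graph = build_slot_graph(parts)
print("slots:", {edge: len(region) for edge, region in slot_graph.items()})
```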
- Learning to Segment Human Body Parts with Synthetically Trained Deep Convolutional Networks [58.0240970093372]
This paper presents a new framework for human body part segmentation based on Deep Convolutional Neural Networks trained using only synthetic data.
The proposed approach achieves cutting-edge results without the need to train the models on real annotated data of human body parts.
arXiv Detail & Related papers (2021-02-02T12:26:50Z)
- Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image [102.44347847154867]
We propose a novel formulation that jointly recovers the geometry of a 3D object as a set of primitives.
Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives.
Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry (a toy binary-tree-of-primitives sketch follows below).
arXiv Detail & Related papers (2020-04-02T17:58:05Z)
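An illustrative sketch of a binary tree of primitives, using recursive median splits with axis-aligned boxes as the primitives; the actual method learns its decomposition rather than using this fixed heuristic.

```python
# Illustrative sketch of a binary tree of primitives: recursively split a point
# cloud along its widest axis and keep an axis-aligned bounding box at each node.
import numpy as np

def build_tree(points, depth=2):
    box = (points.min(axis=0), points.max(axis=0))          # the primitive at this node
    if depth == 0 or len(points) < 8:
        return {"box": box, "children": None}
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
    median = np.median(points[:, axis])
    left = points[points[:, axis] <= median]
    right = points[points[:, axis] > median]
    if len(left) == 0 or len(right) == 0:
        return {"box": box, "children": None}
    return {"box": box,
            "children": (build_tree(left, depth - 1), build_tree(right, depth - 1))}

def count_leaves(node):
    if node["children"] is None:
        return 1
    return sum(count_leaves(c) for c in node["children"])

rng = np.random.default_rng(3)
cloud = rng.normal(size=(1024, 3)) * np.array([1.0, 0.3, 0.2])   # elongated "object"
tree = build_tree(cloud, depth=3)
print("leaf primitives:", count_leaves(tree))
```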
- Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering [53.16864661460889]
Recent works have succeeded with regression-based methods that estimate parametric body models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation (a loose sketch of such supervision follows below).
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
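As a loose illustration of supervising with part segmentation: a per-pixel cross-entropy between soft part masks (which in the paper would come from the part-level differentiable renderer) and a ground-truth part label map. The masks below are random stand-ins, so the snippet only demonstrates the loss, not the renderer.

```python
# Loose illustration of part-segmentation supervision: per-pixel cross-entropy
# between soft part masks and a ground-truth part label map.
import numpy as np

def part_segmentation_loss(soft_masks, gt_labels, eps=1e-8):
    """soft_masks: (P, H, W) non-negative scores over P body parts;
    gt_labels: (H, W) integer part labels in [0, P)."""
    probs = soft_masks / (soft_masks.sum(axis=0, keepdims=True) + eps)
    h, w = gt_labels.shape
    picked = probs[gt_labels, np.arange(h)[:, None], np.arange(w)[None, :]]
    return float(-np.log(picked + eps).mean())

rng = np.random.default_rng(4)
num_parts, H, W = 6, 64, 64
soft_masks = rng.random((num_parts, H, W))       # stand-in for rendered part masks
gt_labels = rng.integers(0, num_parts, (H, W))   # stand-in ground-truth segmentation
print("part segmentation loss:", round(part_segmentation_loss(soft_masks, gt_labels), 4))
```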