Category-Level Multi-Part Multi-Joint 3D Shape Assembly
- URL: http://arxiv.org/abs/2303.06163v1
- Date: Fri, 10 Mar 2023 19:02:26 GMT
- Title: Category-Level Multi-Part Multi-Joint 3D Shape Assembly
- Authors: Yichen Li, Kaichun Mo, Yueqi Duan, He Wang, Jiequan Zhang, Lin Shao,
Wojciech Matusik, Leonidas Guibas
- Abstract summary: We propose a hierarchical graph learning approach composed of two levels of graph representation learning.
The part graph takes part geometries as input to build the desired shape structure.
The joint-level graph uses part joint information and focuses on matching and aligning joints.
- Score: 36.74814134087434
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Shape assembly composes complex shape geometries by arranging simple part
geometries and has wide applications in autonomous robotic assembly and CAD
modeling. Existing works focus on geometry reasoning and neglect the actual
physical assembly process of matching and fitting joints, which are the contact
surfaces connecting different parts. In this paper, we consider contacting
joints for the task of multi-part assembly. A successful joint-optimized
assembly needs to satisfy the bilateral objectives of shape structure and joint
alignment. We propose a hierarchical graph learning approach composed of two
levels of graph representation learning. The part graph takes part geometries
as input to build the desired shape structure. The joint-level graph uses part
joint information and focuses on matching and aligning joints. The two kinds
of information are combined to achieve the bilateral objectives. Extensive
experiments demonstrate that our method outperforms previous methods, achieving
better shape structure and higher joint alignment accuracy.
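The two-level idea in the abstract can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' implementation: a part-level graph aggregates part geometry features, a joint-level graph aggregates joint features, and the two are combined per part to feed a downstream pose predictor. All names, shapes, and the mean-pool message passing are assumptions made for illustration.

```python
import numpy as np

def message_pass(feats, edges):
    """One round of mean-aggregation message passing.
    feats: (n, d) node features; edges: list of (src, dst) pairs."""
    out = feats.copy()
    for dst in range(feats.shape[0]):
        neighbors = [src for src, d in edges if d == dst]
        if neighbors:
            out[dst] = feats[dst] + feats[neighbors].mean(axis=0)
    return out

def hierarchical_assembly_features(part_feats, part_edges,
                                   joint_feats, joint_edges, joint_to_part):
    # Part-level graph: reason about the overall shape structure.
    part_h = message_pass(part_feats, part_edges)
    # Joint-level graph: reason about joint matching and alignment.
    joint_h = message_pass(joint_feats, joint_edges)
    # Combine the two levels: pool each joint's feature back into its part.
    combined = part_h.copy()
    for j, p in enumerate(joint_to_part):
        combined[p] += joint_h[j]
    return combined  # per-part features for a downstream pose head

# Toy example: 3 parts, 2 joints (joint 0 on part 0, joint 1 on part 1).
parts = np.ones((3, 4))
joints = np.ones((2, 4))
out = hierarchical_assembly_features(parts, [(0, 1), (1, 0)],
                                     joints, [(0, 1), (1, 0)], [0, 1])
print(out.shape)  # (3, 4)
```

In this toy run, parts 0 and 1 exchange messages and each receive a joint feature, while part 2 is isolated and keeps its input feature, mirroring how joint information sharpens the features of parts that must actually mate.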
Related papers
- Str-L Pose: Integrating Point and Structured Line for Relative Pose Estimation in Dual-Graph [45.115555973941255]
Relative pose estimation is crucial for various computer vision applications, including robotics and autonomous driving.
We propose a Geometric Correspondence Graph neural network that integrates point features with extra structured line segments.
This integration of matched points and line segments further exploits the geometry constraints and enhances model performance across different environments.
arXiv Detail & Related papers (2024-08-28T12:33:26Z)
- 3D Geometric Shape Assembly via Efficient Point Cloud Matching [59.241448711254485]
We introduce Proxy Match Transform (PMT), an approximate high-order feature transform layer that enables reliable matching between mating surfaces of parts.
Building upon PMT, we introduce a new framework, dubbed Proxy Match TransformeR (PMTR), for the geometric assembly task.
We evaluate the proposed PMTR on the large-scale 3D geometric shape assembly benchmark dataset of Breaking Bad.
arXiv Detail & Related papers (2024-07-15T08:50:02Z)
- Geometrically Consistent Partial Shape Matching [50.29468769172704]
Finding correspondences between 3D shapes is a crucial problem in computer vision and graphics.
An often neglected but essential property of shape matching is geometric consistency.
We propose a novel integer linear programming partial shape matching formulation.
arXiv Detail & Related papers (2023-09-10T12:21:42Z)
- The Shape Part Slot Machine: Contact-based Reasoning for Generating 3D Shapes from Parts [33.924785333723115]
We present a new method for assembling novel 3D shapes from existing parts by performing contact-based reasoning.
Our method represents each shape as a graph of "slots," where each slot is a region of contact between two shape parts.
We demonstrate that our method generates shapes that outperform existing modeling-by-assembly approaches in terms of quality, diversity, and structural complexity.
arXiv Detail & Related papers (2021-12-01T15:54:54Z)
- JoinABLe: Learning Bottom-up Assembly of Parametric CAD Joints [34.15876903985372]
JoinABLe is a learning-based method that assembles parts together to form joints.
Our results show that by making network predictions over a graph representation of solid models, we can outperform multiple baseline methods with an accuracy (79.53%) that approaches human performance (80%).
arXiv Detail & Related papers (2021-11-24T20:05:59Z)
- Joint Deep Multi-Graph Matching and 3D Geometry Learning from Inhomogeneous 2D Image Collections [57.60094385551773]
We propose a trainable framework for learning a deformable 3D geometry model from inhomogeneous image collections.
We in addition obtain the underlying 3D geometry of the objects depicted in the 2D images.
arXiv Detail & Related papers (2021-03-31T17:25:36Z)
- DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape Generation [98.96086261213578]
We introduce DSG-Net, a deep neural network that learns a disentangled structured and geometric mesh representation for 3D shapes.
This supports a range of novel shape generation applications with disentangled control, such as varying the structure while keeping the geometry unchanged, and vice versa.
Our method not only supports controllable generation applications but also produces high-quality synthesized shapes, outperforming state-of-the-art methods.
arXiv Detail & Related papers (2020-08-12T17:06:51Z)
- COALESCE: Component Assembly by Learning to Synthesize Connections [45.120186220205994]
We introduce COALESCE, the first data-driven framework for component-based shape assembly.
We use a joint synthesis step, which is learned from data, to fill the gap and arrive at a natural and plausible part joint.
We demonstrate that our method significantly outperforms prior approaches including baseline deep models for 3D shape synthesis.
arXiv Detail & Related papers (2020-08-05T05:12:06Z)
- Generative 3D Part Assembly via Dynamic Graph Learning [34.108515032411695]
Part assembly is a challenging yet crucial task in 3D computer vision and robotics.
We propose an assembly-oriented dynamic graph learning framework that leverages an iterative graph neural network as a backbone.
arXiv Detail & Related papers (2020-06-14T04:26:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.