Combinatorial 3D Shape Generation via Sequential Assembly
- URL: http://arxiv.org/abs/2004.07414v2
- Date: Wed, 25 Nov 2020 03:51:49 GMT
- Title: Combinatorial 3D Shape Generation via Sequential Assembly
- Authors: Jungtaek Kim, Hyunsoo Chung, Jinhwi Lee, Minsu Cho, Jaesik Park
- Abstract summary: Sequential assembly with geometric primitives has drawn attention in robotics and 3D vision since it yields a practical blueprint to construct a target shape.
We propose a combinatorial 3D shape generation framework to cope with the huge number of feasible combinations, which causes greedy methods to fall short.
Experimental results demonstrate that our method successfully generates 3D shapes and simulates more realistic generation processes.
- Score: 40.2815083025929
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sequential assembly with geometric primitives has drawn attention in robotics
and 3D vision since it yields a practical blueprint to construct a target
shape. However, due to its combinatorial property, a greedy method falls short
of generating a sequence of volumetric primitives. To alleviate this
consequence induced by a huge number of feasible combinations, we propose a
combinatorial 3D shape generation framework. The proposed framework reflects an
important aspect of human generation processes in real life -- we often create
a 3D shape by sequentially assembling unit primitives with geometric
constraints. To find the desired combination with respect to combination evaluations,
we adopt Bayesian optimization, which can efficiently exploit and explore the
feasible regions constrained by the current primitive placements. An evaluation
function simultaneously conveys global structure guidance for the assembly
process and stability in terms of gravity and external forces. Experimental
results demonstrate that our method successfully
generates combinatorial 3D shapes and simulates more realistic generation
processes. We also introduce a new dataset for combinatorial 3D shape
generation. All the code is available at
https://github.com/POSTECH-CVLab/Combinatorial-3D-Shape-Generation.
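To make the assembly loop concrete, below is a minimal Python sketch of sequential primitive placement driven by an evaluation function that mixes target coverage with a stability term, in the spirit of the abstract. It is not the authors' implementation: the helper names (UNIT, occupancy_gain, stability_penalty, propose_candidates) and the 2x1x1 unit-primitive assumption are hypothetical, and the Gaussian-process-based Bayesian optimization described in the paper is replaced here, for brevity, by greedy scoring over randomly sampled feasible candidates.

```python
# Minimal sketch of evaluation-guided sequential assembly (not the authors'
# implementation). Helper names and the 2x1x1 unit-primitive assumption are
# hypothetical; the paper's Bayesian optimization is replaced by greedy
# scoring over randomly sampled candidate placements.
import numpy as np

UNIT = np.array([2.0, 1.0, 1.0])  # assumed unit primitive size (x, y, z)

def covered(points, center):
    """Boolean mask of target points inside an axis-aligned primitive."""
    lo, hi = center - UNIT / 2, center + UNIT / 2
    return np.all((points >= lo) & (points <= hi), axis=1)

def occupancy_gain(placements, candidate, target_points):
    """Fraction of target points newly covered by the candidate placement."""
    already = np.zeros(len(target_points), dtype=bool)
    for p in placements:
        already |= covered(target_points, p)
    return float((covered(target_points, candidate) & ~already).mean())

def stability_penalty(placements, candidate):
    """Crude stability proxy: penalize horizontal offset from the nearest
    supporting primitive one layer below; floating placements are infeasible."""
    if candidate[2] <= UNIT[2] / 2 + 1e-6:  # resting on the ground plane
        return 0.0
    below = [p for p in placements if abs(p[2] - (candidate[2] - UNIT[2])) < 1e-6]
    if not below:
        return np.inf
    return min(float(np.linalg.norm(candidate[:2] - p[:2])) for p in below)

def evaluate(placements, candidate, target_points, lam=0.5):
    """Combine global structure guidance (coverage) with stability."""
    return occupancy_gain(placements, candidate, target_points) \
        - lam * stability_penalty(placements, candidate)

def propose_candidates(placements, rng, n=64):
    """Sample feasible placements on the ground or adjacent to the assembly."""
    if not placements:
        cands = np.zeros((n, 3))
        cands[:, :2] = rng.uniform(-4.0, 4.0, size=(n, 2))
        cands[:, 2] = UNIT[2] / 2
        return cands
    anchors = np.array(placements)[rng.integers(len(placements), size=n)]
    return anchors + rng.choice([-1, 0, 1], size=(n, 3)) * UNIT

def assemble(target_points, budget=50, seed=0):
    """Greedily place primitives until no candidate improves the evaluation."""
    rng = np.random.default_rng(seed)
    placements = []
    for _ in range(budget):
        cands = propose_candidates(placements, rng)
        scores = [evaluate(placements, c, target_points) for c in cands]
        if max(scores) <= 0.0:
            break
        placements.append(cands[int(np.argmax(scores))])
    return placements

# Example: approximate a small axis-aligned box with unit primitives.
target = np.random.default_rng(1).uniform([-2, -1, 0], [2, 1, 2], size=(2000, 3))
print(f"placed {len(assemble(target))} primitives")
```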
Related papers
- CompGS: Unleashing 2D Compositionality for Compositional Text-to-3D via Dynamically Optimizing 3D Gaussians [97.15119679296954]
CompGS is a novel generative framework that employs 3D Gaussian Splatting (GS) for efficient, compositional text-to-3D content generation.
CompGS can be easily extended to controllable 3D editing, facilitating scene generation.
arXiv Detail & Related papers (2024-10-28T04:35:14Z)
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- ComboVerse: Compositional 3D Assets Creation Using Spatially-Aware Diffusion Guidance [76.7746870349809]
We present ComboVerse, a 3D generation framework that produces high-quality 3D assets with complex compositions by learning to combine multiple models.
Our proposed framework emphasizes spatial alignment of objects, compared with standard score distillation sampling.
arXiv Detail & Related papers (2024-03-19T03:39:43Z)
- A Scalable Combinatorial Solver for Elastic Geometrically Consistent 3D Shape Matching [69.14632473279651]
We present a scalable algorithm for globally optimizing over the space of geometrically consistent mappings between 3D shapes.
We propose a novel primal coupled with a Lagrange dual problem that is several orders of magnitude faster than previous solvers.
arXiv Detail & Related papers (2022-04-27T09:47:47Z)
- GLASS: Geometric Latent Augmentation for Shape Spaces [28.533018136138825]
We use geometrically motivated energies to augment and thus boost a sparse collection of example (training) models.
We analyze the Hessian of the as-rigid-as-possible (ARAP) energy to sample from and project to the underlying (local) shape space.
We present multiple examples of interesting and meaningful shape variations even when starting from as few as 3-10 training shapes.
arXiv Detail & Related papers (2021-08-06T17:56:23Z)
- Learning to generate shape from global-local spectra [0.0]
We build our method on top of recent advances in the so-called shape-from-spectrum paradigm.
We consider the spectrum a natural and ready-to-use representation for encoding the variability of shapes.
Our results confirm the improvement of the proposed approach in comparison to existing and alternative methods.
arXiv Detail & Related papers (2021-08-04T16:39:56Z)
- Training Data Generating Networks: Shape Reconstruction via Bi-level Optimization [52.17872739634213]
We propose a novel 3D shape representation for 3D shape reconstruction from a single image.
We train a network to generate a training set which will be fed into another learning algorithm to define the shape.
arXiv Detail & Related papers (2020-10-16T09:52:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.