Learning Generative Models of Shape Handles
- URL: http://arxiv.org/abs/2004.03028v1
- Date: Mon, 6 Apr 2020 22:35:55 GMT
- Title: Learning Generative Models of Shape Handles
- Authors: Matheus Gadelha, Giorgio Gori, Duygu Ceylan, Radomir Mech, Nathan
Carr, Tamy Boubekeur, Rui Wang, Subhransu Maji
- Abstract summary: We present a generative model to synthesize 3D shapes as sets of handles.
Our model can generate handle sets with varying cardinality and different types of handles.
We show that the resulting shape representations are intuitive and achieve higher quality than the previous state of the art.
- Score: 43.41382075567803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a generative model to synthesize 3D shapes as sets of handles --
lightweight proxies that approximate the original 3D shape -- for applications
in interactive editing, shape parsing, and building compact 3D representations.
Our model can generate handle sets with varying cardinality and different types
of handles (Figure 1). Key to our approach is a deep architecture that predicts
both the parameters and existence of shape handles, and a novel similarity
measure that can easily accommodate different types of handles, such as cuboids
or sphere-meshes. We leverage the recent advances in semantic 3D annotation as
well as automatic shape summarizing techniques to supervise our approach. We
show that the resulting shape representations are intuitive and achieve
higher quality than the previous state of the art. Finally, we demonstrate how
our method can be used in applications such as interactive shape editing,
completion, and interpolation, leveraging the latent space learned by our model
to guide these tasks. Project page: http://mgadelha.me/shapehandles.
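The abstract names two concrete technical pieces: a decoder that predicts both the parameters and the existence of each handle (so generated sets can vary in cardinality), and a similarity measure that applies uniformly to different handle types. The sketch below illustrates both ideas in PyTorch; all names, dimensions, and the Chamfer-style distance are illustrative assumptions, not the paper's exact architecture or loss.

```python
# Minimal sketch (hypothetical names and dimensions) of a decoder that maps a
# latent code to a fixed-size bank of handles, each with a parameter vector
# and an existence probability. Thresholding the probabilities yields handle
# sets of varying cardinality, as described in the abstract.
import torch
import torch.nn as nn

class HandleDecoder(nn.Module):
    def __init__(self, latent_dim=128, max_handles=16, handle_dim=10):
        # handle_dim=10 could parameterize a cuboid, e.g. 3 values for the
        # center, 3 for the scale, and 4 for a rotation quaternion.
        super().__init__()
        self.max_handles, self.handle_dim = max_handles, handle_dim
        self.backbone = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.param_head = nn.Linear(256, max_handles * handle_dim)
        self.exist_head = nn.Linear(256, max_handles)

    def forward(self, z):
        h = self.backbone(z)
        params = self.param_head(h).view(-1, self.max_handles, self.handle_dim)
        existence = torch.sigmoid(self.exist_head(h))  # per-handle probability
        return params, existence

def handle_set_distance(points_a, points_b):
    """Chamfer-style distance between point samples drawn from the surfaces
    of two handle sets. Comparing sampled points rather than raw parameters
    keeps the measure agnostic to the handle type (cuboids, sphere-meshes);
    this is an illustrative stand-in, not the paper's exact formulation."""
    d = torch.cdist(points_a, points_b)  # (P, R) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Usage: decode a latent code and keep only the handles predicted to exist.
decoder = HandleDecoder()
params, existence = decoder(torch.randn(1, 128))
kept = params[0][existence[0] > 0.5]  # variable-cardinality handle set
```

Thresholding the existence probabilities is what lets a fixed-size prediction head emit handle sets of varying cardinality, and comparing sampled surface points rather than raw parameters is one way a single distance can accommodate different handle parameterizations.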
Related papers
- DECOLLAGE: 3D Detailization by Controllable, Localized, and Learned Geometry Enhancement [38.719572669042925]
We present a 3D modeling method which enables end-users to refine or detailize 3D shapes using machine learning.
We show that our ability to localize details enables novel interactive creative applications.
arXiv Detail & Related papers (2024-09-10T00:51:49Z)
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to 3D domains, and seek a stronger ability of 3D shape generation by improving auto-regressive models at capacity and scalability simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z)
- PHRIT: Parametric Hand Representation with Implicit Template [24.699079936958892]
PHRIT is a novel approach for parametric hand mesh modeling with an implicit template.
Our method represents deformable hand shapes using signed distance fields (SDFs) with part-based shape priors.
We evaluate PHRIT on multiple downstream tasks, including skeleton-driven hand reconstruction, shape reconstruction from point clouds, and single-view 3D reconstruction.
arXiv Detail & Related papers (2023-09-26T13:22:33Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
- SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation [89.47132156950194]
We present a novel framework built to simplify 3D asset generation for amateur users.
Our method supports a variety of input modalities that can be easily provided by a human.
Our model combines all of these tasks into one Swiss-army-knife tool.
arXiv Detail & Related papers (2022-12-08T18:59:05Z)
- Learning to Generate 3D Shapes from a Single Example [28.707149807472685]
We present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales.
We train our generative model on a voxel pyramid of the reference shape, without the need for any external supervision or manual annotation.
The resulting shapes present variations across different scales, and at the same time retain the global structure of the reference shape.
arXiv Detail & Related papers (2022-08-05T01:05:32Z)
- SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation [50.53931728235875]
We present SP-GAN, a new unsupervised sphere-guided generative model for direct synthesis of 3D shapes in the form of point clouds.
Compared with existing models, SP-GAN is able to synthesize diverse and high-quality shapes with fine details.
arXiv Detail & Related papers (2021-08-10T06:49:45Z)
- ShapeAssembly: Learning to Generate Programs for 3D Shape Structure Synthesis [38.27280837835169]
We propose ShapeAssembly, a domain-specific "assembly-language" for 3D shape structures.
We show how to extract ShapeAssembly programs from existing shape structures in the PartNet dataset.
We evaluate our approach by comparing shapes output by our generated programs to those from other recent shape structure models.
arXiv Detail & Related papers (2020-09-17T02:26:45Z)
- DualSDF: Semantic Shape Manipulation using a Two-Level Representation [54.62411904952258]
We propose DualSDF, a representation expressing shapes at two levels of granularity, one capturing fine details and the other representing an abstracted proxy shape.
Our two-level model gives rise to a new shape manipulation technique in which a user can interactively manipulate the coarse proxy shape and see the changes instantly mirrored in the high-resolution shape.
arXiv Detail & Related papers (2020-04-06T17:59:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.