DECOR-GAN: 3D Shape Detailization by Conditional Refinement
- URL: http://arxiv.org/abs/2012.09159v2
- Date: Mon, 29 Mar 2021 03:04:08 GMT
- Title: DECOR-GAN: 3D Shape Detailization by Conditional Refinement
- Authors: Zhiqin Chen, Vladimir G. Kim, Matthew Fisher, Noam Aigerman, Hao Zhang, Siddhartha Chaudhuri
- Abstract summary: We introduce a deep generative network for 3D shape detailization, akin to stylization with the style being geometric details.
We demonstrate that our method can refine a coarse shape into a variety of detailed shapes with different styles.
- Score: 50.8801457082181
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a deep generative network for 3D shape detailization, akin to
stylization with the style being geometric details. We address the challenge of
creating large varieties of high-resolution and detailed 3D geometry from a
small set of exemplars by treating the problem as that of geometric detail
transfer. Given a low-resolution coarse voxel shape, our network refines it,
via voxel upsampling, into a higher-resolution shape enriched with geometric
details. The output shape preserves the overall structure (or content) of the
input, while its detail generation is conditioned on an input "style code"
corresponding to a detailed exemplar. Our 3D detailization via conditional
refinement is realized by a generative adversarial network, coined DECOR-GAN.
The network utilizes a 3D CNN generator for upsampling coarse voxels and a 3D
PatchGAN discriminator to enforce local patches of the generated model to be
similar to those in the training detailed shapes. During testing, a style code
is fed into the generator to condition the refinement. We demonstrate that our
method can refine a coarse shape into a variety of detailed shapes with
different styles. The generated results are evaluated in terms of content
preservation, plausibility, and diversity. Comprehensive ablation studies are
conducted to validate our network designs. Code is available at
https://github.com/czq142857/DECOR-GAN.
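The two components the abstract names (a style-conditioned 3D CNN generator that upsamples coarse voxels, and a 3D PatchGAN discriminator that scores local patches) can be sketched compactly. Below is a minimal PyTorch sketch, not the released DECOR-GAN implementation: the channel widths, the style-code dimension (8), and the 4x upsampling factor are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above, not the released architecture):
# a generator that upsamples a coarse voxel grid conditioned on a style
# code, and a PatchGAN-style discriminator over 3D patches.
import torch
import torch.nn as nn

class VoxelRefiner(nn.Module):
    """Upsamples a coarse voxel grid, conditioned on a per-shape style code."""
    def __init__(self, style_dim=8, ch=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(1 + style_dim, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
        )
        # Two stride-2 transposed convs give a 4x resolution increase.
        self.upsample = nn.Sequential(
            nn.ConvTranspose3d(ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.ConvTranspose3d(ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(ch, 1, 3, padding=1), nn.Sigmoid(),  # occupancy in [0, 1]
        )

    def forward(self, coarse, style):
        # coarse: (B, 1, D, H, W); style: (B, style_dim)
        b, _, d, h, w = coarse.shape
        # Broadcast the style code over every voxel and concatenate as channels.
        s = style.view(b, -1, 1, 1, 1).expand(-1, -1, d, h, w)
        x = self.encode(torch.cat([coarse, s], dim=1))
        return self.upsample(x)

class PatchDiscriminator3D(nn.Module):
    """Outputs a grid of real/fake logits, one per local receptive field."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(ch, 2 * ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(2 * ch, 1, 3, padding=1),  # patch-wise logits
        )

    def forward(self, voxels):
        return self.net(voxels)

if __name__ == "__main__":
    g = VoxelRefiner()
    d = PatchDiscriminator3D()
    coarse = torch.rand(1, 1, 16, 16, 16)  # low-resolution input shape
    style = torch.randn(1, 8)              # code of a detailed exemplar
    fine = g(coarse, style)                # -> (1, 1, 64, 64, 64)
    scores = d(fine)                       # -> grid of patch logits
    print(fine.shape, scores.shape)
```

Because the discriminator's output is a spatial grid rather than a single scalar, each logit judges only a local patch of the generated shape, which is what pushes local details toward those of the exemplars while leaving the generator free to preserve global structure.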
Related papers
- DECOLLAGE: 3D Detailization by Controllable, Localized, and Learned Geometry Enhancement [38.719572669042925]
We present a 3D modeling method which enables end-users to refine or detailize 3D shapes using machine learning.
We show that our ability to localize details enables novel interactive creative workflows and applications.
arXiv Detail & Related papers (2024-09-10T00:51:49Z)
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- Robust 3D Tracking with Quality-Aware Shape Completion [67.9748164949519]
For robust 3D tracking, we propose a synthetic target representation: dense, complete point clouds, obtained via shape completion, that depict the target shape precisely.
Specifically, we design a voxelized 3D tracking framework with shape completion, in which we propose a quality-aware shape completion mechanism to alleviate the adverse effect of noisy historical predictions.
arXiv Detail & Related papers (2023-12-17T04:50:24Z)
- ShaDDR: Interactive Example-Based Geometry and Texture Generation via 3D Shape Detailization and Differentiable Rendering [24.622120688131616]
ShaDDR is an example-based deep generative neural network which produces a high-resolution textured 3D shape.
Our method learns to detailize the geometry via multi-resolution voxel upsampling and generate textures on voxel surfaces.
The generated shape preserves the overall structure of the input coarse voxel model.
arXiv Detail & Related papers (2023-06-08T02:35:30Z)
- Dual Octree Graph Networks for Learning Adaptive Volumetric Shape Representations [21.59311861556396]
Our method encodes the volumetric field of a 3D shape with an adaptive feature volume organized by an octree.
An encoder-decoder network is designed to learn the adaptive feature volume based on the graph convolutions over the dual graph of octree nodes.
Our method effectively encodes shape details, enables fast 3D shape reconstruction, and exhibits good generality for modeling 3D shapes out of training categories.
arXiv Detail & Related papers (2022-05-05T17:56:34Z)
- 3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations [81.45521258652734]
We propose a method to create plausible geometric and texture style variations of 3D objects.
Our method can create many novel stylized shapes, enabling effortless 3D content creation and style-aware data augmentation.
arXiv Detail & Related papers (2021-08-30T02:28:31Z)
- DmifNet: 3D Shape Reconstruction Based on Dynamic Multi-Branch Information Fusion [14.585272577456472]
3D object reconstruction from a single-view image is a long-standing challenging problem.
Previous work struggled to accurately reconstruct 3D shapes with complex topology and rich details at the edges and corners.
We propose a Dynamic Multi-branch Information Fusion Network (DmifNet) which can recover a high-fidelity 3D shape of arbitrary topology from a 2D image.
arXiv Detail & Related papers (2020-11-21T11:31:27Z)
- RISA-Net: Rotation-Invariant Structure-Aware Network for Fine-Grained 3D Shape Retrieval [46.02391761751015]
Fine-grained 3D shape retrieval aims to retrieve 3D shapes similar to a query shape in a repository with models belonging to the same class.
We introduce a novel deep architecture, RISA-Net, which learns rotation invariant 3D shape descriptors.
Our method learns the importance of each part's geometric and structural information when generating the final compact latent feature of a 3D shape.
arXiv Detail & Related papers (2020-10-02T13:06:12Z)
- DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape Generation [98.96086261213578]
We introduce DSG-Net, a deep neural network that learns a disentangled structured and geometric mesh representation for 3D shapes.
This supports a range of novel shape generation applications with disentangled control, such as modifying a shape's structure while keeping its geometry unchanged, and vice versa.
Our method not only supports controllable generation applications but also produces high-quality synthesized shapes, outperforming state-of-the-art methods.
arXiv Detail & Related papers (2020-08-12T17:06:51Z)
- Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)