ShaDDR: Interactive Example-Based Geometry and Texture Generation via 3D
Shape Detailization and Differentiable Rendering
- URL: http://arxiv.org/abs/2306.04889v2
- Date: Wed, 22 Nov 2023 03:02:46 GMT
- Authors: Qimin Chen, Zhiqin Chen, Hang Zhou, Hao Zhang
- Abstract summary: ShaDDR is an example-based deep generative neural network which produces a high-resolution textured 3D shape.
Our method learns to detailize the geometry via multi-resolution voxel upsampling and generate textures on voxel surfaces.
The generated shape preserves the overall structure of the input coarse voxel model.
- Score: 24.622120688131616
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present ShaDDR, an example-based deep generative neural network which
produces a high-resolution textured 3D shape through geometry detailization and
conditional texture generation applied to an input coarse voxel shape. Trained
on a small set of detailed and textured exemplar shapes, our method learns to
detailize the geometry via multi-resolution voxel upsampling and generate
textures on voxel surfaces via differentiable rendering against exemplar
texture images from a few views. The generation is interactive, taking less
than 1 second to produce a 3D model with voxel resolutions up to 512^3. The
generated shape preserves the overall structure of the input coarse voxel
model, while the style of the generated geometric details and textures can be
manipulated through learned latent codes. In the experiments, we show that our
method can generate higher-resolution shapes with plausible and improved
geometric details and clean textures compared to prior works. Furthermore, we
showcase the ability of our method to learn geometric details and textures from
shapes reconstructed from real-world photos. In addition, we have developed an
interactive modeling application to demonstrate the generalizability of our
method to various user inputs and the controllability it offers, allowing users
to interactively sculpt a coarse voxel shape to define the overall structure of
the detailized 3D shape. Code and data are available at
https://github.com/qiminchen/ShaDDR.
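The multi-resolution voxel upsampling described above can be illustrated with a minimal sketch. This is plain NumPy with nearest-neighbor repetition standing in for the learned generator at each scale; the function `upsample_voxels` and the toy 16^3 box shape are illustrative assumptions, not the authors' code (see the repository above for the actual implementation):

```python
import numpy as np

def upsample_voxels(voxels: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbor upsampling of a voxel grid along all three axes."""
    for axis in range(3):
        voxels = np.repeat(voxels, factor, axis=axis)
    return voxels

# A toy 16^3 coarse shape: a solid box occupying the center of the grid.
coarse = np.zeros((16, 16, 16), dtype=np.uint8)
coarse[4:12, 4:12, 4:12] = 1

# Multi-resolution cascade: 16^3 -> 32^3 -> 64^3. ShaDDR's generator follows the
# same coarse-to-fine scaffold up to 512^3, but replaces each naive repetition
# with a learned refinement that adds geometric detail at every scale.
fine = coarse
for _ in range(2):
    fine = upsample_voxels(fine)

print(fine.shape)       # (64, 64, 64)
print(int(fine.sum()))  # occupancy grows 8x per doubling: 512 * 64 = 32768
```

Note how the coarse occupancy is preserved exactly under this naive scheme; the learned detailization similarly preserves the overall structure of the input coarse voxel model while reshaping surfaces.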
Related papers
- Pandora3D: A Comprehensive Framework for High-Quality 3D Shape and Texture Generation [58.77520205498394]
This report presents a comprehensive framework for generating high-quality 3D shapes and textures from diverse input prompts.
The framework consists of 3D shape generation and texture generation.
This report details the system architecture, experimental results, and potential future directions to improve and expand the framework.
arXiv Detail & Related papers (2025-02-20T04:22:30Z)
- Textured Mesh Saliency: Bridging Geometry and Texture for Human Perception in 3D Graphics [50.23625950905638]
We present a new dataset for textured mesh saliency, created through an innovative eye-tracking experiment in a six degrees of freedom (6-DOF) VR environment.
Our proposed model predicts saliency maps for textured mesh surfaces by treating each triangular face as an individual unit and assigning a saliency density value to reflect the importance of each local surface region.
arXiv Detail & Related papers (2024-12-11T08:27:33Z)
- Tactile DreamFusion: Exploiting Tactile Sensing for 3D Generation [39.702921832009466]
We introduce a new method that incorporates touch as an additional modality to improve the geometric details of generated 3D assets.
We design a lightweight 3D texture field to synthesize visual and tactile textures, guided by 2D diffusion model priors.
We are the first to leverage high-resolution tactile sensing to enhance geometric details for 3D generation tasks.
arXiv Detail & Related papers (2024-12-09T18:59:45Z)
- DECOLLAGE: 3D Detailization by Controllable, Localized, and Learned Geometry Enhancement [38.719572669042925]
We present a 3D modeling method which enables end-users to refine or detailize 3D shapes using machine learning.
We show that our ability to localize details enables novel interactive creative applications.
arXiv Detail & Related papers (2024-09-10T00:51:49Z)
- ShapeClipper: Scalable 3D Shape Learning from Single-View Images via Geometric and CLIP-based Consistency [39.7058456335011]
We present ShapeClipper, a novel method that reconstructs 3D object shapes from real-world single-view RGB images.
ShapeClipper learns shape reconstruction from a set of single-view segmented images.
We evaluate our method over three challenging real-world datasets.
arXiv Detail & Related papers (2023-04-13T03:53:12Z)
- DreamStone: Image as Stepping Stone for Text-Guided 3D Shape Generation [105.97545053660619]
We present a new text-guided 3D shape generation approach DreamStone.
It uses images as a stepping stone to bridge the gap between text and shape modalities for generating 3D shapes without requiring paired text and 3D data.
Our approach is generic, flexible, and scalable, and it can be easily integrated with various SVR models to expand the generative space and improve the generative fidelity.
arXiv Detail & Related papers (2023-03-24T03:56:23Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- DECOR-GAN: 3D Shape Detailization by Conditional Refinement [50.8801457082181]
We introduce a deep generative network for 3D shape detailization, akin to stylization with the style being geometric details.
We demonstrate that our method can refine a coarse shape into a variety of detailed shapes with different styles.
arXiv Detail & Related papers (2020-12-16T18:52:10Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.