Semi-supervised Synthesis of High-Resolution Editable Textures for 3D Humans
- URL: http://arxiv.org/abs/2103.17266v1
- Date: Wed, 31 Mar 2021 17:58:34 GMT
- Title: Semi-supervised Synthesis of High-Resolution Editable Textures for 3D Humans
- Authors: Bindita Chaudhuri, Nikolaos Sarafianos, Linda Shapiro, Tony Tung
- Abstract summary: We introduce a novel approach to generate diverse high fidelity texture maps for 3D human meshes in a semi-supervised setup.
Given a segmentation mask defining the layout of the semantic regions in the texture map, our network generates high-resolution textures with a variety of styles, which are then used for rendering purposes.
- Score: 14.098628848491147
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel approach to generate diverse high fidelity texture maps
for 3D human meshes in a semi-supervised setup. Given a segmentation mask
defining the layout of the semantic regions in the texture map, our network
generates high-resolution textures with a variety of styles, which are then used
for rendering purposes. To accomplish this task, we propose a Region-adaptive
Adversarial Variational AutoEncoder (ReAVAE) that learns the probability
distribution of the style of each region individually so that the style of the
generated texture can be controlled by sampling from the region-specific
distributions. In addition, we introduce a data generation technique to augment
our training set with data lifted from single-view RGB inputs. Our training
strategy allows the mixing of reference image styles with arbitrary styles for
different regions, a property which can be valuable for virtual try-on AR/VR
applications. Experimental results show that our method synthesizes better
texture maps compared to prior work while enabling independent layout and style
controllability.
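To make the region-specific sampling concrete, below is a minimal sketch of how per-region styles could be drawn and mixed; the region count, style dimension, and all function names are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch of region-adaptive style sampling in the spirit of ReAVAE.
# Region count, style dimension, and function names are illustrative assumptions.
import torch

NUM_REGIONS, STYLE_DIM = 6, 64   # e.g. skin, hair, top, bottom, shoes, background

def reparameterize(mu, logvar):
    """Standard VAE reparameterization: z = mu + sigma * eps, eps ~ N(0, I)."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def mix_region_styles(ref_mu, ref_logvar, keep_regions):
    """Keep the reference image's style for `keep_regions`; sample every other
    region from the prior N(0, I), giving independent per-region style control."""
    z = torch.randn(NUM_REGIONS, STYLE_DIM)             # prior samples for all regions
    for r in keep_regions:                              # overwrite the kept regions
        z[r] = reparameterize(ref_mu[r], ref_logvar[r])
    return z   # passed to the generator together with the segmentation mask

# Usage: keep the reference hair (1) and top (2) styles, randomize the rest.
ref_mu = torch.zeros(NUM_REGIONS, STYLE_DIM)
ref_logvar = torch.zeros(NUM_REGIONS, STYLE_DIM)
styles = mix_region_styles(ref_mu, ref_logvar, keep_regions=[1, 2])
```
Mixing keeps the reference posterior for the selected regions and the prior for the rest, which is the property that enables the virtual try-on use case described in the abstract.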
Related papers
- DreamPolish: Domain Score Distillation With Progressive Geometry Generation [66.94803919328815]
We introduce DreamPolish, a text-to-3D generation model that excels in producing refined geometry and high-quality textures.
In the geometry construction phase, our approach leverages multiple neural representations to enhance the stability of the synthesis process.
In the texture generation phase, we introduce a novel score distillation objective, namely domain score distillation (DSD), to guide neural representations toward a domain with photorealistic and consistent textures.
arXiv Detail & Related papers (2024-11-03T15:15:01Z)
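For context, this is the standard score distillation sampling (SDS) gradient that objectives in this family build on; DSD's domain-specific form is given in the paper, so the baseline below is an assumption about the starting point, with $x = g(\theta)$ the rendering, $y$ the text prompt, and $\hat{\epsilon}_\phi$ the diffusion model's noise prediction:
```latex
% Generic SDS gradient; DSD modifies the target distribution ("domain"),
% so treat this as the baseline form, not the paper's exact objective.
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta)
  = \mathbb{E}_{t,\epsilon}\!\left[
      w(t)\,\bigl(\hat{\epsilon}_\phi(x_t;\, y, t) - \epsilon\bigr)\,
      \frac{\partial x}{\partial \theta}
    \right], \qquad x = g(\theta)
```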
- TexGen: Text-Guided 3D Texture Generation with Multi-view Sampling and Resampling [37.67373829836975]
We present TexGen, a novel multi-view sampling and resampling framework for texture generation.
Our proposed method produces significantly better texture quality for diverse 3D objects with a high degree of view consistency.
Our proposed texture generation technique can also be applied to texture editing while preserving the original identity.
arXiv Detail & Related papers (2024-08-02T14:24:40Z)
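A rough sketch of one way multi-view outputs can be merged into a single shared texture (weighted splatting with per-texel averaging); the paper's actual sampling-and-resampling schedule is more involved, and `uv`, `weights`, and all shapes here are assumptions:
```python
# Splat several generated views into one shared UV texture so that all views
# read from a single, consistent texture map. `uv` gives integer texel
# coordinates for every pixel of each view.
import torch

def aggregate_views_to_texture(view_images, uv, weights, tex_hw=512):
    """view_images: list of (H, W, 3); uv: list of (H, W, 2) long tensors;
    weights: list of (H, W) per-pixel confidences (e.g. view-angle based)."""
    tex = torch.zeros(tex_hw, tex_hw, 3)
    acc = torch.zeros(tex_hw, tex_hw, 1)
    for img, coords, w in zip(view_images, uv, weights):
        idx = (coords[..., 1].flatten(), coords[..., 0].flatten())   # (v, u) texels
        tex.index_put_(idx, (w[..., None] * img).reshape(-1, 3), accumulate=True)
        acc.index_put_(idx, w.reshape(-1, 1), accumulate=True)
    return tex / acc.clamp(min=1e-6)   # weighted mean color per texel
```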
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that texture's statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
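One plausible realization of the score aggregation strategy is to denoise overlapping fixed-size tiles and average their noise predictions, so the model never sees more than a tile at a time; this sketch is our assumption, not the paper's exact scheme, and `denoise_tile` stands in for the fine-tuned model:
```python
# Tile-based score aggregation: run the model on overlapping tiles of a large
# latent canvas and average the overlapping noise predictions per pixel.
import torch

def aggregate_scores(latent, denoise_tile, tile=64, stride=48):
    """latent: (C, H, W) canvas latent; denoise_tile: model over (C, tile, tile).
    Assumes (H - tile) and (W - tile) are divisible by stride (full coverage)."""
    C, H, W = latent.shape
    out = torch.zeros_like(latent)
    cnt = torch.zeros(1, H, W)
    for y in range(0, H - tile + 1, stride):
        for x in range(0, W - tile + 1, stride):
            out[:, y:y+tile, x:x+tile] += denoise_tile(latent[:, y:y+tile, x:x+tile])
            cnt[:, y:y+tile, x:x+tile] += 1
    return out / cnt   # averaged noise estimate for the whole canvas

# Usage with a stand-in "model" (identity) on a 256x256 latent:
eps = aggregate_scores(torch.randn(4, 256, 256), lambda z: z)
```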
- GenesisTex: Adapting Image Denoising Diffusion to Texture Space [15.907134430301133]
GenesisTex is a novel method for synthesizing textures for 3D geometries from text descriptions.
We maintain a latent texture map for each viewpoint, which is updated with predicted noise on the rendering of the corresponding viewpoint.
Global consistency is achieved through the integration of style consistency mechanisms within the noise prediction network.
arXiv Detail & Related papers (2024-03-26T15:15:15Z)
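A hedged sketch of the per-viewpoint latent texture update described above; `render`, `predict_noise`, and `backproject` are assumed stand-ins for the paper's components, and the style-consistency coupling between views is omitted:
```python
# Each viewpoint keeps its own latent texture map; one step renders that view,
# predicts noise on the rendering, and writes the update back into texture
# space. The update rule here is a simplified denoising step.
def step_view_latents(view_latents, render, predict_noise, backproject, t):
    for v, latent_tex in enumerate(view_latents):
        screen = render(latent_tex, view=v)      # latent texture -> view rendering
        eps = predict_noise(screen, t)           # diffusion noise prediction
        delta = backproject(eps, view=v)         # screen space -> texture space
        view_latents[v] = latent_tex - delta     # simplified denoising update
    return view_latents
```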
- ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis [49.28239918969784]
We introduce a texture-consistent back view synthesis module that transfers the reference image content to the back view.
We also propose a visibility-aware patch consistency regularization for texture mapping and refinement combined with the synthesized back view texture.
arXiv Detail & Related papers (2023-11-28T13:55:53Z)
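The visibility-aware regularization can be pictured as a masked reconstruction loss that only constrains regions actually visible in the synthesized back view; this is our reading of the idea, with all tensor layouts assumed:
```python
# Masked L1: occluded texels carry no gradient, so only visible regions of the
# synthesized back view constrain the texture.
import torch
import torch.nn.functional as F

def visibility_aware_loss(pred_tex, ref_tex, visibility):
    """pred_tex, ref_tex: (B, 3, H, W); visibility: (B, 1, H, W) in [0, 1]."""
    diff = F.l1_loss(pred_tex, ref_tex, reduction="none")
    # Normalize by visible area (3 color channels per visible pixel).
    return (visibility * diff).sum() / (3.0 * visibility.sum().clamp(min=1.0))
```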
- 3D Paintbrush: Local Stylization of 3D Shapes with Cascaded Score Distillation [21.703142822709466]
3D Paintbrush is a technique for automatically texturing local semantic regions on meshes via text descriptions.
Our method is designed to operate directly on meshes, producing texture maps that integrate seamlessly into standard graphics pipelines.
arXiv Detail & Related papers (2023-11-16T05:13:44Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- TMO: Textured Mesh Acquisition of Objects with a Mobile Device by using Differentiable Rendering [54.35405028643051]
We present a new pipeline for acquiring a textured mesh in the wild with a single smartphone.
Our method first introduces an RGBD-aided structure from motion, which can yield filtered depth maps.
We adopt the neural implicit surface reconstruction method, which allows for high-quality mesh reconstruction.
arXiv Detail & Related papers (2023-03-27T10:07:52Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
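The joint optimization reduces to a photometric loop over multi-view observations; in this sketch `render` is a dummy differentiable stand-in for a real renderer such as nvdiffrast, and the parameter shapes and data are purely illustrative:
```python
# Jointly optimize material maps and an environment light against multi-view
# photos via a differentiable renderer and a photometric loss.
import torch

materials = torch.rand(256, 256, 5, requires_grad=True)  # albedo(3)+roughness+metallic
env_light = torch.rand(16, 32, 3, requires_grad=True)    # environment map
opt = torch.optim.Adam([materials, env_light], lr=1e-2)

def render(materials, env_light, view):
    # Dummy differentiable image so the loop runs end to end; a real pipeline
    # would rasterize the mesh and shade it with these parameters.
    return materials[..., :3].mean(dim=(0, 1)) * env_light.mean(dim=(0, 1)) * view

views = [(1.0, torch.rand(3)), (0.8, torch.rand(3))]      # (view param, observation)
for view, target in views:
    opt.zero_grad()
    loss = (render(materials, env_light, view) - target).abs().mean()  # photometric L1
    loss.backward()
    opt.step()
```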
- Vehicle Reconstruction and Texture Estimation Using Deep Implicit Semantic Template Mapping [32.580904361799966]
We introduce VERTEX, an effective solution to recover the 3D shape and intrinsic texture of vehicles from uncalibrated monocular input.
By fusing the global and local features together, our approach is capable of generating consistent and detailed texture in both visible and invisible areas.
arXiv Detail & Related papers (2020-11-30T09:27:10Z)
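As a toy illustration of the global/local fusion mentioned above, a broadcast-and-concatenate merge is one common pattern; the paper's deep implicit semantic template mapping is not modeled here, and all channel sizes are assumptions:
```python
# Broadcast a global appearance code over spatial local features, then merge
# the concatenation with a 1x1 convolution.
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    def __init__(self, local_ch=64, global_ch=128, out_ch=64):
        super().__init__()
        self.merge = nn.Conv2d(local_ch + global_ch, out_ch, kernel_size=1)

    def forward(self, local_feat, global_feat):
        # local_feat: (B, C_l, H, W); global_feat: (B, C_g)
        B, _, H, W = local_feat.shape
        g = global_feat[:, :, None, None].expand(-1, -1, H, W)
        return self.merge(torch.cat([local_feat, g], dim=1))

fused = FeatureFusion()(torch.rand(1, 64, 32, 32), torch.rand(1, 128))
```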
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.