Generating Texture for 3D Human Avatar from a Single Image using
Sampling and Refinement Networks
- URL: http://arxiv.org/abs/2305.00936v1
- Date: Mon, 1 May 2023 16:44:02 GMT
- Title: Generating Texture for 3D Human Avatar from a Single Image using
Sampling and Refinement Networks
- Authors: Sihun Cha, Kwanggyoon Seo, Amirsaman Ashtari, Junyong Noh
- Abstract summary: We propose a texture synthesis method for a 3D human avatar that incorporates geometry information.
A sampler network fills in the occluded regions of the source image and aligns the texture with the surface of the target 3D mesh.
To maintain the clear details of the given image, the sampled and refined textures are blended to produce the final texture map.
- Score: 8.659903550327442
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: There has been significant progress in generating an animatable 3D human
avatar from a single image. However, recovering texture for the 3D human avatar
from a single image has been relatively less addressed. Because the generated
3D human avatar reveals the occluded texture of the given image as it moves, it
is critical to synthesize the occluded texture pattern that is unseen from the
source image. To generate a plausible texture map for 3D human avatars, the
occluded texture pattern needs to be synthesized with respect to the visible
texture from the given image. Moreover, the generated texture should align with
the surface of the target 3D mesh. In this paper, we propose a texture
synthesis method for a 3D human avatar that incorporates geometry information.
The proposed method consists of two convolutional networks for the sampling and
refining process. The sampler network fills in the occluded regions of the
source image and aligns the texture with the surface of the target 3D mesh
using the geometry information. The sampled texture is further refined and
adjusted by the refiner network. To maintain the clear details in the given
image, the sampled and refined textures are blended to produce the final texture
map. To effectively guide the sampler network to achieve its goal, we designed
a curriculum learning scheme that starts from a simple sampling task and
gradually progresses to the task where the alignment needs to be considered. We
conducted experiments to show that our method outperforms previous methods
qualitatively and quantitatively.
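The abstract describes blending the sampled texture (sharp where the source image is visible) with the refined texture (plausible in occluded regions) to form the final texture map. The paper excerpt does not give the exact blending formula, so the following is only a minimal sketch of a mask-based blend; the function name, the visibility-mask semantics, and the linear combination are all assumptions for illustration.

```python
import numpy as np

def blend_textures(sampled, refined, visibility_mask):
    """Illustrative linear blend of two texture maps (not the paper's exact method).

    sampled, refined: (H, W, 3) float arrays in [0, 1].
    visibility_mask:  (H, W) float array in [0, 1]; assumed to be 1 where the
                      source image directly covers the texel and 0 where the
                      texel was occluded and had to be synthesized.
    """
    mask = visibility_mask[..., None]  # add channel axis so it broadcasts over RGB
    # Keep the sharp sampled detail where visible; fall back to the refined
    # texture in occluded regions.
    return mask * sampled + (1.0 - mask) * refined
```

In such a scheme, a soft (non-binary) mask would also smooth the seams between visible and synthesized regions.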
Related papers
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware
Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z) - ConTex-Human: Free-View Rendering of Human from a Single Image with
Texture-Consistent Synthesis [49.28239918969784]
We introduce a texture-consistent back view synthesis module that could transfer the reference image content to the back view.
We also propose a visibility-aware patch consistency regularization for texture mapping and refinement combined with the synthesized back view texture.
arXiv Detail & Related papers (2023-11-28T13:55:53Z) - Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models [21.622420436349245]
We present Text2Room, a method for generating room-scale textured 3D meshes from a given text prompt as input.
We leverage pre-trained 2D text-to-image models to synthesize a sequence of images from different poses.
In order to lift these outputs into a consistent 3D scene representation, we combine monocular depth estimation with a text-conditioned inpainting model.
arXiv Detail & Related papers (2023-03-21T16:21:02Z) - TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z) - Projective Urban Texturing [8.349665441428925]
We propose a method for automatic generation of textures for 3D city meshes in immersive urban environments.
Projective Urban Texturing (PUT) re-targets textural style from real-world panoramic images to unseen urban meshes.
PUT relies on contrastive and adversarial training of a neural architecture designed for unpaired image-to-texture translation.
arXiv Detail & Related papers (2022-01-25T14:56:52Z) - Deep Hybrid Self-Prior for Full 3D Mesh Generation [57.78562932397173]
We propose to exploit a novel hybrid 2D-3D self-prior in deep neural networks to significantly improve the geometry quality.
In particular, we first generate an initial mesh using a 3D convolutional neural network with 3D self-prior, and then encode both 3D information and color information in the 2D UV atlas.
Our method recovers the 3D textured mesh model of high quality from sparse input, and outperforms the state-of-the-art methods in terms of both the geometry and texture quality.
arXiv Detail & Related papers (2021-08-18T07:44:21Z) - OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z) - 3DBooSTeR: 3D Body Shape and Texture Recovery [76.91542440942189]
3DBooSTeR is a novel method to recover a textured 3D body mesh from a partial 3D scan.
The proposed approach decouples the shape and texture completion into two sequential tasks.
arXiv Detail & Related papers (2020-10-23T21:07:59Z) - Novel-View Human Action Synthesis [39.72702883597454]
We present a novel 3D reasoning scheme to synthesize the target viewpoint.
We first estimate the 3D mesh of the target body and transfer the rough textures from the 2D images to the mesh.
We produce a semi-dense textured mesh by propagating the transferred textures both locally, within local geodesic neighborhoods, and globally.
arXiv Detail & Related papers (2020-07-06T15:11:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.