AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis
- URL: http://arxiv.org/abs/2204.03105v1
- Date: Wed, 6 Apr 2022 21:39:24 GMT
- Title: AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis
- Authors: Zhiqin Chen, Kangxue Yin, Sanja Fidler
- Abstract summary: We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
- Score: 78.17671694498185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we address the problem of texture representation for 3D shapes
for the challenging and underexplored tasks of texture transfer and synthesis.
Previous works either apply spherical texture maps which may lead to large
distortions, or use continuous texture fields that yield smooth outputs lacking
details. We argue that the traditional way of representing textures with images
and linking them to a 3D mesh via UV mapping is more desirable, since
synthesizing 2D images is a well-studied problem. We propose AUV-Net which
learns to embed 3D surfaces into a 2D aligned UV space, by mapping the
corresponding semantic parts of different 3D shapes to the same location in the
UV space. As a result, textures are aligned across objects, and can thus be
easily synthesized by generative models of images. Texture alignment is learned
in an unsupervised manner by a simple yet effective texture alignment module,
taking inspiration from traditional works on linear subspace learning. The
learned UV mapping and aligned texture representations enable a variety of
applications including texture transfer, texture synthesis, and textured single
view 3D reconstruction. We conduct experiments on multiple datasets to
demonstrate the effectiveness of our method. Project page:
https://nv-tlabs.github.io/AUV-NET.
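To make the core idea concrete, here is a minimal, hypothetical PyTorch sketch of the representation the abstract describes: a network maps 3D surface points (conditioned on a shape code) to 2D UV coordinates shared across shapes, and colors are read from an ordinary texture image at those UVs. All module names, layer sizes, and the simple reconstruction loss are illustrative assumptions, not the authors' implementation; in particular, the paper's texture-alignment module inspired by linear subspace learning is omitted and the texture image is treated as given.

```python
# Illustrative sketch only (assumed, not the authors' code): a UV-mapping
# network and texture lookup in the spirit of AUV-Net.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UVMapper(nn.Module):
    """Maps 3D surface points, conditioned on a shape code, to 2D UV coords."""
    def __init__(self, shape_code_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + shape_code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2), nn.Tanh(),  # UV in [-1, 1], as grid_sample expects
        )

    def forward(self, points, shape_code):
        # points: (B, N, 3), shape_code: (B, D) -> uv: (B, N, 2)
        code = shape_code.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.net(torch.cat([points, code], dim=-1))

def sample_texture(texture, uv):
    # texture: (B, 3, H, W) image, uv: (B, N, 2) in [-1, 1] -> colors (B, N, 3)
    grid = uv.unsqueeze(2)                                # (B, N, 1, 2)
    rgb = F.grid_sample(texture, grid, align_corners=True)
    return rgb.squeeze(-1).permute(0, 2, 1)

def reconstruction_loss(mapper, texture, points, gt_colors, shape_code):
    # Colors looked up at the predicted UVs should reproduce the observed
    # surface colors. Because all shapes share one UV space, semantically
    # matching parts end up at the same UV locations, so the resulting
    # texture images are aligned across objects.
    uv = mapper(points, shape_code)
    return F.mse_loss(sample_texture(texture, uv), gt_colors)
```

Because every shape is mapped into the same UV space, swapping the texture image of one object onto another transfers its appearance, which is the basis of the texture transfer and synthesis applications listed above.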
Related papers
- Texture-GS: Disentangling the Geometry and Texture for 3D Gaussian Splatting Editing [79.10630153776759]
3D Gaussian splatting, an emerging and influential approach, has drawn increasing attention for its high-fidelity reconstruction and real-time rendering.
We propose a novel approach, namely Texture-GS, to disentangle the appearance from the geometry by representing the appearance as a 2D texture mapped onto the 3D surface.
Our method not only facilitates high-fidelity appearance editing but also achieves real-time rendering on consumer-level devices.
arXiv Detail & Related papers (2024-03-15T06:42:55Z)
- Nuvo: Neural UV Mapping for Unruly 3D Representations [61.87715912587394]
Existing UV mapping algorithms are designed for well-behaved meshes rather than the geometry produced by state-of-the-art 3D reconstruction and generation techniques.
We present a UV mapping method designed to operate on such geometry.
arXiv Detail & Related papers (2023-12-11T18:58:38Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D geometries using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, applying the denoising model to a set of 2D renders of the 3D object and aggregating the denoising predictions into a shared texture map.
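As a rough, hypothetical illustration of that aggregation step (a simplification, not TexFusion's actual sampler): each rendered view yields a denoising prediction, and visible pixels are scattered back into one shared texture map using the per-pixel UV coordinates of the geometry, averaging where views overlap. The function name, resolution, and nearest-texel scatter below are assumptions.

```python
# Hypothetical illustration of aggregating per-view denoising predictions
# into one shared UV texture map; simplified relative to TexFusion itself.
import numpy as np

def aggregate_views_to_texture(view_preds, view_uvs, view_masks, tex_res=256):
    """Average per-view predictions into a shared texture.

    view_preds: list of (H, W, 3) predicted images, one per rendered view.
    view_uvs:   list of (H, W, 2) per-pixel UV coordinates in [0, 1].
    view_masks: list of (H, W) booleans marking pixels that hit the surface.
    """
    tex_sum = np.zeros((tex_res, tex_res, 3), dtype=np.float64)
    tex_cnt = np.zeros((tex_res, tex_res, 1), dtype=np.float64)
    for pred, uv, mask in zip(view_preds, view_uvs, view_masks):
        u = np.clip((uv[..., 0] * (tex_res - 1)).astype(int), 0, tex_res - 1)
        v = np.clip((uv[..., 1] * (tex_res - 1)).astype(int), 0, tex_res - 1)
        u, v, rgb = u[mask], v[mask], pred[mask]
        # Scatter-add each visible pixel into its texel; overlapping views
        # are averaged, which keeps the texture consistent across views.
        np.add.at(tex_sum, (v, u), rgb)
        np.add.at(tex_cnt, (v, u), 1.0)
    return tex_sum / np.maximum(tex_cnt, 1.0)
```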
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
- Texture Generation on 3D Meshes with Point-UV Diffusion [86.69672057856243]
We present Point-UV diffusion, a coarse-to-fine pipeline that marries the denoising diffusion model with UV mapping to generate high-quality texture images in UV space.
Our method can process meshes of any genus, generating diversified, geometry-compatible, and high-fidelity textures.
arXiv Detail & Related papers (2023-08-21T06:20:54Z)
- TUVF: Learning Generalizable Texture UV Radiance Fields [32.417062841312976]
We introduce Texture UV Radiance Fields (TUVF) that generate textures in a learnable UV sphere space rather than directly on the 3D shape.
TUVF allows the texture to be disentangled from the underlying shape and transferable to other shapes that share the same UV space.
We perform our experiments on synthetic and real-world object datasets.
arXiv Detail & Related papers (2023-05-04T17:58:05Z)
- TEGLO: High Fidelity Canonical Texture Mapping from Single-View Images [1.4502611532302039]
We propose TEGLO (Textured EG3D-GLO) for learning 3D representations from single-view, in-the-wild image collections.
We accomplish this by training a conditional Neural Radiance Field (NeRF) without any explicit 3D supervision.
We demonstrate that such mapping enables texture transfer and texture editing without requiring meshes with shared topology.
arXiv Detail & Related papers (2023-03-24T01:52:03Z)
- FFHQ-UV: Normalized Facial UV-Texture Dataset for 3D Face Reconstruction [46.3392612457273]
This dataset contains over 50,000 high-quality texture UV-maps with even illuminations, neutral expressions, and cleaned facial regions.
Our pipeline utilizes the recent advances in StyleGAN-based facial image editing approaches.
Experiments show that our method improves the reconstruction accuracy over state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-25T03:21:05Z)
- 3D Human Mesh Regression with Dense Correspondence [95.92326689172877]
Estimating the 3D mesh of the human body from a single 2D image is an important task with many applications such as augmented reality and human-robot interaction.
Prior works reconstructed the 3D mesh from a global image feature extracted by a convolutional neural network (CNN), in which the dense correspondences between the mesh surface and the image pixels are missing.
This paper proposes a model-free 3D human mesh estimation framework, named DecoMR, which explicitly establishes the dense correspondence between the mesh and the local image features in the UV space.
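As an assumed simplification of what dense correspondence in UV space can look like (not DecoMR's actual transfer layer): image features are sampled where mesh vertices project into the image and written into a UV-space feature map at each vertex's fixed UV coordinate, so the features become organized by body surface location rather than by pixel position. The names and resolutions below are illustrative.

```python
# Hypothetical sketch of building a UV-space feature map from local image
# features; a simplification of the idea, not DecoMR's actual layer.
import numpy as np

def image_features_to_uv_map(feat_map, vert_px, vert_uv, uv_res=64):
    """Scatter per-vertex image features into a UV-space feature map.

    feat_map: (H, W, C) local image features (e.g. from a CNN).
    vert_px:  (V, 2) pixel coordinates where each mesh vertex projects.
    vert_uv:  (V, 2) fixed UV coordinates of each vertex in [0, 1].
    """
    H, W, C = feat_map.shape
    x = np.clip(vert_px[:, 0].astype(int), 0, W - 1)
    y = np.clip(vert_px[:, 1].astype(int), 0, H - 1)
    vert_feat = feat_map[y, x]                            # (V, C) sampled features
    u = np.clip((vert_uv[:, 0] * (uv_res - 1)).astype(int), 0, uv_res - 1)
    v = np.clip((vert_uv[:, 1] * (uv_res - 1)).astype(int), 0, uv_res - 1)
    uv_feat = np.zeros((uv_res, uv_res, C))
    uv_cnt = np.zeros((uv_res, uv_res, 1))
    np.add.at(uv_feat, (v, u), vert_feat)                 # average where several
    np.add.at(uv_cnt, (v, u), 1.0)                        # vertices share a texel
    return uv_feat / np.maximum(uv_cnt, 1.0)
```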
arXiv Detail & Related papers (2020-06-10T08:50:53Z)