NeuTex: Neural Texture Mapping for Volumetric Neural Rendering
- URL: http://arxiv.org/abs/2103.00762v1
- Date: Mon, 1 Mar 2021 05:34:51 GMT
- Title: NeuTex: Neural Texture Mapping for Volumetric Neural Rendering
- Authors: Fanbo Xiang, Zexiang Xu, Milo\v{s} Ha\v{s}an, Yannick Hold-Geoffroy,
Kalyan Sunkavalli, Hao Su
- Abstract summary: We present an approach that explicitly disentangles geometry, represented as a continuous 3D volume, from appearance, represented as a continuous 2D texture map.
We demonstrate that this representation can be reconstructed using only multi-view image supervision and generates high-quality rendering results.
- Score: 48.83181790635772
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has demonstrated that volumetric scene representations combined
with differentiable volume rendering can enable photo-realistic rendering for
challenging scenes that mesh reconstruction fails on. However, these methods
entangle geometry and appearance in a "black-box" volume that cannot be edited.
Instead, we present an approach that explicitly disentangles geometry,
represented as a continuous 3D volume, from appearance, represented as a
continuous 2D texture map. We achieve this by introducing a 3D-to-2D
texture mapping (or surface parameterization) network into volumetric
representations. We constrain this texture mapping network using an additional
2D-to-3D inverse mapping network and a novel cycle consistency loss to make 3D
surface points map to 2D texture points that map back to the original 3D
points. We demonstrate that this representation can be reconstructed using only
multi-view image supervision and generates high-quality rendering results. More
importantly, by separating geometry and texture, we allow users to edit
appearance by simply editing 2D texture maps.
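
The cycle-consistency constraint described in the abstract can be made concrete with a short sketch. The snippet below is a minimal illustration only, assuming simple MLPs for the two mapping networks and an L2 cycle loss; the actual NeuTex architecture, point-sampling strategy, and loss weighting are not specified here, and all names are hypothetical.

```python
# Minimal sketch of the 3D-to-2D / 2D-to-3D cycle consistency idea.
# MLP sizes and the L2 form of the loss are assumptions for illustration,
# not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim, hidden=256, depth=4):
    """Small MLP used as a stand-in for the mapping networks."""
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

# 3D-to-2D texture mapping (surface parameterization) and its 2D-to-3D inverse.
texture_mapping = mlp(3, 2)   # 3D surface point -> 2D texture coordinate (uv)
inverse_mapping = mlp(2, 3)   # 2D texture coordinate -> 3D surface point

def cycle_consistency_loss(surface_points):
    """Encourage 3D points to map to uv coordinates that map back to the
    original 3D points: || inverse(texture(x)) - x ||^2."""
    uv = texture_mapping(surface_points)   # (N, 2)
    recovered = inverse_mapping(uv)        # (N, 3)
    return F.mse_loss(recovered, surface_points)

# Usage sketch: sample points near the learned surface and add this term to
# the photometric volume-rendering objective during training.
points = torch.rand(1024, 3) * 2 - 1       # hypothetical sample points
loss = cycle_consistency_loss(points)
```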
Related papers
- TEGLO: High Fidelity Canonical Texture Mapping from Single-View Images [1.4502611532302039]
We propose TEGLO (Textured EG3D-GLO) for learning 3D representations from single view in-the-wild image collections.
We accomplish this by training a conditional Neural Radiance Field (NeRF) without any explicit 3D supervision.
We demonstrate that such a mapping enables texture transfer and texture editing without requiring meshes with shared topology.
arXiv Detail & Related papers (2023-03-24T01:52:03Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
- Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion [115.82306502822412]
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing.
A corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing.
We study the challenging problem of 3D GAN inversion where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures.
arXiv Detail & Related papers (2022-12-14T18:49:50Z)
- High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z)
- Pixel-Aligned Non-parametric Hand Mesh Reconstruction [16.62199923065314]
Non-parametric mesh reconstruction has recently shown significant progress in 3D hand and body applications.
In this paper, we seek to establish and exploit the mapping between image pixels and mesh vertices with a simple and compact architecture.
We propose an end-to-end pipeline for hand mesh recovery tasks which consists of three phases.
arXiv Detail & Related papers (2022-10-17T15:53:18Z)
- Pruning-based Topology Refinement of 3D Mesh using a 2D Alpha Mask [6.103988053817792]
We present a method to refine the topology of any 3D mesh through a face-pruning strategy.
Our solution leverages a differentiable renderer that renders each face as a 2D soft map.
Because our module is agnostic to the network that produces the 3D mesh, it can be easily plugged into any self-supervised image-based 3D reconstruction pipeline.
arXiv Detail & Related papers (2022-10-17T14:51:38Z)
- AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
arXiv Detail & Related papers (2022-04-06T21:39:24Z)
- 3DBooSTeR: 3D Body Shape and Texture Recovery [76.91542440942189]
3DBooSTeR is a novel method to recover a textured 3D body mesh from a partial 3D scan.
The proposed approach decouples the shape and texture completion into two sequential tasks.
arXiv Detail & Related papers (2020-10-23T21:07:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.