UV Volumes for Real-time Rendering of Editable Free-view Human
Performance
- URL: http://arxiv.org/abs/2203.14402v1
- Date: Sun, 27 Mar 2022 21:54:36 GMT
- Title: UV Volumes for Real-time Rendering of Editable Free-view Human
Performance
- Authors: Yue Chen, Xuan Wang, Qi Zhang, Xiaoyu Li, Xingyu Chen, Yu Guo, Jue
Wang, Fei Wang
- Abstract summary: UV Volumes is an approach that can render an editable free-view video of a human performer in real-time.
It is achieved by removing the high-frequency (i.e., non-smooth) human textures from the 3D volume and encoding them into a 2D neural texture stack.
Experiments on the CMU Panoptic, ZJU Mocap, and H36M datasets show that our model can render 900 × 500 images at 40 fps.
- Score: 35.089358945669865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural volume rendering has proven to be a promising method for
efficient and photo-realistic free-view rendering of a human performer, a
critical task in many immersive VR/AR applications. However, existing
approaches are severely limited by their high computational cost in the
rendering process. To solve this problem, we propose the UV Volumes, an
approach that can render an editable free-view video of a human performer in
real-time. It is achieved by removing the high-frequency (i.e., non-smooth)
human textures from the 3D volume and encoding them into a 2D neural texture
stack (NTS). The smooth UV volume allows us to employ a much smaller and
shallower 3D CNN and MLP to obtain the density and texture coordinates
without losing image details. Meanwhile, the NTS only needs to be
queried once for each pixel in the UV image to retrieve its RGB value. For
editability, the 3D CNN and MLP decoder can easily fit the function that maps
the input structured-and-posed latent codes to the relatively smooth densities
and texture coordinates. It gives our model a better generalization ability to
handle novel poses and shapes. Furthermore, the use of the NTS enables new
applications, e.g., retexturing. Extensive experiments on CMU Panoptic, ZJU
Mocap, and H36M datasets show that our model can render 900 × 500 images at 40
fps on average with comparable photorealism to state-of-the-art methods. The
project and supplementary materials are available at
https://fanegg.github.io/UV-Volumes.
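As a rough sketch of the two-stage pipeline the abstract describes — volume-render smooth texture coordinates along each ray, then query the 2D neural texture stack once per pixel — here is a minimal NumPy illustration. The array shapes, the emission-absorption compositing, and the nearest-neighbor lookup are illustrative assumptions, not the paper's actual networks or sampling scheme:

```python
import numpy as np

def volume_render_uv(densities, uv_coords, deltas):
    """Composite per-sample UV coordinates along each ray using standard
    emission-absorption volume rendering, with UV in place of RGB."""
    # densities: (n_rays, n_samples), uv_coords: (n_rays, n_samples, 2)
    alpha = 1.0 - np.exp(-densities * deltas)            # per-sample opacity
    trans = np.cumprod(1.0 - alpha + 1e-10, axis=-1)     # transmittance
    trans = np.concatenate([np.ones_like(trans[:, :1]), trans[:, :-1]], axis=-1)
    weights = alpha * trans                              # (n_rays, n_samples)
    return (weights[..., None] * uv_coords).sum(axis=1)  # (n_rays, 2)

def sample_texture(nts, uv):
    """One lookup per pixel into a 2D texture image (nearest neighbor)."""
    h, w, _ = nts.shape
    x = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    y = np.clip((uv[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    return nts[y, x]                                     # (n_rays, 3)

# Toy usage: 4 rays, 8 samples each, a 16x16 RGB "texture stack" slice.
rng = np.random.default_rng(0)
densities = rng.uniform(0, 5, (4, 8))
uv = rng.uniform(0, 1, (4, 8, 2))
deltas = np.full((4, 8), 0.1)
nts = rng.uniform(0, 1, (16, 16, 3))
rgb = sample_texture(nts, volume_render_uv(densities, uv, deltas))
print(rgb.shape)  # (4, 3)
```

The point of the design is visible even in this toy: the expensive per-sample network only has to predict smooth (low-frequency) UV coordinates, while the high-frequency appearance is paid for with a single texture fetch per pixel.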
Related papers
- TEXGen: a Generative Diffusion Model for Mesh Textures [63.43159148394021]
We focus on the fundamental problem of learning in the UV texture space itself.
We propose a scalable network architecture that interleaves convolutions on UV maps with attention layers on point clouds.
We train a 700 million parameter diffusion model that can generate UV texture maps guided by text prompts and single-view images.
arXiv Detail & Related papers (2024-11-22T05:22:11Z) - EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis [72.53316783628803]
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering.
Unlike recent rasterization-based approaches such as 3D Gaussian Splatting (3DGS), our primitive-based representation allows for exact volume rendering.
We show that our method is more accurate, with fewer blending issues, than 3DGS and follow-up work on view-consistent rendering.
arXiv Detail & Related papers (2024-10-02T17:59:09Z) - TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation [41.959089177835764]
TexDreamer is the first zero-shot multimodal high-fidelity 3D human texture generation model.
We introduce ArTicuLated humAn textureS (ATLAS), the largest high-resolution (1024 × 1024) 3D human texture dataset.
arXiv Detail & Related papers (2024-03-19T17:02:07Z) - UV Gaussians: Joint Learning of Mesh Deformation and Gaussian Textures for Human Avatar Modeling [71.87807614875497]
We propose UV Gaussians, which models the 3D human body by jointly learning mesh deformations and 2D UV-space Gaussian textures.
We collect and process a new dataset of human motion, which includes multi-view images, scanned models, parametric model registration, and corresponding texture maps. Experimental results demonstrate that our method achieves state-of-the-art synthesis of novel view and novel pose.
arXiv Detail & Related papers (2024-03-18T09:03:56Z) - EvaSurf: Efficient View-Aware Implicit Textured Surface Reconstruction on Mobile Devices [53.28220984270622]
We present EvaSurf, an implicit textured surface reconstruction method on mobile devices.
Our method can reconstruct high-quality appearance and accurate mesh on both synthetic and real-world datasets.
Our method can be trained in just 1-2 hours using a single GPU and runs on mobile devices at over 40 FPS (frames per second).
arXiv Detail & Related papers (2023-11-16T11:30:56Z) - AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
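Because AUV-Net aligns UV spaces across objects, texture transfer reduces to reusing one object's UV mapping with another object's texture image. A hypothetical NumPy sketch of that idea (the function names and nearest-neighbor lookup are illustrative, not AUV-Net's API):

```python
import numpy as np

def render_with_texture(uv_per_pixel, texture):
    """Look up a texture at per-pixel UV coordinates (nearest neighbor).
    With aligned UV spaces, the same UV map works for any texture."""
    h, w, _ = texture.shape
    x = np.clip((uv_per_pixel[..., 0] * (w - 1)).round().astype(int), 0, w - 1)
    y = np.clip((uv_per_pixel[..., 1] * (h - 1)).round().astype(int), 0, h - 1)
    return texture[y, x]

rng = np.random.default_rng(1)
uv_a = rng.uniform(0, 1, (32, 32, 2))   # UV map rasterized from object A
tex_a = rng.uniform(0, 1, (64, 64, 3))  # object A's own texture
tex_b = rng.uniform(0, 1, (64, 64, 3))  # object B's texture, same aligned UV space
img_original = render_with_texture(uv_a, tex_a)
img_transferred = render_with_texture(uv_a, tex_b)  # B's appearance on A's shape
print(img_transferred.shape)  # (32, 32, 3)
```

This is also why aligned textures are easy for 2D image generators to synthesize: every object's texture lives at consistent locations in one shared UV image space.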
arXiv Detail & Related papers (2022-04-06T21:39:24Z) - NeuVV: Neural Volumetric Videos with Immersive Rendering and Editing [34.40837543752915]
We present a neural volumography technique called neural volumetric video or NeuVV to support immersive, interactive, and spatial-temporal rendering.
NeuVV encodes a dynamic neural radiance field (NeRF) into renderable and editable primitives.
We further develop a hybrid neural-rasterization rendering framework to support consumer-level VR headsets.
arXiv Detail & Related papers (2022-02-12T15:23:16Z) - HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars [65.82222842213577]
We propose a novel neural rendering pipeline, which synthesizes virtual human avatars from arbitrary poses efficiently and at high quality.
First, we learn to encode articulated human motions on a dense UV manifold of the human body surface.
We then leverage the encoded information on the UV manifold to construct a 3D volumetric representation.
arXiv Detail & Related papers (2021-12-19T17:34:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.