Efficient 3D Articulated Human Generation with Layered Surface Volumes
- URL: http://arxiv.org/abs/2307.05462v1
- Date: Tue, 11 Jul 2023 17:50:02 GMT
- Title: Efficient 3D Articulated Human Generation with Layered Surface Volumes
- Authors: Yinghao Xu, Wang Yifan, Alexander W. Bergman, Menglei Chai, Bolei
Zhou, Gordon Wetzstein
- Abstract summary: We introduce layered surface volumes (LSVs) as a new 3D object representation for articulated digital humans.
LSVs represent a human body using multiple textured layers around a conventional template.
They exhibit exceptional efficiency in GAN settings, where a 2D generator learns to synthesize the RGBA textures for the individual layers.
- Score: 131.3802971483426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Access to high-quality and diverse 3D articulated digital human assets is
crucial in various applications, ranging from virtual reality to social
platforms. Generative approaches, such as 3D generative adversarial networks
(GANs), are rapidly replacing laborious manual content creation tools. However,
existing 3D GAN frameworks typically rely on scene representations that
leverage either template meshes, which are fast but offer limited quality, or
volumes, which offer high capacity but are slow to render, thereby limiting the
3D fidelity in GAN settings. In this work, we introduce layered surface volumes
(LSVs) as a new 3D object representation for articulated digital humans. LSVs
represent a human body using multiple textured mesh layers around a
conventional template. These layers are rendered using alpha compositing with
fast differentiable rasterization, and they can be interpreted as a volumetric
representation that allocates its capacity to a manifold of finite thickness
around the template. Unlike conventional single-layer templates that struggle
with representing fine off-surface details like hair or accessories, our
surface volumes naturally capture such details. LSVs can be articulated, and
they exhibit exceptional efficiency in GAN settings, where a 2D generator
learns to synthesize the RGBA textures for the individual layers. Trained on
unstructured, single-view 2D image datasets, our LSV-GAN generates high-quality
and view-consistent 3D articulated digital humans without the need for
view-inconsistent 2D upsampling networks.
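The core rendering step described above, rasterizing each textured mesh layer and blending the per-layer results, amounts to standard back-to-front alpha compositing. Below is a minimal sketch, assuming the layers have already been rasterized into screen-space RGBA buffers ordered from the template outward; the composite_layers helper and the buffer layout are illustrative assumptions, not the paper's code.

```python
import numpy as np

def composite_layers(rgba_layers):
    """Back-to-front 'over' compositing of rasterized layer buffers.

    rgba_layers: (L, H, W, 4) array, innermost (template) layer first,
    RGB and alpha in [0, 1]. Returns an (H, W, 3) RGB image.
    Simplification: assumes outer layers are nearer the camera
    (front-facing view), so compositing order equals layer order.
    """
    out = np.zeros(rgba_layers.shape[1:3] + (3,))
    for layer in rgba_layers:  # composite each outer shell over the result
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out = alpha * rgb + (1.0 - alpha) * out
    return out

# Example: 4 shells of RGBA texture samples, as a 2D generator might emit.
layers = np.random.rand(4, 64, 64, 4)
print(composite_layers(layers).shape)  # (64, 64, 3)
```

The per-layer alpha is what lets the representation capture thin off-surface detail such as hair: opacity can vary freely within the finite-thickness shell around the template, without the cost of a dense volume.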
Related papers
- StdGEN: Semantic-Decomposed 3D Character Generation from Single Images [28.302030751098354]
StdGEN is an innovative pipeline for generating semantically high-quality 3D characters from single images.
It generates intricately detailed 3D characters with separated semantic components such as the body, clothes, and hair, in three minutes.
StdGEN offers ready-to-use semantic-decomposed 3D characters and enables flexible customization for a wide range of applications.
arXiv Detail & Related papers (2024-11-08T17:54:18Z)
- CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets [43.315487682462845]
CLAY is a 3D geometry and material generator designed to transform human imagination into intricate 3D digital structures.
At its core is a large-scale generative model composed of a multi-resolution Variational Autoencoder (VAE) and a minimalistic latent Diffusion Transformer (DiT).
We demonstrate using CLAY for a range of controllable 3D asset creations, from sketchy conceptual designs to production ready assets with intricate details.
arXiv Detail & Related papers (2024-05-30T05:57:36Z)
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to the 3D domain, pursuing stronger 3D shape generation by improving their capacity and scalability simultaneously (see the sketch below).
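The grid-space factorization mentioned above is the chain rule p(x) = prod_i p(x_i | x_1, ..., x_{i-1}) applied to flattened grid cells. A rough sketch of such a sampler follows; the toy conditional is a stand-in for a learned model, not this paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_grid(num_cells, conditional):
    """Sample a flattened binary occupancy grid cell by cell,
    following p(x) = prod_i p(x_i | x_1, ..., x_{i-1})."""
    cells = []
    for _ in range(num_cells):
        p_occ = conditional(cells)           # p(x_i = 1 | prefix)
        cells.append(int(rng.random() < p_occ))
    return np.array(cells)

# Toy conditional: occupancy probability decays as the grid fills up.
toy = lambda prefix: 0.5 * 0.9 ** sum(prefix)
voxels = sample_grid(4 * 4 * 4, toy).reshape(4, 4, 4)
print(voxels.sum(), "occupied voxels")
```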
arXiv Detail & Related papers (2024-02-19T15:33:09Z)
- En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing realistic 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z)
- Gaussian Shell Maps for Efficient 3D Human Generation [96.25056237689988]
3D generative adversarial networks (GANs) have demonstrated state-of-the-art (SOTA) quality and diversity for generated assets.
Current 3D GAN architectures, however, rely on volume representations, which are slow to render, thereby hampering GAN training and requiring multi-view-inconsistent 2D upsamplers.
arXiv Detail & Related papers (2023-11-29T18:04:07Z)
- GETAvatar: Generative Textured Meshes for Animatable Human Avatars [69.56959932421057]
We study the problem of 3D-aware full-body human generation, aiming at creating animatable human avatars with high-quality geometries and textures.
We propose GETAvatar, a Generative model that directly generates Explicit Textured 3D meshes for animatable human Avatars.
arXiv Detail & Related papers (2023-10-04T10:30:24Z)
- GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images [72.15855070133425]
We introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures.
GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, and motorbikes to human characters and buildings.
arXiv Detail & Related papers (2022-09-22T17:16:19Z)
- Towards Realistic 3D Embedding via View Alignment [53.89445873577063]
This paper presents an innovative View Alignment GAN (VA-GAN) that composes new images by embedding 3D models into 2D background images realistically and automatically.
VA-GAN consists of a texture generator and a differential discriminator that are inter-connected and end-to-end trainable.
arXiv Detail & Related papers (2020-07-14T14:45:00Z)