SMPLpix: Neural Avatars from 3D Human Models
- URL: http://arxiv.org/abs/2008.06872v2
- Date: Mon, 9 Nov 2020 11:08:09 GMT
- Title: SMPLpix: Neural Avatars from 3D Human Models
- Authors: Sergey Prokudin, Michael J. Black, Javier Romero
- Abstract summary: We bridge the gap between classic rendering and the latest generative networks operating in pixel space.
We train a network that directly converts a sparse set of 3D mesh vertices into photorealistic images.
We show the advantage over conventional differentiable renderers both in terms of the level of photorealism and rendering efficiency.
- Score: 56.85115800735619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in deep generative models have led to an unprecedented level
of realism for synthetically generated images of humans. However, one of the
remaining fundamental limitations of these models is the inability to flexibly
control the generative process, e.g., change the camera and human pose while
retaining the subject identity. At the same time, deformable human body models
like SMPL and its successors provide full control over pose and shape but rely
on classic computer graphics pipelines for rendering. Such rendering pipelines
require explicit mesh rasterization that (a) does not have the potential to fix
artifacts or lack of realism in the original 3D geometry and (b) until
recently, was not fully incorporated into deep learning frameworks. In this
work, we propose to bridge the gap between classic geometry-based rendering and
the latest generative networks operating in pixel space. We train a network
that directly converts a sparse set of 3D mesh vertices into photorealistic
images, alleviating the need for a traditional rasterization mechanism. We train
our model on a large corpus of human 3D models and corresponding real photos,
and show the advantage over conventional differentiable renderers both in terms
of the level of photorealism and rendering efficiency.
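As a reading aid, here is a minimal PyTorch sketch (our illustration, not the authors' released code) of the idea the abstract describes: splat colored SMPL vertices into a sparse RGB-D image, then let a small image-to-image network turn that sparse input into a dense rendering. The vertex colors, camera intrinsics, image resolution, and network layout are all placeholder assumptions.
```python
import torch
import torch.nn as nn

def splat_vertices(verts, colors, K, H=256, W=256):
    """Project colored 3D vertices into a sparse (4, H, W) RGB-D image.

    verts:  (N, 3) camera-space vertex positions (z > 0)
    colors: (N, 3) per-vertex RGB features
    K:      (3, 3) pinhole camera intrinsics
    """
    img = torch.zeros(4, H, W)
    z = verts[:, 2].clamp(min=1e-6)
    uv = (K @ (verts / z.unsqueeze(1)).T).T[:, :2]   # perspective projection
    u = uv[:, 0].round().long().clamp(0, W - 1)
    v = uv[:, 1].round().long().clamp(0, H - 1)
    # crude z-buffering: write far-to-near so nearer vertices overwrite
    order = torch.argsort(z, descending=True)
    img[:3, v[order], u[order]] = colors[order].T
    img[3, v[order], u[order]] = z[order]
    return img

class RenderNet(nn.Module):
    """Tiny stand-in for the paper's image-to-image translation network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# Usage with random stand-ins for SMPL output (SMPL has 6890 vertices).
verts = torch.rand(6890, 3) + torch.tensor([0.0, 0.0, 2.0])  # in front of camera
colors = torch.rand(6890, 3)
K = torch.tensor([[256.0, 0.0, 128.0],
                  [0.0, 256.0, 128.0],
                  [0.0, 0.0, 1.0]])
sparse = splat_vertices(verts, colors, K)
photo = RenderNet()(sparse.unsqueeze(0))   # (1, 3, 256, 256)
```
In the paper, such a renderer is trained on pairs of projected vertices and real photos, so the network learns to fill the gaps between vertices and to compensate for geometry artifacts; the duplicate-index overwrite above is only a rough approximation of proper z-buffering.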
Related papers
- GETAvatar: Generative Textured Meshes for Animatable Human Avatars [69.56959932421057]
We study the problem of 3D-aware full-body human generation, aiming at creating animatable human avatars with high-quality geometries and textures.
We propose GETAvatar, a Generative model that directly generates Explicit Textured 3D renderings for animatable human Avatars.
arXiv Detail & Related papers (2023-10-04T10:30:24Z)
- Breathing New Life into 3D Assets with Generative Repainting [74.80184575267106]
Diffusion-based text-to-image models have sparked immense attention from the vision community, artists, and content creators.
Recent works have proposed various pipelines powered by the entanglement of diffusion models and neural fields.
We explore the power of pretrained 2D diffusion models and standard 3D neural radiance fields as independent, standalone tools.
Our pipeline accepts any legacy renderable geometry, such as textured or untextured meshes, and orchestrates the interaction between 2D generative refinement and 3D consistency enforcement tools.
arXiv Detail & Related papers (2023-09-15T16:34:51Z)
- VeRi3D: Generative Vertex-based Radiance Fields for 3D Controllable Human Image Synthesis [27.81573705217842]
We propose VeRi3D, a generative human radiance field parameterized by vertices of the parametric human template, SMPL.
We demonstrate that our simple approach allows for generating photorealistic human images with free control over camera pose, human pose, shape, as well as enabling part-level editing.
arXiv Detail & Related papers (2023-09-09T13:53:29Z)
- GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images [72.15855070133425]
We introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures.
GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes and human characters to buildings.
arXiv Detail & Related papers (2022-09-22T17:16:19Z)
- Texture Generation Using Graph Generative Adversarial Network And Differentiable Rendering [0.6439285904756329]
Novel texture synthesis for existing 3D mesh models is an important step towards photorealistic asset generation for simulators.
Existing methods inherently work in the 2D image space, which is the projection of the 3D space from a given camera perspective.
We present a new system called a graph generative adversarial network (GGAN) that can generate textures which can be directly integrated into a given 3D mesh model with tools like Blender and Unreal Engine.
arXiv Detail & Related papers (2022-06-17T04:56:03Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstructions and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)