Realistic One-shot Mesh-based Head Avatars
- URL: http://arxiv.org/abs/2206.08343v1
- Date: Thu, 16 Jun 2022 17:45:23 GMT
- Authors: Taras Khakhulin, Vanessa Sklyarova, Victor Lempitsky, Egor Zakharov
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present a system for realistic one-shot mesh-based human head avatars
creation, ROME for short. Using a single photograph, our model estimates a
person-specific head mesh and the associated neural texture, which encodes both
local photometric and geometric details. The resulting avatars are rigged and
can be rendered using a neural network, which is trained alongside the mesh and
texture estimators on a dataset of in-the-wild videos. In the experiments, we
observe that our system performs competitively both in terms of head geometry
recovery and the quality of renders, especially for the cross-person
reenactment. See results https://samsunglabs.github.io/rome/
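The abstract describes sampling a learned neural texture over the estimated head mesh before feeding the result to a rendering network. The sketch below illustrates only the texture-sampling step with plain bilinear interpolation in NumPy; it is a minimal illustrative example, not the authors' implementation, and the function name, shapes, and UV convention are assumptions.

```python
import numpy as np

def sample_neural_texture(texture, uv):
    """Bilinearly sample a multi-channel neural texture.

    texture: (H, W, C) array of learned feature channels.
    uv:      (N, 2) array of texture coordinates in [0, 1],
             e.g. rasterized from the estimated head mesh.
    Returns: (N, C) per-point feature vectors.
    """
    H, W, _ = texture.shape
    # Map UVs to continuous pixel coordinates.
    x = uv[:, 0] * (W - 1)
    y = uv[:, 1] * (H - 1)
    # Integer corner indices, clipped so x0+1 / y0+1 stay in bounds.
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    fx = (x - x0)[:, None]
    fy = (y - y0)[:, None]
    # Blend the four neighbouring texels.
    top = texture[y0, x0] * (1 - fx) + texture[y0, x0 + 1] * fx
    bot = texture[y0 + 1, x0] * (1 - fx) + texture[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy
```

In a deferred-neural-rendering setup such as the one the abstract suggests, the sampled feature vectors would then be passed through a convolutional network to produce the final RGB image.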
Related papers
- Generalizable and Animatable Gaussian Head Avatar [50.34788590904843]
We propose Generalizable and Animatable Gaussian head Avatar (GAGAvatar) for one-shot animatable head avatar reconstruction.
We generate the parameters of 3D Gaussians from a single image in a single forward pass.
Our method exhibits superior performance compared to previous methods in terms of reconstruction quality and expression accuracy.
arXiv Detail & Related papers (2024-10-10T14:29:00Z)
- HQ3DAvatar: High Quality Controllable 3D Head Avatar [65.70885416855782]
This paper presents a novel approach to building highly photorealistic digital head avatars.
Our method learns a canonical space via an implicit function parameterized by a neural network.
At test time, our method is driven by a monocular RGB video.
arXiv Detail & Related papers (2023-03-25T13:56:33Z)
- DINAR: Diffusion Inpainting of Neural Textures for One-Shot Human Avatars [7.777410338143783]
We present an approach for creating realistic rigged full-body avatars from single RGB images.
Our method combines neural textures with the SMPL-X body model to achieve photo-realistic avatar quality.
In the experiments, our approach achieves state-of-the-art rendering quality and good generalization to new poses and viewpoints.
arXiv Detail & Related papers (2023-03-16T15:04:10Z)
- RANA: Relightable Articulated Neural Avatars [83.60081895984634]
We propose RANA, a relightable and articulated neural avatar for the photorealistic synthesis of humans.
We present a novel framework to model humans while disentangling their geometry, texture, and also lighting environment from monocular RGB videos.
arXiv Detail & Related papers (2022-12-06T18:59:31Z)
- Multi-NeuS: 3D Head Portraits from Single Image with Neural Implicit Functions [70.04394678730968]
We present an approach for the reconstruction of 3D human heads from one or a few views.
The underlying neural architecture learns a prior over head shapes, allowing the model to generalize to unseen subjects.
Our model can fit novel heads having been trained on just a hundred videos or one-shot 3D scans.
arXiv Detail & Related papers (2022-09-07T21:09:24Z)
- Novel View Synthesis for High-fidelity Headshot Scenes [5.33510552066148]
We find that NeRF can render new views while maintaining geometric consistency, but it does not properly maintain skin details, such as moles and pores.
We propose a method to use both NeRF and 3DMM to synthesize a high-fidelity novel view of a scene with a face.
arXiv Detail & Related papers (2022-05-31T08:14:15Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- Neural Head Avatars from Monocular RGB Videos [0.0]
We present a novel neural representation that explicitly models the surface geometry and appearance of an animatable human avatar.
Our representation can be learned from a monocular RGB portrait video that features a range of different expressions and views.
arXiv Detail & Related papers (2021-12-02T19:01:05Z)
- EgoRenderer: Rendering Human Avatars from Egocentric Camera Images [87.96474006263692]
We present EgoRenderer, a system for rendering full-body neural avatars of a person captured by a wearable, egocentric fisheye camera.
Rendering full-body avatars from such egocentric images comes with unique challenges due to the top-down view and large distortions.
We tackle these challenges by decomposing the rendering process into several steps, including texture synthesis, pose construction, and neural image translation.
arXiv Detail & Related papers (2021-11-24T18:33:02Z)
- Fast Bi-layer Neural Synthesis of One-Shot Realistic Head Avatars [16.92378994798985]
We propose a neural rendering-based system that creates head avatars from a single photograph.
We compare our system to analogous state-of-the-art systems in terms of visual quality and speed.
arXiv Detail & Related papers (2020-08-24T03:23:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.