Fast Bi-layer Neural Synthesis of One-Shot Realistic Head Avatars
- URL: http://arxiv.org/abs/2008.10174v1
- Date: Mon, 24 Aug 2020 03:23:59 GMT
- Title: Fast Bi-layer Neural Synthesis of One-Shot Realistic Head Avatars
- Authors: Egor Zakharov, Aleksei Ivakhnenko, Aliaksandra Shysheya, Victor
Lempitsky
- Abstract summary: We propose a neural rendering-based system that creates head avatars from a single photograph.
We compare our system to analogous state-of-the-art systems in terms of visual quality and speed.
- Score: 16.92378994798985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a neural rendering-based system that creates head avatars from a
single photograph. Our approach models a person's appearance by decomposing it
into two layers. The first layer is a pose-dependent coarse image that is
synthesized by a small neural network. The second layer is defined by a
pose-independent texture image that contains high-frequency details. The
texture image is generated offline, warped and added to the coarse image to
ensure a high effective resolution of synthesized head views. We compare our
system to analogous state-of-the-art systems in terms of visual quality and
speed. The experiments show significant inference speedup over previous neural
head avatar models for a given visual quality. We also report on a real-time
smartphone-based implementation of our system.
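The two-layer decomposition described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the nearest-neighbor warp, and the additive clipping are assumptions made for clarity (the actual system uses a learned warping field with differentiable bilinear sampling inside a neural network).

```python
import numpy as np

def warp_texture(texture, flow):
    # Backward warp of the pose-independent texture by a 2D offset field.
    # Hypothetical simplification: nearest-neighbor sampling with clamped
    # coordinates, standing in for the paper's learned bilinear warping.
    h, w, _ = texture.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return texture[src_y, src_x]

def compose_bilayer(coarse, texture, flow):
    # Bi-layer composition: the pose-dependent coarse image (first layer)
    # plus the warped high-frequency texture (second layer), clipped to
    # a valid intensity range.
    return np.clip(coarse + warp_texture(texture, flow), 0.0, 1.0)
```

The design point the sketch illustrates is that only the cheap coarse layer and the warp field must be predicted per pose, while the expensive high-frequency texture is computed once offline, which is what enables the reported inference speedup.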
Related papers
- BakedAvatar: Baking Neural Fields for Real-Time Head Avatar Synthesis [7.485318043174123]
We introduce BakedAvatar, a novel representation for real-time neural head avatar.
Our approach extracts layered meshes from learned isosurfaces of the head and computes expression-, pose-, and view-dependent appearances.
Experimental results demonstrate that our representation generates photorealistic results of comparable quality to other state-of-the-art methods.
arXiv Detail & Related papers (2023-11-09T17:05:53Z) - OPHAvatars: One-shot Photo-realistic Head Avatars [0.0]
Given a portrait, our method synthesizes a coarse talking head video using driving keypoint features.
With rendered images of the coarse avatar, our method updates the low-quality images with a blind face restoration model.
After several iterations, our method can synthesize a photo-realistic animatable 3D neural head avatar.
arXiv Detail & Related papers (2023-07-18T11:24:42Z) - Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur [68.24599239479326]
We develop a hybrid neural rendering model that makes image-based representation and neural 3D representation join forces to render high-quality, view-consistent images.
Our model surpasses state-of-the-art point-based methods for novel view synthesis.
arXiv Detail & Related papers (2023-04-25T08:36:33Z) - Dynamic Neural Portraits [58.480811535222834]
We present Dynamic Neural Portraits, a novel approach to the problem of full-head reenactment.
Our method generates photo-realistic video portraits by explicitly controlling head pose, facial expressions and eye gaze.
Our experiments demonstrate that the proposed method is 270 times faster than recent NeRF-based reenactment methods.
arXiv Detail & Related papers (2022-11-25T10:06:14Z) - Free-HeadGAN: Neural Talking Head Synthesis with Explicit Gaze Control [54.079327030892244]
Free-HeadGAN is a person-generic neural talking head synthesis system.
We show that modeling faces with sparse 3D facial landmarks is sufficient for achieving state-of-the-art generative performance.
arXiv Detail & Related papers (2022-08-03T16:46:08Z) - MegaPortraits: One-shot Megapixel Neural Head Avatars [7.05068904295608]
We propose a set of new neural architectures and training methods that can leverage both medium-resolution video data and high-resolution image data.
We show how a trained high-resolution neural avatar model can be distilled into a lightweight student model which runs in real-time.
Real-time operation and identity lock are essential for many practical applications of head avatar systems.
arXiv Detail & Related papers (2022-07-15T17:32:37Z) - Realistic One-shot Mesh-based Head Avatars [7.100064936484693]
We present a system for realistic one-shot mesh-based human head avatar creation, ROME for short.
Using a single photograph, our model estimates a person-specific head mesh and the associated neural texture, which encodes both local photometric and geometric details.
The resulting avatars are rigged and can be rendered using a neural network, which is trained alongside the mesh and texture estimators on a dataset of in-the-wild videos.
arXiv Detail & Related papers (2022-06-16T17:45:23Z) - HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars [65.82222842213577]
We propose a novel neural rendering pipeline, which synthesizes virtual human avatars from arbitrary poses efficiently and at high quality.
First, we learn to encode articulated human motions on a dense UV manifold of the human body surface.
We then leverage the encoded information on the UV manifold to construct a 3D volumetric representation.
arXiv Detail & Related papers (2021-12-19T17:34:15Z) - EgoRenderer: Rendering Human Avatars from Egocentric Camera Images [87.96474006263692]
We present EgoRenderer, a system for rendering full-body neural avatars of a person captured by a wearable, egocentric fisheye camera.
Rendering full-body avatars from such egocentric images comes with unique challenges due to the top-down view and large distortions.
We tackle these challenges by decomposing the rendering process into several steps, including texture synthesis, pose construction, and neural image translation.
arXiv Detail & Related papers (2021-11-24T18:33:02Z) - Neural Human Video Rendering by Learning Dynamic Textures and
Rendering-to-Video Translation [99.64565200170897]
We propose a novel human video synthesis method by explicitly disentangling the learning of time-coherent fine-scale details from the embedding of the human in 2D screen space.
We show several applications of our approach, such as human reenactment and novel view synthesis from monocular video, where we show significant improvement over the state of the art both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-01-14T18:06:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.