IFaceUV: Intuitive Motion Facial Image Generation by Identity
Preservation via UV map
- URL: http://arxiv.org/abs/2306.04957v1
- Date: Thu, 8 Jun 2023 06:15:13 GMT
- Title: IFaceUV: Intuitive Motion Facial Image Generation by Identity
Preservation via UV map
- Authors: Hansol Lee, Yunhoe Ku, Eunseo Kim, Seungryul Baek
- Abstract summary: IFaceUV is a pipeline that properly combines 2D and 3D information to conduct the facial reenactment task.
The three-dimensional morphable face models (3DMMs) and corresponding UV maps are utilized to intuitively control facial motions and textures.
In our pipeline, we first extract 3DMM parameters and corresponding UV maps from source and target images.
In parallel, we warp the source image according to the 2D flow field obtained from the 2D warping network.
- Score: 5.397942823754509
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reenacting facial images is an important task that can find
numerous applications. We propose IFaceUV, a fully differentiable pipeline
that properly combines 2D and 3D information to conduct the facial
reenactment task. Three-dimensional morphable face models (3DMMs) and
corresponding UV maps are utilized to intuitively control facial motions
and textures, respectively. Two-dimensional techniques based on image
warping are further required to compensate for components missing from the
3DMMs, such as the background, ears, and hair. In our pipeline, we first
extract 3DMM parameters and corresponding UV maps from the source and
target images. Then, the initial UV maps are refined by the UV map
refinement network and rendered into an image with the motion-manipulated
3DMM parameters. In parallel, we warp the source image according to the 2D
flow field obtained from the 2D warping network. The rendered and warped
images are combined in the final editing network to generate the final
reenactment image. Additionally, we tested our model on the audio-driven
facial reenactment task. Extensive qualitative and quantitative experiments
demonstrate the remarkable performance of our method compared to other
state-of-the-art methods.
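To make the pipeline described in the abstract concrete, here is a minimal
PyTorch-style sketch of the data flow. All module names (fit_3dmm,
uv_refiner, renderer, warp_net, edit_net) and the motion-parameter indices
are hypothetical placeholders for illustration, not the authors' released
code; only the 2D flow-warping helper is spelled out.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(image, flow):
    """Backward-warp an (N, C, H, W) image with a dense 2D flow field
    of shape (N, H, W, 2) given in normalized [-1, 1] coordinates."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=image.device),
        torch.linspace(-1, 1, w, device=image.device), indexing="ij")
    base_grid = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
    return F.grid_sample(image, base_grid + flow, align_corners=True)

def reenact(source_img, target_img, nets, motion_dims):
    """Sketch of the IFaceUV data flow; `nets` bundles placeholder modules."""
    # 1. Extract 3DMM parameters and corresponding UV maps from both images.
    src_params, src_uv = nets.fit_3dmm(source_img)
    tgt_params, _ = nets.fit_3dmm(target_img)

    # 2. Refine the initial source UV map with the UV map refinement network.
    refined_uv = nets.uv_refiner(src_uv)

    # 3. Manipulate motion: keep the source identity parameters and copy the
    #    motion-related ones (e.g. expression, pose) from the target.
    #    `motion_dims` indexing those parameters is an assumption.
    driven_params = src_params.clone()
    driven_params[:, motion_dims] = tgt_params[:, motion_dims]

    # 4. Render the refined UV map with the motion-manipulated parameters.
    rendered = nets.renderer(refined_uv, driven_params)

    # 5. In parallel, predict a 2D flow field and warp the source image to
    #    recover what the 3DMM misses (background, ears, hair).
    flow = nets.warp_net(source_img, target_img)
    warped = warp_with_flow(source_img, flow)

    # 6. Fuse the rendered and warped images in the final editing network.
    return nets.edit_net(rendered, warped)
```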
Related papers
- TP3M: Transformer-based Pseudo 3D Image Matching with Reference Image [0.9831489366502301]
We propose a Transformer-based pseudo 3D image matching method.
It upgrades the 2D features extracted from the source image to 3D features with the help of a reference image and matches them to the 2D features extracted from the destination image.
Experimental results on multiple datasets show that the proposed method achieves state-of-the-art results on homography estimation, pose estimation, and visual localization.
arXiv Detail & Related papers (2024-05-14T08:56:09Z)
- Nuvo: Neural UV Mapping for Unruly 3D Representations [61.87715912587394]
Existing UV mapping algorithms are designed for well-behaved meshes rather than the geometry produced by state-of-the-art 3D reconstruction and generation techniques.
We present a UV mapping method designed to operate on geometry produced by such techniques.
arXiv Detail & Related papers (2023-12-11T18:58:38Z)
- Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors [104.79392615848109]
We present Magic123, a two-stage coarse-to-fine approach for high-quality, textured 3D meshes from a single unposed image.
In the first stage, we optimize a neural radiance field to produce a coarse geometry.
In the second stage, we adopt a memory-efficient differentiable mesh representation to yield a high-resolution mesh with a visually appealing texture.
arXiv Detail & Related papers (2023-06-30T17:59:08Z)
- Controllable Face Manipulation and UV Map Generation by Self-supervised Learning [20.10160338724354]
Recent methods achieve explicit control over 2D images by combining a 2D generative model with a 3DMM.
Because textures reconstructed by a 3DMM lack realism and clarity, there is a domain gap between the synthetic image and the 3DMM's rendered image.
In this study, we propose to explicitly edit the latent space of the pretrained StyleGAN by controlling the parameters of the 3DMM.
arXiv Detail & Related papers (2022-09-24T16:49:25Z)
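As a rough illustration of the idea in the summary above (steering a
frozen, pretrained StyleGAN with 3DMM parameters), the sketch below maps a
3DMM parameter edit to an offset in the generator's W+ latent space. The
dimensions (257 3DMM parameters, an 18 x 512 W+ code) and the mapper
architecture are assumptions chosen for illustration, not the paper's
design.

```python
import torch
from torch import nn

class ParamsToLatentOffset(nn.Module):
    """Map a change in 3DMM parameters to a W+ latent offset, leaving the
    pretrained StyleGAN generator itself untouched (frozen)."""
    def __init__(self, n_params=257, n_layers=18, w_dim=512):
        super().__init__()
        self.n_layers, self.w_dim = n_layers, w_dim
        self.mlp = nn.Sequential(
            nn.Linear(n_params, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, n_layers * w_dim),
        )

    def forward(self, w_plus, delta_params):
        # w_plus: (N, n_layers, w_dim); delta_params: (N, n_params)
        offset = self.mlp(delta_params).view(-1, self.n_layers, self.w_dim)
        return w_plus + offset

# Usage sketch: perturb a hypothetical expression slice of the parameters,
# then decode the shifted latent with the frozen generator (not shown).
mapper = ParamsToLatentOffset()
w_plus = torch.randn(1, 18, 512)   # W+ code of the source image
delta = torch.zeros(1, 257)
delta[:, 80:144] = 0.5             # hypothetical expression dimensions
edited_w = mapper(w_plus, delta)
```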
- GAN2X: Non-Lambertian Inverse Rendering of Image GANs [85.76426471872855]
We present GAN2X, a new method for unsupervised inverse rendering that only uses unpaired images for training.
Unlike previous Shape-from-GAN approaches that mainly focus on 3D shapes, we make the first attempt to also recover non-Lambertian material properties by exploiting the pseudo-paired data generated by a GAN.
Experiments demonstrate that GAN2X can accurately decompose 2D images into 3D shape, albedo, and specular properties for different object categories, and achieves state-of-the-art performance for unsupervised single-view 3D face reconstruction.
arXiv Detail & Related papers (2022-06-18T16:58:49Z)
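For context on "non-Lambertian" in the summary above: a Lambertian model
keeps only the diffuse term below, while the specular term is the extra
component GAN2X aims to recover. This is a generic Blinn-Phong shading
sketch for illustration, not the paper's exact reflectance model.

```python
import torch
import torch.nn.functional as F

def blinn_phong(normals, light_dir, view_dir, albedo, spec, shininess=32.0):
    """Per-pixel shading; normals/light_dir/view_dir are unit vectors of
    shape (..., 3), albedo is (..., 3), spec is (..., 1)."""
    # Lambertian (diffuse) term: albedo scaled by the cosine of incidence.
    n_dot_l = (normals * light_dir).sum(-1, keepdim=True).clamp(min=0.0)
    diffuse = albedo * n_dot_l
    # Non-Lambertian (specular) term via the Blinn half-vector.
    half = F.normalize(light_dir + view_dir, dim=-1)
    n_dot_h = (normals * half).sum(-1, keepdim=True).clamp(min=0.0)
    specular = spec * n_dot_h ** shininess
    return diffuse + specular
```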
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previously reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
arXiv Detail & Related papers (2022-04-06T21:39:24Z)
- Towards Realistic 3D Embedding via View Alignment [53.89445873577063]
This paper presents an innovative View Alignment GAN (VA-GAN) that composes new images by embedding 3D models into 2D background images realistically and automatically.
VA-GAN consists of a texture generator and a differential discriminator that are inter-connected and end-to-end trainable.
arXiv Detail & Related papers (2020-07-14T14:45:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.