Expression Domain Translation Network for Cross-domain Head Reenactment
- URL: http://arxiv.org/abs/2310.10073v2
- Date: Mon, 6 Nov 2023 09:40:23 GMT
- Title: Expression Domain Translation Network for Cross-domain Head Reenactment
- Authors: Taewoong Kang, Jeongsik Oh, Jaeseong Lee, Sunghyun Park, Jaegul Choo
- Abstract summary: Cross-domain head reenactment aims to transfer human motions to non-human domains, such as cartoon characters.
Previous work introduced a large-scale anime dataset called AnimeCeleb and a cross-domain head reenactment model.
We introduce a novel expression domain translation network that transforms human expressions into anime expressions.
- Score: 35.42539568449744
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the remarkable advancements in head reenactment, existing
methods face challenges in cross-domain head reenactment, which aims to
transfer human motions to non-human domains, such as cartoon characters.
Extracting motion from out-of-domain images remains difficult because of their
distinct appearances, such as large eyes. Recently, previous work introduced a
large-scale anime dataset called AnimeCeleb and a cross-domain head reenactment
model, including an optimization-based mapping function that translates the
human domain's expressions to the anime domain. However, we found that this
mapping function, which relies on only a subset of expressions, limits the
range of expressions it can map. To address this, we introduce a novel
expression domain translation network that transforms human expressions into
anime expressions. Specifically, to maintain geometric consistency between the
expressions at the input and output of the translation network, we employ a 3D
geometric-aware loss function that reduces the distances between corresponding
vertices of the human and anime 3D meshes. Doing so enforces a high-fidelity,
one-to-one mapping between the two expression domains. Our method outperforms
existing methods in both qualitative and quantitative evaluations, marking a
significant advancement in the field of cross-domain head reenactment.
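The abstract does not write the 3D geometric-aware loss out explicitly. As a minimal sketch of what such a vertex-level objective could look like (the notation here is an illustrative assumption, not the paper's own): let T be the expression domain translation network, e a human expression code, M_h and M_a differentiable mappings from expression parameters to human and anime mesh vertices, and V a set of corresponding vertex indices. Then

\mathcal{L}_{\mathrm{geo}} = \frac{1}{|V|} \sum_{v \in V} \left\lVert M_h(e)_v - M_a\bigl(T(e)\bigr)_v \right\rVert_2^2

Penalizing distances between vertices in 3D, rather than distances between parameter vectors, couples the translated anime expression to the geometry of the human expression, which is what pushes the network toward the high-fidelity, one-to-one mapping described above.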
Related papers
- FreeAvatar: Robust 3D Facial Animation Transfer by Learning an Expression Foundation Model [45.0201701977516]
Video-driven 3D facial animation transfer aims to drive avatars to reproduce the expressions of actors.
We propose FreeAvatar, a robust facial animation transfer method that relies solely on our learned expression representation.
arXiv Detail & Related papers (2024-09-20T03:17:01Z)
- Cross-domain and Cross-dimension Learning for Image-to-Graph Transformers [50.576354045312115]
Direct image-to-graph transformation is a challenging task that solves object detection and relationship prediction in a single model.
We introduce a set of methods enabling cross-domain and cross-dimension transfer learning for image-to-graph transformers.
We demonstrate our method's utility in cross-domain and cross-dimension experiments, where we pretrain our models on 2D satellite images before applying them to vastly different target domains in 2D and 3D.
arXiv Detail & Related papers (2024-03-11T10:48:56Z)
- ToonTalker: Cross-Domain Face Reenactment [80.52472147553333]
Cross-domain face reenactment involves driving a cartoon image with the video of a real person and vice versa.
Recently, many works have focused on one-shot talking face generation to drive a portrait with a real video.
We propose a transformer-based framework to align the motions from different domains into a common latent space.
arXiv Detail & Related papers (2023-08-24T15:43:14Z)
- Facial Expression Translation using Landmark Guided GANs [84.64650795005649]
We propose a powerful Landmark guided Generative Adversarial Network (LandmarkGAN) for facial expression-to-expression translation.
The proposed LandmarkGAN achieves better results than state-of-the-art approaches while using only a single image.
arXiv Detail & Related papers (2022-09-05T20:52:42Z)
- Unaligned Image-to-Image Translation by Learning to Reweight [40.93678165567824]
Unsupervised image-to-image translation aims at learning the mapping from the source to the target domain without using paired images for training.
An essential yet restrictive assumption for unsupervised image translation is that the two domains are aligned.
We propose to select images based on importance reweighting and develop a method to learn the weights and perform translation simultaneously and automatically.
arXiv Detail & Related papers (2021-09-24T04:08:22Z)
- Cross-Domain and Disentangled Face Manipulation with 3D Guidance [33.43993665841577]
We propose the first method to manipulate faces in arbitrary domains using human 3DMM.
This is achieved through two major steps: 1) disentangled mapping from 3DMM parameters to the latent space embedding of a pre-trained StyleGAN2; and 2) cross-domain adaptation that bridges domain discrepancies and makes the human 3DMM applicable to out-of-domain faces.
Experiments and comparisons demonstrate the superiority of our high-quality semantic manipulation method on a variety of face domains.
arXiv Detail & Related papers (2021-04-22T17:59:50Z)
- Everything's Talkin': Pareidolia Face Reenactment [119.49707201178633]
Pareidolia Face Reenactment is defined as animating a static illusory face to move in tandem with a human face in the video.
Compared with traditional human face reenactment, pareidolia face reenactment poses two additional challenges: shape variance and texture variance.
We propose a novel Parametric Unsupervised Reenactment Algorithm to tackle these two challenges.
arXiv Detail & Related papers (2021-04-07T11:19:13Z)
- Facial Expression Retargeting from Human to Avatar Made Easy [34.86394328702422]
Facial expression retargeting from humans to virtual characters is a useful technique in computer graphics and animation.
Traditional methods use markers or blendshapes to construct a mapping between the human and avatar faces.
We propose a brand-new solution to this cross-domain expression transfer problem via nonlinear expression embedding and expression domain translation.
arXiv Detail & Related papers (2020-08-12T04:55:54Z)
- CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)