A Keypoint Based Enhancement Method for Audio Driven Free View Talking Head Synthesis
- URL: http://arxiv.org/abs/2210.03335v1
- Date: Fri, 7 Oct 2022 05:44:10 GMT
- Title: A Keypoint Based Enhancement Method for Audio Driven Free View Talking Head Synthesis
- Authors: Yichen Han, Ya Li, Yingming Gao, Jinlong Xue, Songpo Wang, Lei Yang
- Abstract summary: A Keypoint Based Enhancement (KPBE) method is proposed for audio driven free view talking head synthesis.
Experiments show that our proposed enhancement method improved the quality of talking-head videos in terms of mean opinion score.
- Score: 14.303621416852602
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Audio driven talking head synthesis is a challenging task that has
attracted increasing attention in recent years. Although existing methods based
on 2D landmarks or 3D face models can synthesize accurate lip synchronization
and rhythmic head poses for arbitrary identities, they still have limitations,
such as the cut feeling in the mouth mapping and the lack of skin highlights.
In addition, the morphed region is blurry compared to the surrounding face. A
Keypoint Based Enhancement (KPBE) method is proposed for audio driven free view
talking head synthesis to improve the naturalness of the generated video.
First, existing methods are used as the backend to synthesize intermediate
results. Then keypoint decomposition is applied to extract video synthesis
controlling parameters from the backend output and the source image. After
that, the controlling parameters are composed into the source keypoints and
the driving keypoints. Finally, a motion field based method is used to generate
the final image from the keypoint representation. With the keypoint
representation, we overcome the cut feeling in the mouth mapping and the lack
of skin highlights. Experiments show that our proposed enhancement method
improves the quality of talking-head videos in terms of mean opinion score.
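To give a concrete picture of the pipeline described in the abstract, the following is a minimal, hypothetical sketch in Python/NumPy. The function names (extract_keypoints, compose_keypoints, dense_motion_field, warp, enhance_frame) and the simplified Gaussian-blended motion field are illustrative assumptions standing in for the paper's learned modules, not the authors' actual implementation or API.

import numpy as np


def extract_keypoints(image, num_kp=10, seed=0):
    # Stub for the learned keypoint-decomposition module: in the paper this
    # network predicts keypoints (and local motion parameters) from a frame.
    # Here we just draw fixed pseudo-random points in normalized [-1, 1] coords.
    rng = np.random.default_rng(seed)
    return rng.uniform(-1.0, 1.0, size=(num_kp, 2))  # rows are (x, y)


def compose_keypoints(source_kp, backend_kp, alpha=1.0):
    # Compose the controlling parameters: move the source keypoints by the
    # motion observed in the backend (intermediate) frame, scaled by alpha.
    return source_kp + alpha * (backend_kp - source_kp)


def dense_motion_field(source_kp, driving_kp, height, width, sigma=0.1):
    # Build a dense backward flow by softly blending per-keypoint translations,
    # a simplified stand-in for a learned dense-motion network.
    ys, xs = np.meshgrid(np.linspace(-1, 1, height),
                         np.linspace(-1, 1, width), indexing="ij")
    grid = np.stack([xs, ys], axis=-1)                    # (H, W, 2)
    diff = grid[None] - driving_kp[:, None, None, :]      # (K, H, W, 2)
    weights = np.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8  # (K, H, W)
    shift = source_kp - driving_kp                        # per-keypoint motion
    flow = (weights[..., None] * shift[:, None, None, :]).sum(axis=0)
    return grid + flow                                    # source sampling grid


def warp(image, sample_grid):
    # Nearest-neighbour warp of the source image by the motion field.
    h, w = image.shape[:2]
    xs = np.clip(np.rint((sample_grid[..., 0] + 1) / 2 * (w - 1)).astype(int), 0, w - 1)
    ys = np.clip(np.rint((sample_grid[..., 1] + 1) / 2 * (h - 1)).astype(int), 0, h - 1)
    return image[ys, xs]


def enhance_frame(source_image, backend_frame):
    # KPBE-style enhancement of one frame, following the abstract:
    # 1) extract keypoints / controlling parameters from the source image and
    #    the backend output, 2) compose them into source and driving keypoints,
    # 3) render the final frame from the source via a motion-field-based warp.
    src_kp = extract_keypoints(source_image, seed=0)
    drv_kp = compose_keypoints(src_kp, extract_keypoints(backend_frame, seed=1))
    field = dense_motion_field(src_kp, drv_kp, *source_image.shape[:2])
    return warp(source_image, field)


if __name__ == "__main__":
    src = np.random.rand(64, 64, 3)      # stand-in for the source identity image
    backend = np.random.rand(64, 64, 3)  # stand-in for one backend output frame
    print(enhance_frame(src, backend).shape)  # (64, 64, 3)

In the paper these steps are learned networks and the final frame comes from a generator conditioned on the motion field, so this sketch only conveys the data flow, not the quality of the actual method.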
Related papers
- KMTalk: Speech-Driven 3D Facial Animation with Key Motion Embedding [19.15471840100407]
We present a novel approach for synthesizing 3D facial motions from audio sequences using key motion embeddings.
Our method integrates linguistic and data-driven priors through two modules: the linguistic-based key motion acquisition and the cross-modal motion completion.
The latter extends key motions into a full sequence of 3D talking faces guided by audio features, improving temporal coherence and audio-visual consistency.
arXiv Detail & Related papers (2024-09-02T09:41:24Z)
- High-fidelity and Lip-synced Talking Face Synthesis via Landmark-based Diffusion Model [89.29655924125461]
We propose a novel landmark-based diffusion model for talking face generation.
We first establish the less ambiguous mapping from audio to the landmark motion of the lips and jaw.
Then, we introduce an innovative conditioning module called TalkFormer to align the synthesized motion with the motion represented by landmarks.
arXiv Detail & Related papers (2024-08-10T02:58:28Z)
- Speech2UnifiedExpressions: Synchronous Synthesis of Co-Speech Affective Face and Body Expressions from Affordable Inputs [67.27840327499625]
We present a multimodal learning-based method to simultaneously synthesize co-speech facial expressions and upper-body gestures for digital characters.
Our approach learns from sparse face landmarks and upper-body joints, estimated directly from video data, to generate plausible emotive character motions.
arXiv Detail & Related papers (2024-06-26T04:53:11Z)
- Controllable Talking Face Generation by Implicit Facial Keypoints Editing [6.036277153327655]
We present ControlTalk, a talking face generation method that controls facial expression deformation based on the driving audio.
Our experiments show that our method outperforms the state of the art on widely used benchmarks, including HDTF and MEAD.
arXiv Detail & Related papers (2024-06-05T02:54:46Z)
- Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video [91.92782707888618]
We present a decomposition-composition framework named Speech to Lip (Speech2Lip) that disentangles speech-sensitive and speech-insensitive motion/appearance.
We show that our model can be trained on a video of just a few minutes in length and achieve state-of-the-art performance in both visual quality and speech-visual synchronization.
arXiv Detail & Related papers (2023-09-09T14:52:39Z)
- Pose-Controllable 3D Facial Animation Synthesis using Hierarchical Audio-Vertex Attention [52.63080543011595]
A novel pose-controllable 3D facial animation synthesis method is proposed by utilizing hierarchical audio-vertex attention.
The proposed method can produce more realistic facial expressions and head posture movements.
arXiv Detail & Related papers (2023-02-24T09:36:31Z)
- AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis [55.24336227884039]
We present a novel framework to generate high-fidelity talking head video.
We use neural scene representation networks to bridge the gap between audio input and video output.
Our framework can (1) produce high-fidelity and natural results, and (2) support free adjustment of audio signals, viewing directions, and background images.
arXiv Detail & Related papers (2021-03-20T02:58:13Z)
- Facial Keypoint Sequence Generation from Audio [2.66512000865131]
It is the first work to propose an audio-keypoint dataset and learn a model that outputs a plausible keypoint sequence to go with audio of any arbitrary length.
arXiv Detail & Related papers (2020-11-02T16:47:52Z)
- Everybody's Talkin': Let Me Talk as You Want [134.65914135774605]
We present a method to edit a target portrait footage by taking a sequence of audio as input to synthesize a photo-realistic video.
It does not assume a person-specific rendering network, yet it is capable of translating arbitrary source audio into arbitrary video output.
arXiv Detail & Related papers (2020-01-15T09:54:23Z)