Talking Head Generation with Probabilistic Audio-to-Visual Diffusion Priors
- URL: http://arxiv.org/abs/2212.04248v1
- Date: Wed, 7 Dec 2022 17:55:41 GMT
- Title: Talking Head Generation with Probabilistic Audio-to-Visual Diffusion Priors
- Authors: Zhentao Yu, Zixin Yin, Deyu Zhou, Duomin Wang, Finn Wong, Baoyuan Wang
- Abstract summary: We introduce a simple and novel framework for one-shot audio-driven talking head generation.
We probabilistically sample all the holistic lip-irrelevant facial motions to semantically match the input audio.
Thanks to the probabilistic nature of the diffusion prior, one major advantage of our framework is that it can synthesize diverse facial motion sequences.
- Score: 18.904856604045264
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce a simple and novel framework for one-shot
audio-driven talking head generation. Unlike prior works that require
additional driving sources for controlled synthesis in a deterministic manner,
we instead probabilistically sample all the holistic lip-irrelevant facial
motions (i.e. pose, expression, blink, gaze, etc.) to semantically match the
input audio while still maintaining both the photo-realism of audio-lip
synchronization and the overall naturalness. This is achieved by our newly
proposed audio-to-visual diffusion prior trained on top of the mapping between
audio and disentangled non-lip facial representations. Thanks to the
probabilistic nature of the diffusion prior, one major advantage of our framework
is that it can synthesize diverse facial motion sequences given the same audio clip,
which is quite user-friendly for many real applications. Through comprehensive
evaluations on public benchmarks, we conclude that (1) our diffusion prior
significantly outperforms the auto-regressive prior on almost all the concerned
metrics; (2) our overall system is competitive with prior works in terms of
audio-lip synchronization but can effectively sample rich and natural-looking
lip-irrelevant facial motions that remain semantically harmonized with the
audio input.
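To make the sampling idea in the abstract concrete, below is a minimal sketch of a DDPM-style diffusion prior that draws lip-irrelevant motion coefficients (pose, expression, blink, gaze) conditioned on per-frame audio features. The module names, dimensions, and noise schedule are illustrative assumptions, not the authors' released model; the sketch only shows why repeated sampling from the same audio can yield diverse motion.
```python
# Minimal sketch (assumptions, not the paper's implementation): a conditional
# DDPM prior over per-frame, lip-irrelevant facial motion coefficients.
import torch
import torch.nn as nn

AUDIO_DIM, MOTION_DIM, T_FRAMES, STEPS = 128, 64, 25, 50

class CondDenoiser(nn.Module):
    """Predicts the noise added to a motion sequence, given audio features and a timestep."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(MOTION_DIM + AUDIO_DIM + 1, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, MOTION_DIM),
        )

    def forward(self, x_t, audio, t):
        # x_t: (B, T, MOTION_DIM) noisy motion, audio: (B, T, AUDIO_DIM), t: (B,) step in [0, 1]
        t_emb = t.view(-1, 1, 1).expand(-1, x_t.shape[1], 1)
        return self.net(torch.cat([x_t, audio, t_emb], dim=-1))

@torch.no_grad()
def sample_motion(denoiser, audio, steps=STEPS):
    """Ancestral DDPM sampling: start from Gaussian noise and denoise step by step.
    Different random draws give different, audio-consistent motion sequences."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(audio.shape[0], T_FRAMES, MOTION_DIM)
    for i in reversed(range(steps)):
        t = torch.full((audio.shape[0],), i / steps)
        eps = denoiser(x, audio, t)
        mean = (x - betas[i] / torch.sqrt(1.0 - alpha_bar[i]) * eps) / torch.sqrt(alphas[i])
        noise = torch.randn_like(x) if i > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[i]) * noise
    return x  # (B, T_FRAMES, MOTION_DIM): pose/expression/blink/gaze coefficients

if __name__ == "__main__":
    denoiser = CondDenoiser()                    # in practice: trained on paired (audio, motion) data
    audio = torch.randn(2, T_FRAMES, AUDIO_DIM)  # stand-in for per-frame audio features
    print(sample_motion(denoiser, audio).shape)  # torch.Size([2, 25, 64])
```
Re-running `sample_motion` on the same audio with different random seeds yields different motion trajectories, which is the diversity property the abstract highlights.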
Related papers
- Sonic: Shifting Focus to Global Audio Perception in Portrait Animation [43.63279351897198]
The study of talking face generation mainly explores the intricacies of synchronizing facial movements and crafting visually appealing, temporally-coherent animations.
We propose a novel paradigm, dubbed Sonic, that leverages global audio knowledge to enhance overall perception.
Extensive experiments demonstrate that this audio-driven paradigm outperforms existing SOTA methods in terms of video quality, temporal consistency, lip-synchronization precision, and motion diversity.
arXiv Detail & Related papers (2024-11-25T12:24:52Z)
- S^3D-NeRF: Single-Shot Speech-Driven Neural Radiance Field for High Fidelity Talking Head Synthesis [14.437741528053504]
We design a Single-Shot Speech-Driven Neural Radiance Field (S^3D-NeRF) method to tackle three difficulties: learning a representative appearance feature for each identity, modeling the motion of different face regions from audio, and keeping the temporal consistency of the lip area.
Our S3D-NeRF surpasses previous arts on both video fidelity and audio-lip synchronization.
arXiv Detail & Related papers (2024-08-18T03:59:57Z)
- High-fidelity and Lip-synced Talking Face Synthesis via Landmark-based Diffusion Model [89.29655924125461]
We propose a novel landmark-based diffusion model for talking face generation.
We first establish a less ambiguous mapping from audio to the landmark motion of the lip and jaw (an illustrative sketch of this audio-to-landmark stage follows this entry).
Then, we introduce an innovative conditioning module called TalkFormer to align the synthesized motion with the motion represented by the landmarks.
arXiv Detail & Related papers (2024-08-10T02:58:28Z)
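Below is an illustrative sketch of the first stage summarized in the entry above: regressing per-frame lip and jaw landmark motion from audio features. The GRU regressor, dimensions, and landmark count are assumptions for exposition only; the paper itself uses a landmark-based diffusion model with its TalkFormer conditioning module rather than this simple regressor.
```python
# Toy sketch (assumed architecture): map audio features to per-frame lip/jaw
# landmark displacements, the kind of intermediate used before rendering.
import torch
import torch.nn as nn

AUDIO_DIM, N_LANDMARKS = 128, 20   # e.g. 20 lip/jaw landmarks, (x, y) each

class AudioToLandmark(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(AUDIO_DIM, 256, batch_first=True)
        self.head = nn.Linear(256, N_LANDMARKS * 2)  # per-frame (dx, dy) offsets

    def forward(self, audio):
        # audio: (B, T, AUDIO_DIM) -> landmark displacements (B, T, N_LANDMARKS, 2)
        h, _ = self.encoder(audio)
        return self.head(h).view(audio.shape[0], audio.shape[1], N_LANDMARKS, 2)

if __name__ == "__main__":
    model = AudioToLandmark()
    audio = torch.randn(1, 25, AUDIO_DIM)   # stand-in audio features
    print(model(audio).shape)               # torch.Size([1, 25, 20, 2])
```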
- RealTalk: Real-time and Realistic Audio-driven Face Generation with 3D Facial Prior-guided Identity Alignment Network [48.95833484103569]
RealTalk consists of an audio-to-expression transformer and a high-fidelity expression-to-face framework.
In the first component, we consider both identity and intra-personal variation features related to speaking lip movements.
In the second component, we design a lightweight facial identity alignment (FIA) module.
This novel design allows us to generate fine details in real-time, without depending on sophisticated and inefficient feature alignment modules.
arXiv Detail & Related papers (2024-06-26T12:09:59Z)
- DiffTalk: Crafting Diffusion Models for Generalized Audio-Driven Portraits Animation [78.08004432704826]
We model talking head generation as an audio-driven, temporally coherent denoising process (DiffTalk).
In this paper, we investigate the control mechanism of the talking face and incorporate reference face images and landmarks as conditions for personality-aware, generalized synthesis (a toy sketch of this conditioning follows this entry).
Our DiffTalk can be gracefully tailored for higher-resolution synthesis with negligible extra computational cost.
arXiv Detail & Related papers (2023-01-10T05:11:25Z)
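As an illustration of the conditioning summarized in the entry above, the sketch below packs per-frame audio features, a reference-image embedding, and facial landmarks into a single conditioning vector for a denoising network. The encoder, shapes, and fusion-by-concatenation are assumptions for exposition, not DiffTalk's actual architecture or condition encoders.
```python
# Toy sketch (assumed shapes and encoder): assembling audio, landmark, and
# reference-image conditions into one vector fed to a denoiser at every step.
import torch
import torch.nn as nn

AUDIO_DIM, LM_DIM, REF_DIM, COND_DIM = 128, 68 * 2, 512, 256

class ConditionEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(AUDIO_DIM + LM_DIM + REF_DIM, COND_DIM)

    def forward(self, audio, landmarks, ref_embed):
        # audio: (B, T, 128), landmarks: (B, T, 136), ref_embed: (B, 512)
        ref = ref_embed.unsqueeze(1).expand(-1, audio.shape[1], -1)
        return self.proj(torch.cat([audio, landmarks, ref], dim=-1))  # (B, T, 256)

if __name__ == "__main__":
    enc = ConditionEncoder()
    cond = enc(torch.randn(2, 25, AUDIO_DIM),
               torch.randn(2, 25, LM_DIM),
               torch.randn(2, REF_DIM))
    print(cond.shape)  # torch.Size([2, 25, 256])
```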
- Audio-Visual Speech Codecs: Rethinking Audio-Visual Speech Enhancement by Re-Synthesis [67.73554826428762]
We propose a novel audio-visual speech enhancement framework for high-fidelity telecommunications in AR/VR.
Our approach leverages audio-visual speech cues to generate the codes of a neural speech codec, enabling efficient synthesis of clean, realistic speech from noisy signals.
arXiv Detail & Related papers (2022-03-31T17:57:10Z)
- Towards Realistic Visual Dubbing with Heterogeneous Sources [22.250010330418398]
Few-shot visual dubbing involves synchronizing the lip movements with arbitrary speech input for any talking head.
We propose a simple yet efficient two-stage framework with higher flexibility in mining heterogeneous data.
Our method makes it possible to utilize the training corpora for the two-stage sub-networks independently.
arXiv Detail & Related papers (2022-01-17T07:57:24Z)
- MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement [142.9900055577252]
We propose a generic audio-driven facial animation approach that achieves highly realistic motion synthesis results for the entire face.
Our approach ensures highly accurate lip motion, while also producing plausible animation of the parts of the face that are uncorrelated with the audio signal, such as eye blinks and eyebrow motion.
arXiv Detail & Related papers (2021-04-16T17:05:40Z)
- VisualVoice: Audio-Visual Speech Separation with Cross-Modal Consistency [111.55430893354769]
Given a video, the goal is to extract the speech associated with a face in spite of simultaneous background sounds and/or other human speakers.
Our approach jointly learns audio-visual speech separation and cross-modal speaker embeddings from unlabeled video.
It yields state-of-the-art results on five benchmark datasets for audio-visual speech separation and enhancement.
arXiv Detail & Related papers (2021-01-08T18:25:24Z)