Modality Dropout for Improved Performance-driven Talking Faces
- URL: http://arxiv.org/abs/2005.13616v1
- Date: Wed, 27 May 2020 19:55:33 GMT
- Title: Modality Dropout for Improved Performance-driven Talking Faces
- Authors: Ahmed Hussen Abdelaziz and Barry-John Theobald and Paul Dixon and
Reinhard Knothe and Nicholas Apostoloff and Sachin Kajareker
- Abstract summary: We describe our novel deep learning approach for driving animated faces using both acoustic and visual information.
We use subjective testing to demonstrate: 1) the improvement of audiovisual-driven animation over the equivalent video-only approach, and 2) the improvement in the animation of speech-related facial movements after introducing modality dropout.
- Score: 5.6856010789797296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We describe our novel deep learning approach for driving animated faces using
both acoustic and visual information. In particular, speech-related facial
movements are generated using audiovisual information, and non-speech facial
movements are generated using only visual information. To ensure that our model
exploits both modalities during training, batches are generated that contain
audio-only, video-only, and audiovisual input features. The probability of
dropping a modality allows control over the degree to which the model exploits
audio and visual information during training. Our trained model runs in
real time on resource-limited hardware (e.g., a smartphone), is user
agnostic, and does not depend on a potentially error-prone transcription of
the speech. We use subjective testing to demonstrate: 1) the improvement of
audiovisual-driven animation over the equivalent video-only approach, and 2)
the improvement in the animation of speech-related facial movements after
introducing modality dropout. Before introducing dropout, viewers prefer
audiovisual-driven animation in 51% of the test sequences compared with only
18% for video-driven. After introducing dropout, viewer preference for
audiovisual-driven animation increases to 74%, but decreases to 8% for
video-only.
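A minimal sketch of the batch-level modality dropout described in the abstract, written in PyTorch; the dropout probabilities, feature dimensions, and function name are illustrative assumptions rather than details taken from the paper:

    import torch

    def modality_dropout(audio_feats, video_feats, p_audio_only=0.25, p_video_only=0.25):
        # For each training example, either keep both modalities, zero the visual
        # features (audio-only example), or zero the acoustic features (video-only
        # example). The probabilities control how strongly the model is pushed to
        # exploit each modality; the values here are illustrative.
        r = torch.rand(audio_feats.shape[0])
        keep_audio_only = r < p_audio_only
        keep_video_only = (r >= p_audio_only) & (r < p_audio_only + p_video_only)

        audio_out = audio_feats.clone()
        video_out = video_feats.clone()
        video_out[keep_audio_only] = 0.0   # drop video -> audio-only example
        audio_out[keep_video_only] = 0.0   # drop audio -> video-only example
        return audio_out, video_out

    # Example: a batch of 8 frames with 80-dim acoustic and 64-dim visual features.
    audio_in, video_in = modality_dropout(torch.randn(8, 80), torch.randn(8, 64))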
Related papers
- Multimodal Input Aids a Bayesian Model of Phonetic Learning [0.6827423171182154]
We introduce a method for creating high-quality synthetic videos of speakers' faces for an existing audio corpus.
Our learning model, when both trained and tested on audiovisual inputs, achieves up to an 8.1% relative improvement on a phoneme discrimination battery.
Visual information is especially beneficial in noisy audio environments.
arXiv Detail & Related papers (2024-07-22T19:00:11Z)
- Exploring the Role of Audio in Video Captioning [59.679122191706426]
We present an audio-visual framework, which aims to fully exploit the potential of the audio modality for captioning.
We propose new local-global fusion mechanisms to improve information exchange across audio and video.
arXiv Detail & Related papers (2023-06-21T20:54:52Z)
- FaceXHuBERT: Text-less Speech-driven E(X)pressive 3D Facial Animation Synthesis Using Self-Supervised Speech Representation Learning [0.0]
FaceXHuBERT is a text-less speech-driven 3D facial animation generation method.
It is very robust to background noise and can handle audio recorded in a variety of situations.
It produces results judged superior in animation realism 78% of the time.
arXiv Detail & Related papers (2023-03-09T17:05:19Z)
- Audio-Visual Speech Codecs: Rethinking Audio-Visual Speech Enhancement by Re-Synthesis [67.73554826428762]
We propose a novel audio-visual speech enhancement framework for high-fidelity telecommunications in AR/VR.
Our approach leverages audio-visual speech cues to generate the codes of a neural speech codec, enabling efficient synthesis of clean, realistic speech from noisy signals.
arXiv Detail & Related papers (2022-03-31T17:57:10Z)
- MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement [142.9900055577252]
We propose a generic audio-driven facial animation approach that achieves highly realistic motion synthesis results for the entire face.
Our approach ensures highly accurate lip motion, while also producing plausible animation of the parts of the face that are uncorrelated with the audio signal, such as eye blinks and eyebrow motion.
arXiv Detail & Related papers (2021-04-16T17:05:40Z)
- Learning to Predict Salient Faces: A Novel Visual-Audio Saliency Model [96.24038430433885]
We propose a novel multi-modal video saliency model consisting of three branches: visual, audio and face.
Experimental results show that the proposed method outperforms 11 state-of-the-art saliency prediction methods.
arXiv Detail & Related papers (2021-03-29T09:09:39Z)
- Learning Speech Representations from Raw Audio by Joint Audiovisual Self-Supervision [63.564385139097624]
We propose a method to learn self-supervised speech representations from the raw audio waveform.
We train a raw audio encoder by combining audio-only self-supervision (by predicting informative audio attributes) with visual self-supervision (by generating talking faces from audio).
Our results demonstrate the potential of multimodal self-supervision in audiovisual speech for learning good audio representations.
arXiv Detail & Related papers (2020-07-08T14:07:06Z)
- Visually Guided Self Supervised Learning of Speech Representations [62.23736312957182]
We propose a framework for learning audio representations guided by the visual modality in the context of audiovisual speech.
We employ a generative audio-to-video training scheme in which we animate a still image corresponding to a given audio clip and optimize the generated video to be as close as possible to the real video of the speech segment.
We achieve state-of-the-art results for emotion recognition and competitive results for speech recognition (see the sketch after this entry).
arXiv Detail & Related papers (2020-01-13T14:53:22Z)
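A minimal sketch of the generative audio-to-video self-supervision described in the last entry above: a still image is animated from an audio clip and the generated video is optimized to match the real video. The encoders, decoder, shapes, and the L1 reconstruction loss are illustrative assumptions, since the summary gives no architectural details:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Illustrative stand-ins for the audio encoder, identity-image encoder,
    # and video-frame decoder.
    audio_encoder = nn.GRU(input_size=80, hidden_size=256, batch_first=True)
    image_encoder = nn.Linear(3 * 64 * 64, 256)
    frame_decoder = nn.Linear(512, 3 * 64 * 64)

    def audio_to_video_loss(audio_feats, still_image, real_frames):
        # audio_feats: (B, T, 80), still_image: (B, 3, 64, 64),
        # real_frames: (B, T, 3, 64, 64) real video of the speech segment.
        B, T, _ = audio_feats.shape
        h, _ = audio_encoder(audio_feats)                  # (B, T, 256)
        z = image_encoder(still_image.flatten(1))          # (B, 256)
        z = z.unsqueeze(1).expand(-1, T, -1)               # (B, T, 256)
        gen = frame_decoder(torch.cat([h, z], dim=-1))     # (B, T, 3*64*64)
        gen = gen.view(B, T, 3, 64, 64)
        # Minimizing this loss trains the audio encoder to produce speech
        # representations that drive the generated video toward the real one.
        return F.l1_loss(gen, real_frames)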