Large-scale multilingual audio visual dubbing
- URL: http://arxiv.org/abs/2011.03530v1
- Date: Fri, 6 Nov 2020 18:58:15 GMT
- Title: Large-scale multilingual audio visual dubbing
- Authors: Yi Yang, Brendan Shillingford, Yannis Assael, Miaosen Wang, Wendi Liu,
Yutian Chen, Yu Zhang, Eren Sezener, Luis C. Cobo, Misha Denil, Yusuf Aytar,
Nando de Freitas
- Abstract summary: We describe a system for large-scale audiovisual translation and dubbing.
The source language's speech content is transcribed to text, translated, and automatically synthesized into target language speech.
The visual content is translated by synthesizing lip movements for the speaker to match the translated audio.
- Score: 31.43873011591989
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We describe a system for large-scale audiovisual translation and dubbing,
which translates videos from one language to another. The source language's
speech content is transcribed to text, translated, and automatically
synthesized into target language speech using the original speaker's voice. The
visual content is translated by synthesizing lip movements for the speaker to
match the translated audio, creating a seamless audiovisual experience in the
target language. The audio and visual translation subsystems each contain a
large-scale generic synthesis model trained on thousands of hours of data in
the corresponding domain. These generic models are fine-tuned to a specific
speaker before translation, either using an auxiliary corpus of data from the
target speaker, or using the video to be translated itself as the input to the
fine-tuning process. This report gives an architectural overview of the full
system, as well as an in-depth discussion of the video dubbing component. The
role of the audio and text components in relation to the full system is
outlined, but their design is not discussed in detail. Translated and dubbed
demo videos generated using our system can be viewed at
https://www.youtube.com/playlist?list=PLSi232j2ZA6_1Exhof5vndzyfbxAhhEs5
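The abstract describes a cascaded pipeline: source speech is transcribed, the transcript is translated, target-language speech is synthesized in the original speaker's voice, and the speaker's lip movements are re-rendered to match the new audio, with large generic audio and visual synthesis models fine-tuned to the speaker beforehand. The sketch below is a minimal illustration of that cascade; all class and function names (SpeechRecognizer, Translator, VoiceSynthesizer, LipSyncSynthesizer, dub_video) are hypothetical interfaces assumed for illustration, not the authors' actual components.

```python
# Minimal sketch of the cascaded dubbing pipeline described in the abstract.
# Every name here is an illustrative assumption, not the paper's real API.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Video:
    frames: list   # RGB frames of the original footage
    audio: bytes   # source-language speech track


class SpeechRecognizer(Protocol):
    def transcribe(self, audio: bytes) -> str: ...


class Translator(Protocol):
    def translate(self, text: str, target_lang: str) -> str: ...


class VoiceSynthesizer(Protocol):
    # Generic TTS model, assumed fine-tuned to the target speaker beforehand
    # (on an auxiliary corpus or on the input video itself).
    def synthesize(self, text: str) -> bytes: ...


class LipSyncSynthesizer(Protocol):
    # Generic visual model, likewise speaker-adapted; re-renders the mouth
    # region so the footage matches the new audio.
    def redub(self, frames: list, audio: bytes) -> list: ...


def dub_video(video: Video, target_lang: str,
              asr: SpeechRecognizer, mt: Translator,
              tts: VoiceSynthesizer, lipsync: LipSyncSynthesizer) -> Video:
    """Transcribe -> translate -> synthesize speech -> synthesize lips."""
    source_text = asr.transcribe(video.audio)             # speech to text
    target_text = mt.translate(source_text, target_lang)  # text translation
    target_audio = tts.synthesize(target_text)            # speaker's own voice
    dubbed_frames = lipsync.redub(video.frames, target_audio)
    return Video(frames=dubbed_frames, audio=target_audio)
```

In this reading, each stage is a separately trained large-scale model, and the per-speaker fine-tuning happens before dub_video is called, as the abstract notes.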
Related papers
- AV2AV: Direct Audio-Visual Speech to Audio-Visual Speech Translation with Unified Audio-Visual Speech Representation [58.72068260933836]
The input and output of the system are multimodal (i.e., audio and visual speech).
It enables natural, face-to-face-like conversations with individuals worldwide in virtual meetings, with each participant using their own primary language.
In contrast to Speech-to-Speech Translation (A2A), which solely translates between audio modalities, the proposed AV2AV directly translates between audio-visual speech.
arXiv Detail & Related papers (2023-12-05T05:36:44Z)
- AudioPaLM: A Large Language Model That Can Speak and Listen [79.44757696533709]
We introduce AudioPaLM, a large language model for speech understanding and generation.
AudioPaLM fuses text-based and speech-based language models.
It can process and generate text and speech with applications including speech recognition and speech-to-speech translation.
arXiv Detail & Related papers (2023-06-22T14:37:54Z)
- Exploring the Role of Audio in Video Captioning [59.679122191706426]
We present an audio-visual framework, which aims to fully exploit the potential of the audio modality for captioning.
We propose new local-global fusion mechanisms to improve information exchange across audio and video.
arXiv Detail & Related papers (2023-06-21T20:54:52Z)
- Visually-Aware Audio Captioning With Adaptive Audio-Visual Attention [54.4258176885084]
How to accurately recognize ambiguous sounds is a major challenge for audio captioning.
We propose visually-aware audio captioning, which makes use of visual information to help the description of ambiguous sounding objects.
Our proposed method achieves state-of-the-art results on machine translation metrics.
arXiv Detail & Related papers (2022-10-28T22:45:41Z)
- Face-Dubbing++: Lip-Synchronous, Voice Preserving Translation of Videos [54.08224321456871]
The system combines multiple component models to produce a video of the original speaker speaking in the target language.
The pipeline starts with automatic speech recognition including emphasis detection, followed by a translation model.
The resulting synthetic voice is then mapped back to the original speaker's voice using a voice conversion model (a minimal sketch of this kind of cascade appears after this list).
arXiv Detail & Related papers (2022-06-09T14:15:37Z)
- Unsupervised Audiovisual Synthesis via Exemplar Autoencoders [59.13989658692953]
We present an unsupervised approach that converts the input speech of any individual into audiovisual streams of a potentially unlimited number of output speakers.
We use Exemplar Autoencoders to learn the voice, stylistic prosody, and visual appearance of a specific target speech exemplar.
arXiv Detail & Related papers (2020-01-13T18:56:45Z)
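The Face-Dubbing++ entry above describes a different route to voice preservation than the main paper: rather than fine-tuning the synthesizer to the speaker, a generic synthetic voice is mapped back to the original speaker with a voice conversion model. The following is a minimal sketch of that stage only; GenericTTS, VoiceConverter, and voice_preserving_speech are hypothetical names assumed for illustration, not that paper's actual interfaces.

```python
# Hypothetical sketch of voice-preserving synthesis via voice conversion,
# as described in the Face-Dubbing++ summary above. Names are assumptions.
from typing import Protocol


class GenericTTS(Protocol):
    def synthesize(self, text: str) -> bytes: ...


class VoiceConverter(Protocol):
    # Maps audio in a generic synthetic voice to the target speaker's timbre.
    def convert(self, audio: bytes, speaker_id: str) -> bytes: ...


def voice_preserving_speech(text: str, speaker_id: str,
                            tts: GenericTTS, vc: VoiceConverter) -> bytes:
    """Synthesize translated text, then convert it to the original voice."""
    generic_audio = tts.synthesize(text)
    return vc.convert(generic_audio, speaker_id)
```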