Large-scale unsupervised audio pre-training for video-to-speech synthesis
- URL: http://arxiv.org/abs/2306.15464v2
- Date: Mon, 31 Jul 2023 12:09:18 GMT
- Title: Large-scale unsupervised audio pre-training for video-to-speech synthesis
- Authors: Triantafyllos Kefalas, Yannis Panagakis, Maja Pantic
- Abstract summary: Video-to-speech synthesis is the task of reconstructing the speech signal from a silent video of a speaker.
In this paper we propose to train encoder-decoder models on more than 3,500 hours of audio data at 24 kHz.
We then use the pre-trained decoders to initialize the audio decoders for the video-to-speech synthesis task.
- Score: 64.86087257004883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video-to-speech synthesis is the task of reconstructing the speech signal
from a silent video of a speaker. Most established approaches to date involve a
two-step process, whereby an intermediate representation from the video, such
as a spectrogram, is extracted first and then passed to a vocoder to produce
the raw audio. Some recent work has focused on end-to-end synthesis, in which
the raw audio and any intermediate representations are generated jointly. All
such approaches train almost exclusively on audio-visual datasets, i.e.
datasets in which every audio sample has a corresponding video sample. This
precludes the use of abundant audio-only datasets that may not have a
corresponding visual modality (e.g. audiobooks, radio podcasts, speech
recognition datasets), as well as audio-only architectures developed by the
audio machine learning community over the years. In this paper we propose to
train encoder-decoder models on more than 3,500 hours of audio data at 24 kHz,
and then use the pre-trained decoders to initialize the audio
decoders for the video-to-speech synthesis task. The pre-training step uses
audio samples only and does not require labels or corresponding samples from
other modalities (visual, text). We demonstrate that this pre-training step
improves the reconstructed speech and that it is a previously unexplored way to
improve the quality of the generator in a cross-modal task while requiring
samples from only one of the modalities. We conduct experiments using both raw
audio and mel spectrograms as target outputs and benchmark our models against
existing work.
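As a rough illustration of the recipe described in the abstract, the sketch below pre-trains a small audio autoencoder on waveforms alone and then reuses its decoder weights to initialize the audio decoder of a video-to-speech model. It is a minimal PyTorch sketch under assumed shapes: the module definitions, layer sizes, and the checkpoint name audio_decoder_pretrained.pt are hypothetical stand-ins, not the architectures used in the paper.

```python
# Illustrative sketch only -- modules, dimensions and file names are hypothetical.
import torch
import torch.nn as nn

class AudioEncoder(nn.Module):
    """Strided 1-D convolutions mapping a raw waveform to a latent sequence."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=16, stride=8, padding=4), nn.ReLU(),
            nn.Conv1d(64, latent_dim, kernel_size=16, stride=8, padding=4), nn.ReLU(),
        )
    def forward(self, wav):              # wav: (batch, 1, samples)
        return self.net(wav)             # (batch, latent_dim, frames)

class AudioDecoder(nn.Module):
    """Transposed convolutions mapping latents back to a waveform."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(latent_dim, 64, kernel_size=16, stride=8, padding=4), nn.ReLU(),
            nn.ConvTranspose1d(64, 1, kernel_size=16, stride=8, padding=4), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

# Stage 1: unsupervised pre-training on audio-only data (no labels, no video).
encoder, decoder = AudioEncoder(), AudioDecoder()
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
wav = torch.randn(4, 1, 24000)                    # stand-in for one second of 24 kHz audio
recon = decoder(encoder(wav))                     # reconstruct the waveform from its latents
loss = nn.functional.l1_loss(recon, wav)          # simple reconstruction objective
optimizer.zero_grad(); loss.backward(); optimizer.step()
torch.save(decoder.state_dict(), "audio_decoder_pretrained.pt")   # hypothetical checkpoint name

# Stage 2: initialize the audio decoder of a video-to-speech model from the checkpoint.
class VideoEncoder(nn.Module):
    """Placeholder visual front-end producing the same latent shape as AudioEncoder."""
    def __init__(self, latent_dim=256, feat_dim=512):
        super().__init__()
        self.proj = nn.Linear(feat_dim, latent_dim)      # e.g. on top of per-frame lip features
    def forward(self, feats):                            # feats: (batch, frames, feat_dim)
        return self.proj(feats).transpose(1, 2)          # (batch, latent_dim, frames)

v2s_video_encoder = VideoEncoder()
v2s_audio_decoder = AudioDecoder()
v2s_audio_decoder.load_state_dict(torch.load("audio_decoder_pretrained.pt"))
# Training then continues on paired audio-visual data, fine-tuning both modules jointly.
```

The intuition, per the abstract, is that a decoder that has already learned to map latents to 24 kHz speech from thousands of hours of audio-only data gives the cross-modal generator a stronger starting point than random initialization, while the pre-training stage itself needs no labels or paired video.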
Related papers
- Audio-visual video-to-speech synthesis with synthesized input audio [64.86087257004883]
We investigate the effect of using video and audio inputs for video-to-speech synthesis during both training and inference.
In particular, we use pre-trained video-to-speech models to synthesize the missing speech signals and then train an audio-visual-to-speech synthesis model, using both the silent video and the synthesized speech as inputs, to predict the final reconstructed speech.
arXiv Detail & Related papers (2023-07-31T11:39:05Z)
- Exploring the Role of Audio in Video Captioning [59.679122191706426]
We present an audio-visual framework, which aims to fully exploit the potential of the audio modality for captioning.
We propose new local-global fusion mechanisms to improve information exchange across audio and video.
arXiv Detail & Related papers (2023-06-21T20:54:52Z)
- CLIPSonic: Text-to-Audio Synthesis with Unlabeled Videos and Pretrained Language-Vision Models [50.42886595228255]
We propose to learn the desired text-audio correspondence by leveraging the visual modality as a bridge.
We train a conditional diffusion model to generate the audio track of a video, given a video frame encoded by a pretrained contrastive language-image pretraining (CLIP) model; a minimal sketch of this conditioning step appears after the list below.
arXiv Detail & Related papers (2023-06-16T05:42:01Z)
- End-to-End Video-To-Speech Synthesis using Generative Adversarial Networks [54.43697805589634]
We propose a new end-to-end video-to-speech model based on Generative Adversarial Networks (GANs).
Our model consists of an encoder-decoder architecture that receives raw video as input and generates speech.
We show that this model is able to reconstruct speech with remarkable realism for constrained datasets such as GRID.
arXiv Detail & Related papers (2021-04-27T17:12:30Z)
- Unsupervised Audiovisual Synthesis via Exemplar Autoencoders [59.13989658692953]
We present an unsupervised approach that converts the input speech of any individual into audiovisual streams of potentially infinitely many output speakers.
We use Exemplar Autoencoders to learn the voice, stylistic prosody, and visual appearance of a specific target speech exemplar.
arXiv Detail & Related papers (2020-01-13T18:56:45Z)
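For the CLIPSonic entry above, the conditioning step (denoising audio features given a frozen frame embedding from a contrastive language-image model) can be pictured with the minimal sketch below. It assumes the openai/clip-vit-base-patch32 checkpoint loaded through Hugging Face transformers; the toy denoiser, spectrogram shape, and cosine noise schedule are hypothetical stand-ins and not the CLIPSonic implementation.

```python
# Minimal sketch of CLIP-conditioned diffusion training (one step); not the CLIPSonic code.
import torch
import torch.nn as nn
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# 1. Encode a video frame with a frozen pretrained language-image model.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
frame = Image.new("RGB", (224, 224))                       # stand-in for a real video frame
with torch.no_grad():
    cond = clip.get_image_features(**processor(images=frame, return_tensors="pt"))  # (1, 512)

# 2. A toy conditional denoiser over mel spectrograms (80 bins x 64 frames, flattened).
class Denoiser(nn.Module):
    def __init__(self, spec_dim=80 * 64, cond_dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(spec_dim + cond_dim + 1, 1024), nn.SiLU(),
                                 nn.Linear(1024, spec_dim))
    def forward(self, x_t, cond, t):
        # Predict the noise added to x_t, given the frame embedding and timestep t.
        return self.net(torch.cat([x_t, cond, t], dim=-1))

denoiser = Denoiser()
x0 = torch.randn(1, 80 * 64)                                # stand-in target spectrogram
t = torch.rand(1, 1)                                        # continuous timestep in [0, 1)
alpha_bar = torch.cos(t * torch.pi / 2) ** 2                # simple cosine noise schedule
eps = torch.randn_like(x0)
x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * eps  # noised spectrogram
loss = nn.functional.mse_loss(denoiser(x_t, cond, t), eps)  # standard noise-prediction objective
loss.backward()
```

At inference, such a model would be sampled iteratively from noise with the CLIP embedding held fixed; because CLIP places images and text in a shared embedding space, a text embedding can stand in for the frame embedding, which is how the visual modality can serve as a bridge to text-to-audio synthesis.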
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.