On Prosody Modeling for ASR+TTS based Voice Conversion
- URL: http://arxiv.org/abs/2107.09477v1
- Date: Tue, 20 Jul 2021 13:30:23 GMT
- Title: On Prosody Modeling for ASR+TTS based Voice Conversion
- Authors: Wen-Chin Huang, Tomoki Hayashi, Xinjian Li, Shinji Watanabe, Tomoki
Toda
- Abstract summary: In voice conversion, an approach showing promising results in the latest voice conversion challenge (VCC) 2020 is to first use an automatic speech recognition (ASR) model to transcribe the source speech into the underlying linguistic contents.
Such a paradigm, referred to as ASR+TTS, overlooks the modeling of prosody, which plays an important role in speech naturalness and conversion similarity.
We propose to directly predict prosody from the linguistic representation in a target-speaker-dependent manner, referred to as target text prediction (TTP).
- Score: 82.65378387724641
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In voice conversion (VC), an approach showing promising results in the latest
voice conversion challenge (VCC) 2020 is to first use an automatic speech
recognition (ASR) model to transcribe the source speech into the underlying
linguistic contents; these are then used as input by a text-to-speech (TTS)
system to generate the converted speech. Such a paradigm, referred to as
ASR+TTS, overlooks the modeling of prosody, which plays an important role in
speech naturalness and conversion similarity. Although some researchers have
considered transferring prosodic cues from the source speech, this introduces a
speaker mismatch between training and conversion. To address this issue, in this
work, we propose to directly predict prosody from the linguistic representation
in a target-speaker-dependent manner, referred to as target text prediction
(TTP). We evaluate both methods on the VCC2020 benchmark and consider different
linguistic representations. The results demonstrate the effectiveness of TTP in
both objective and subjective evaluations.
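To make the TTP idea concrete, below is a minimal PyTorch-style sketch of the cascade: the ASR stage produces a linguistic representation, a target-speaker-dependent predictor infers prosody from it, and the TTS stage consumes both. All class and method names (ProsodyPredictor, transcribe, synthesize) are illustrative assumptions, not the paper's implementation.

```python
# Sketch of ASR+TTS voice conversion with target text prediction (TTP) of prosody.
# Model APIs here are hypothetical; the paper builds on seq2seq ASR/TTS systems.
import torch
import torch.nn as nn

class ProsodyPredictor(nn.Module):
    """Predicts prosody (e.g., per-token pitch/energy) from linguistic features.

    Trained only on target-speaker data, so predicted prosody matches the target
    speaker instead of being copied from the source utterance.
    """
    def __init__(self, linguistic_dim: int, prosody_dim: int = 2, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(linguistic_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, prosody_dim)

    def forward(self, linguistic_feats: torch.Tensor) -> torch.Tensor:
        # linguistic_feats: (batch, num_tokens, linguistic_dim)
        h, _ = self.rnn(linguistic_feats)
        return self.proj(h)  # (batch, num_tokens, prosody_dim)

def convert(source_wav, asr_model, prosody_predictor, tts_model):
    """ASR+TTS conversion with TTP-style prosody prediction (hypothetical APIs)."""
    # 1) Transcribe the source speech into a linguistic representation
    #    (characters, phonemes, or ASR bottleneck features).
    linguistic = asr_model.transcribe(source_wav)        # hypothetical API
    # 2) Predict target-speaker prosody directly from that representation,
    #    avoiding the source/target speaker mismatch of prosody transfer.
    prosody = prosody_predictor(linguistic)
    # 3) Synthesize the converted speech from text plus predicted prosody.
    return tts_model.synthesize(linguistic, prosody)     # hypothetical API
```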
Related papers
- Learning Speech Representation From Contrastive Token-Acoustic
Pretraining [57.08426714676043]
We propose "Contrastive Token-Acoustic Pretraining (CTAP)", which uses two encoders to bring phoneme and speech into a joint multimodal space.
The proposed CTAP model is trained on 210k speech and phoneme pairs, achieving minimally-supervised TTS, VC, and ASR.
arXiv Detail & Related papers (2023-09-01T12:35:43Z)
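The CTAP summary describes two encoders mapping phonemes and speech into a joint multimodal space. One common way to realize such a space is a CLIP-style symmetric contrastive objective over paired embeddings; the sketch below assumes that reading, and the batch pairing and temperature value are assumptions rather than details from the paper.

```python
# Illustrative symmetric contrastive objective for paired phoneme/speech embeddings.
import torch
import torch.nn.functional as F

def contrastive_loss(phoneme_emb: torch.Tensor,
                     speech_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    # phoneme_emb, speech_emb: (batch, dim), one matched pair per row
    p = F.normalize(phoneme_emb, dim=-1)
    s = F.normalize(speech_emb, dim=-1)
    logits = p @ s.t() / temperature              # (batch, batch) similarity matrix
    targets = torch.arange(p.size(0), device=p.device)
    # Matched phoneme/speech pairs lie on the diagonal; penalize both directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```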
- Cross-lingual Text-To-Speech with Flow-based Voice Conversion for Improved Pronunciation [11.336431583289382]
This paper presents a method for end-to-end cross-lingual text-to-speech.
It aims to preserve the target language's pronunciation regardless of the original speaker's language.
arXiv Detail & Related papers (2022-10-31T12:44:53Z)
- Zero-Shot Text-to-Speech for Text-Based Insertion in Audio Narration [62.75234183218897]
We propose a one-stage context-aware framework to generate natural and coherent target speech without any training data of the speaker.
We generate the mel-spectrogram of the edited speech with a transformer-based decoder.
It outperforms a recent zero-shot TTS engine by a large margin.
arXiv Detail & Related papers (2021-09-12T04:17:53Z)
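For the text-based insertion entry above, one plausible shape for a transformer-based decoder that generates the mel-spectrogram of the edited speech is a decoder that cross-attends to the surrounding context frames while decoding from the inserted text. The module below is a hedged sketch under that assumption; layer sizes and the one-frame-per-token simplification are arbitrary choices, not the paper's design.

```python
# Context-aware insertion decoder sketch: attend to surrounding mel frames,
# decode mel frames for the newly inserted text.
import torch
import torch.nn as nn

class InsertionDecoder(nn.Module):
    def __init__(self, mel_dim: int = 80, text_dim: int = 256, d_model: int = 256):
        super().__init__()
        self.mel_in = nn.Linear(mel_dim, d_model)
        self.text_in = nn.Linear(text_dim, d_model)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=4)
        self.mel_out = nn.Linear(d_model, mel_dim)

    def forward(self, context_mel: torch.Tensor, inserted_text_emb: torch.Tensor):
        # context_mel: (batch, T_ctx, mel_dim) unedited frames around the edit point
        # inserted_text_emb: (batch, T_txt, text_dim) embeddings of the new text
        memory = self.mel_in(context_mel)
        query = self.text_in(inserted_text_emb)
        h = self.decoder(tgt=query, memory=memory)
        # For simplicity, one output frame per inserted token.
        return self.mel_out(h)
```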
- VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-shot Voice Conversion [54.29557210925752]
One-shot voice conversion can be effectively achieved by speech representation disentanglement.
We employ vector quantization (VQ) for content encoding and introduce mutual information (MI) as the correlation metric during training.
Experimental results reflect the superiority of the proposed method in learning effective disentangled speech representations.
arXiv Detail & Related papers (2021-06-18T13:50:38Z)
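The VQMIVC summary names two mechanisms: vector quantization of the content representation and a mutual-information-based correlation measure between the disentangled factors. The sketch below shows a generic VQ step with a straight-through estimator and a simple cross-covariance penalty as a stand-in for the paper's learned MI estimator; both are illustrative assumptions, not the exact losses.

```python
# Generic building blocks: nearest-neighbour VQ of content codes and a
# decorrelation penalty between content and speaker embeddings.
import torch
import torch.nn.functional as F

def vector_quantize(z: torch.Tensor, codebook: torch.Tensor):
    # z: (batch, T, dim) content-encoder outputs; codebook: (num_codes, dim)
    dist = torch.cdist(z, codebook.unsqueeze(0).expand(z.size(0), -1, -1))
    idx = dist.argmin(dim=-1)                     # (batch, T) code indices
    z_q = codebook[idx]                           # quantized content codes
    # Straight-through estimator: gradients reach the encoder as if VQ were identity.
    z_q = z + (z_q - z).detach()
    commit_loss = F.mse_loss(z, z_q.detach())
    return z_q, commit_loss

def correlation_penalty(content: torch.Tensor, speaker: torch.Tensor) -> torch.Tensor:
    """Simple stand-in for an MI-style term (the paper uses a learned estimator)."""
    c = content.mean(dim=1)                       # (batch, dim_c) pooled content
    c = (c - c.mean(0)) / (c.std(0) + 1e-6)
    s = (speaker - speaker.mean(0)) / (speaker.std(0) + 1e-6)
    cross_cov = (c.t() @ s) / c.size(0)           # (dim_c, dim_s)
    return cross_cov.pow(2).mean()
```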
- An Adaptive Learning based Generative Adversarial Network for One-To-One Voice Conversion [9.703390665821463]
We propose an adaptive learning-based GAN model called ALGAN-VC for an efficient one-to-one VC of speakers.
The model is tested on Voice Conversion Challenge (VCC) 2016, 2018, and 2020 datasets as well as on our self-prepared speech dataset.
A subjective and objective evaluation of the generated speech samples indicated that the proposed model elegantly performed the voice conversion task.
arXiv Detail & Related papers (2021-04-25T13:44:32Z)
- Learning Explicit Prosody Models and Deep Speaker Embeddings for Atypical Voice Conversion [60.808838088376675]
We propose a VC system with explicit prosodic modelling and deep speaker embedding learning.
A prosody corrector takes in phoneme embeddings to infer typical phoneme duration and pitch values.
A conversion model takes phoneme embeddings and typical prosody features as inputs to generate the converted speech.
arXiv Detail & Related papers (2020-11-03T13:08:53Z)
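A minimal sketch of the prosody-corrector idea described above, assuming per-phoneme scalar targets for duration (in frames) and pitch (log-F0); the recurrent architecture and dimensions are assumptions for illustration, not the paper's exact design.

```python
# Prosody corrector sketch: infer "typical" per-phoneme duration and pitch
# from phoneme embeddings; a downstream conversion model would take both.
import torch
import torch.nn as nn

class ProsodyCorrector(nn.Module):
    def __init__(self, phn_dim: int = 256, hidden: int = 256):
        super().__init__()
        self.encoder = nn.GRU(phn_dim, hidden, batch_first=True, bidirectional=True)
        self.duration_head = nn.Linear(2 * hidden, 1)   # typical duration (frames)
        self.pitch_head = nn.Linear(2 * hidden, 1)      # typical log-F0

    def forward(self, phoneme_emb: torch.Tensor):
        # phoneme_emb: (batch, num_phonemes, phn_dim)
        h, _ = self.encoder(phoneme_emb)
        duration = self.duration_head(h).squeeze(-1)    # (batch, num_phonemes)
        pitch = self.pitch_head(h).squeeze(-1)          # (batch, num_phonemes)
        return duration, pitch
```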
- The Sequence-to-Sequence Baseline for the Voice Conversion Challenge 2020: Cascading ASR and TTS [66.06385966689965]
This paper presents the sequence-to-sequence (seq2seq) baseline system for the voice conversion challenge (VCC) 2020.
We consider a naive approach for voice conversion (VC), which is to first transcribe the input speech with an automatic speech recognition (ASR) model.
We revisit this method under a sequence-to-sequence (seq2seq) framework by utilizing ESPnet, an open-source end-to-end speech processing toolkit.
arXiv Detail & Related papers (2020-10-06T02:27:38Z)
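Since the baseline is built on ESPnet, the naive cascade can be sketched with ESPnet2's off-the-shelf inference wrappers. The model tags below are placeholders and the snippet is a usage sketch under the assumption that the TTS model is trained on the target speaker; it is not the baseline's actual recipe.

```python
# Cascading ASR+TTS sketch with ESPnet2 inference wrappers.
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text
from espnet2.bin.tts_inference import Text2Speech

asr = Speech2Text.from_pretrained("<asr-model-tag>")            # placeholder tag
tts = Text2Speech.from_pretrained("<target-speaker-tts-tag>")   # placeholder tag

speech, rate = sf.read("source_utterance.wav")
nbests = asr(speech)                  # n-best hypotheses for the source speech
text, *_ = nbests[0]                  # take the best transcription
converted = tts(text)["wav"]          # resynthesize in the target speaker's voice
sf.write("converted.wav", converted.numpy(), tts.fs)
```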
- Transfer Learning from Monolingual ASR to Transcription-free Cross-lingual Voice Conversion [0.0]
Cross-lingual voice conversion is a task that aims to synthesize target voices with the same content while source and target speakers speak in different languages.
In this paper, we focus on knowledge transfer from monolingual ASR to cross-lingual VC.
We successfully address cross-lingual VC without any transcription or language-specific knowledge for foreign speech.
arXiv Detail & Related papers (2020-09-30T13:44:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.