SC VALL-E: Style-Controllable Zero-Shot Text to Speech Synthesizer
- URL: http://arxiv.org/abs/2307.10550v1
- Date: Thu, 20 Jul 2023 03:28:06 GMT
- Title: SC VALL-E: Style-Controllable Zero-Shot Text to Speech Synthesizer
- Authors: Daegyeom Kim, Seongho Hong, and Yong-Hoon Choi
- Abstract summary: Expressive speech synthesis models are trained by adding corpora with diverse speakers, various emotions, and different speaking styles to the dataset.
In this paper, we propose a style control (SC) VALL-E model based on the neural codec language model (called VALL-E).
The proposed SC VALL-E takes input from text sentences and prompt audio and is designed to generate controllable speech.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Expressive speech synthesis models are trained by adding corpora with diverse
speakers, various emotions, and different speaking styles to the dataset, in
order to control various characteristics of speech and generate the desired
voice. In this paper, we propose a style control (SC) VALL-E model based on the
neural codec language model (called VALL-E), which follows the structure of the
generative pretrained transformer 3 (GPT-3). The proposed SC VALL-E takes input
from text sentences and prompt audio and is designed to generate controllable
speech by not simply mimicking the characteristics of the prompt audio but by
controlling the attributes to produce diverse voices. We identify tokens in the
style embedding matrix of the newly designed style network that represent
attributes such as emotion, speaking rate, pitch, and voice intensity, and
design a model that can control these attributes. To evaluate the performance
of SC VALL-E, we conduct comparative experiments with three representative
expressive speech synthesis models: global style token (GST) Tacotron2,
variational autoencoder (VAE) Tacotron2, and original VALL-E. We measure word
error rate (WER), F0 voiced error (FVE), and F0 gross pitch error (F0GPE) as
evaluation metrics to assess the accuracy of the generated sentences. To compare
the quality of the synthesized speech, we measure the comparative mean opinion score
(CMOS) and similarity mean opinion score (SMOS). To evaluate the style control
ability of the generated speech, we observe the changes in F0 and
mel-spectrogram by modifying the trained tokens. When using prompt audio that
is not present in the training data, SC VALL-E generates a variety of
expressive sounds and demonstrates competitive performance compared to the
existing models. Our implementation, pretrained models, and audio samples are
located on GitHub.
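As a rough illustration of the token-based control described in the abstract, the sketch below shows how a GST-style token layer can attend over a prompt-audio embedding and how individual token weights can be rescaled at inference to nudge attributes such as pitch or speaking rate. All class, parameter, and dimension names are hypothetical and are not taken from the released SC VALL-E implementation; the actual style network may be structured differently.

```python
# Hypothetical sketch of a style-token layer with manual token rescaling.
# Names and dimensions are illustrative, not from the SC VALL-E repository.
from typing import Optional

import torch
import torch.nn as nn
import torch.nn.functional as F


class StyleTokenLayer(nn.Module):
    def __init__(self, n_tokens: int = 10, token_dim: int = 256, ref_dim: int = 256):
        super().__init__()
        # Learned style embedding matrix: one row per style token.
        self.tokens = nn.Parameter(torch.randn(n_tokens, token_dim))
        self.query = nn.Linear(ref_dim, token_dim)

    def forward(self, ref_embedding: torch.Tensor,
                token_scales: Optional[torch.Tensor] = None) -> torch.Tensor:
        # ref_embedding: (batch, ref_dim) summary vector of the prompt audio.
        q = self.query(ref_embedding)                    # (batch, token_dim)
        attn = F.softmax(q @ self.tokens.T, dim=-1)      # (batch, n_tokens)
        if token_scales is not None:
            # Boost or attenuate selected tokens (e.g. one associated with
            # speaking rate) and renormalize the weights.
            attn = attn * token_scales
            attn = attn / attn.sum(dim=-1, keepdim=True)
        return attn @ self.tokens                        # (batch, token_dim)


# Example: amplify token 3 (assumed here to correlate with pitch) at inference.
layer = StyleTokenLayer()
ref = torch.randn(1, 256)
scales = torch.ones(1, 10)
scales[0, 3] = 2.0
style_embedding = layer(ref, token_scales=scales)
```

Renormalizing the rescaled weights keeps the mixed style embedding at a comparable magnitude, so modifying one token mainly shifts the targeted attribute rather than the overall scale of the embedding.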
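The objective pitch metrics mentioned above can be computed directly from F0 contours. Below is a minimal sketch of a gross pitch error computation under a commonly used convention (errors counted on frames voiced in both contours, with a 20% relative threshold); the exact definition and threshold used in the paper may differ.

```python
# Minimal sketch of F0 gross pitch error (GPE) under an assumed convention:
# among frames voiced in both contours, the fraction whose F0 deviates from
# the reference by more than 20%. Unvoiced frames are marked with 0.
import numpy as np


def gross_pitch_error(f0_ref: np.ndarray, f0_syn: np.ndarray,
                      rel_threshold: float = 0.2) -> float:
    voiced = (f0_ref > 0) & (f0_syn > 0)          # frames voiced in both contours
    if not voiced.any():
        return 0.0
    rel_err = np.abs(f0_syn[voiced] - f0_ref[voiced]) / f0_ref[voiced]
    return float(np.mean(rel_err > rel_threshold))


# Toy example: one of three co-voiced frames exceeds the 20% threshold.
ref = np.array([0.0, 200.0, 210.0, 0.0, 190.0])
syn = np.array([0.0, 198.0, 260.0, 0.0, 191.0])
print(gross_pitch_error(ref, syn))  # -> 0.333...
```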
Related papers
- CosyVoice 3: Towards In-the-wild Speech Generation via Scaling-up and Post-training [70.31925012315064]
We present CosyVoice 3, an improved model designed for zero-shot multilingual speech synthesis in the wild. Key features of CosyVoice 3 include a novel speech tokenizer to improve prosody naturalness. Data is expanded from ten thousand hours to one million hours, encompassing 9 languages and 18 Chinese dialects.
arXiv Detail & Related papers (2025-05-23T07:55:21Z)
- Takin: A Cohort of Superior Quality Zero-shot Speech Generation Models [13.420522975106536]
Takin AudioLLM is a series of techniques and models, mainly including Takin TTS, Takin VC, and Takin Morphing, specifically designed for audiobook production.
These models are capable of zero-shot speech production, generating high-quality speech that is nearly indistinguishable from real human speech.
arXiv Detail & Related papers (2024-09-18T17:03:12Z)
- Multi-modal Adversarial Training for Zero-Shot Voice Cloning [9.823246184635103]
We propose a Transformer encoder-decoder architecture to conditionally discriminate between real and generated speech features.
We introduce our novel adversarial training technique by applying it to a FastSpeech2 acoustic model and training on Libriheavy, a large multi-speaker dataset.
Our model achieves improvements over the baseline in terms of speech quality and speaker similarity.
arXiv Detail & Related papers (2024-08-28T16:30:41Z)
- CosyVoice: A Scalable Multilingual Zero-shot Text-to-speech Synthesizer based on Supervised Semantic Tokens [49.569695524535454]
We propose to represent speech with supervised semantic tokens, which are derived from a multilingual speech recognition model by inserting vector quantization into the encoder.
Based on the tokens, we further propose a scalable zero-shot TTS synthesizer, CosyVoice, which consists of an LLM for text-to-token generation and a conditional flow matching model for token-to-speech synthesis.
arXiv Detail & Related papers (2024-07-07T15:16:19Z)
- Prompt-Singer: Controllable Singing-Voice-Synthesis with Natural Language Prompt [50.25271407721519]
We propose Prompt-Singer, the first SVS method that enables control over singer gender, vocal range, and volume through natural language.
We adopt a model architecture based on a decoder-only transformer with a multi-scale hierarchy, and design a range-melody decoupled pitch representation.
Experiments show that our model achieves favorable controlling ability and audio quality.
arXiv Detail & Related papers (2024-03-18T13:39:05Z)
- Natural language guidance of high-fidelity text-to-speech with synthetic annotations [13.642358232817342]
We propose a scalable method for labeling various aspects of speaker identity, style, and recording conditions.
We then apply this method to a 45k hour dataset, which we use to train a speech language model.
Our results demonstrate high-fidelity speech generation in a diverse range of accents, prosodic styles, channel conditions, and acoustic conditions.
arXiv Detail & Related papers (2024-02-02T21:29:34Z)
- TextrolSpeech: A Text Style Control Speech Corpus With Codec Language Text-to-Speech Models [51.529485094900934]
We propose TextrolSpeech, which is the first large-scale speech emotion dataset annotated with rich text attributes.
We introduce a multi-stage prompt programming approach that effectively utilizes the GPT model for generating natural style descriptions in large volumes.
To address the need for generating audio with greater style diversity, we propose an efficient architecture called Salle.
arXiv Detail & Related papers (2023-08-28T09:06:32Z)
- Mega-TTS: Zero-Shot Text-to-Speech at Scale with Intrinsic Inductive Bias [71.94109664001952]
Mega-TTS is a novel zero-shot TTS system that is trained with large-scale wild data.
We show that Mega-TTS surpasses state-of-the-art TTS systems on zero-shot TTS speech editing, and cross-lingual TTS tasks.
arXiv Detail & Related papers (2023-06-06T08:54:49Z)
- Unsupervised TTS Acoustic Modeling for TTS with Conditional Disentangled Sequential VAE [36.50265124324876]
We propose a novel unsupervised text-to-speech acoustic model training scheme, named UTTS, which does not require text-audio pairs.
The framework offers a flexible choice of a speaker's duration model, timbre feature (identity) and content for TTS inference.
Experiments demonstrate that UTTS can synthesize speech of high naturalness and intelligibility measured by human and objective evaluations.
arXiv Detail & Related papers (2022-06-06T11:51:22Z)
- GenerSpeech: Towards Style Transfer for Generalizable Out-Of-Domain Text-to-Speech Synthesis [68.42632589736881]
This paper proposes GenerSpeech, a text-to-speech model towards high-fidelity zero-shot style transfer of OOD custom voice.
GenerSpeech decomposes the speech variation into the style-agnostic and style-specific parts by introducing two components.
Our evaluations on zero-shot style transfer demonstrate that GenerSpeech surpasses the state-of-the-art models in terms of audio quality and style similarity.
arXiv Detail & Related papers (2022-05-15T08:16:02Z)
- Meta-StyleSpeech: Multi-Speaker Adaptive Text-to-Speech Generation [63.561944239071615]
StyleSpeech is a new TTS model which synthesizes high-quality speech and adapts to new speakers.
With SALN, our model effectively synthesizes speech in the style of the target speaker even from a single speech sample.
We extend it to Meta-StyleSpeech by introducing two discriminators trained with style prototypes, and performing episodic training.
arXiv Detail & Related papers (2021-06-06T15:34:11Z)