Enhancing the vocal range of single-speaker singing voice synthesis with
melody-unsupervised pre-training
- URL: http://arxiv.org/abs/2309.00284v1
- Date: Fri, 1 Sep 2023 06:40:41 GMT
- Title: Enhancing the vocal range of single-speaker singing voice synthesis with
melody-unsupervised pre-training
- Authors: Shaohuan Zhou, Xu Li, Zhiyong Wu, Ying Shan, Helen Meng
- Abstract summary: This work proposes a melody-unsupervised multi-speaker pre-training method to enhance the vocal range of a single-speaker SVS system.
It is the first to introduce a differentiable duration regulator to improve the rhythm naturalness of the synthesized voice.
Experimental results verify that the proposed SVS system outperforms the baseline on both sound quality and naturalness.
- Score: 82.94349771571642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single-speaker singing voice synthesis (SVS) usually underperforms at
pitch values that fall outside the singer's vocal range or are associated with
limited training samples. Building on our previous work, this work proposes a
melody-unsupervised multi-speaker pre-training method, conducted on a
multi-singer dataset, to enhance the single speaker's vocal range without
degrading timbre similarity. The pre-training method can be deployed on a
large-scale multi-singer dataset that contains only audio-and-lyrics pairs,
with no phonemic timing information or pitch annotation. Specifically, in the
pre-training step, we design a phoneme predictor that produces frame-level
phoneme probability vectors as the phonemic timing information and a speaker
encoder that models the timbre variations of different singers, and we
estimate frame-level f0 values directly from the audio to provide the pitch
information. The pre-trained model parameters are carried into the fine-tuning
step as prior knowledge to enhance the single speaker's vocal range. Moreover,
this work also improves the sound quality and rhythm naturalness of the
synthesized singing voices: it is the first to introduce a differentiable
duration regulator to improve rhythm naturalness, and a bi-directional flow
model to improve sound quality. Experimental results verify that the proposed
SVS system outperforms the baseline in both sound quality and naturalness.
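The abstract does not spell out how the differentiable duration regulator works. A standard hard length regulator (repeating each phoneme state an integer number of frames) is not differentiable with respect to the predicted durations; one well-known differentiable alternative is Gaussian upsampling, as used in Non-Attentive Tacotron. The sketch below illustrates that general idea only, not this paper's specific regulator; the function name and the `sigma` parameter are chosen for illustration.

```python
import numpy as np

def gaussian_upsample(phoneme_states, durations, sigma=1.0):
    """Differentiable length regulation via Gaussian upsampling.

    phoneme_states: (num_phonemes, dim) phoneme-level hidden states
    durations: (num_phonemes,) predicted durations in frames (may be fractional)
    Returns (total_frames, dim) frame-level states. Each output frame is a
    soft mixture of all phoneme states, so gradients can flow back through
    the duration values, unlike hard integer repetition.
    """
    ends = np.cumsum(durations)            # cumulative end time of each phoneme
    centers = ends - durations / 2.0       # midpoint of each phoneme's span
    total = int(np.round(ends[-1]))        # total number of output frames
    t = np.arange(total) + 0.5             # frame-center times
    # Soft assignment: weight of phoneme j for frame i falls off with the
    # squared distance between the frame time and the phoneme midpoint.
    logits = -((t[:, None] - centers[None, :]) ** 2) / (2.0 * sigma ** 2)
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)  # normalize over phonemes
    return weights @ phoneme_states
```

With identity phoneme states and durations of two frames each, the output has one row per frame, each row a normalized mixture dominated by the nearest phoneme.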
Related papers
- Prompt-Singer: Controllable Singing-Voice-Synthesis with Natural Language Prompt [50.25271407721519]
We propose Prompt-Singer, the first SVS method that enables attribute control over singer gender, vocal range and volume with natural language.
We adopt a model architecture based on a decoder-only transformer with a multi-scale hierarchy, and design a range-melody decoupled pitch representation.
Experiments show that our model achieves favorable controlling ability and audio quality.
arXiv Detail & Related papers (2024-03-18T13:39:05Z) - StyleSinger: Style Transfer for Out-of-Domain Singing Voice Synthesis [63.18764165357298]
Style transfer for out-of-domain singing voice synthesis (SVS) focuses on generating high-quality singing voices with unseen styles.
StyleSinger is the first singing voice synthesis model for zero-shot style transfer of out-of-domain reference singing voice samples.
Our evaluations in zero-shot style transfer show that StyleSinger outperforms baseline models in both audio quality and similarity to the reference singing voice samples.
arXiv Detail & Related papers (2023-12-17T15:26:16Z) - Make-A-Voice: Unified Voice Synthesis With Discrete Representation [77.3998611565557]
Make-A-Voice is a unified framework for synthesizing and manipulating voice signals from discrete representations.
We show that Make-A-Voice exhibits superior audio quality and style similarity compared with competitive baseline models.
arXiv Detail & Related papers (2023-05-30T17:59:26Z) - Karaoker: Alignment-free singing voice synthesis with speech training
data [3.9795908407245055]
Karaoker is a multispeaker Tacotron-based model conditioned on voice characteristic features.
The model is jointly conditioned with a single deep convolutional encoder on continuous data.
We extend the text-to-speech training objective with feature reconstruction, classification and speaker identification tasks.
arXiv Detail & Related papers (2022-04-08T15:33:59Z) - Rapping-Singing Voice Synthesis based on Phoneme-level Prosody Control [47.33830090185952]
A text-to-rapping/singing system is introduced, which can be adapted to any speaker's voice.
It utilizes a Tacotron-based multispeaker acoustic model trained on read speech data.
Results show that the proposed approach can produce high-quality rapping/singing voices with increased naturalness.
arXiv Detail & Related papers (2021-11-17T14:31:55Z) - A Melody-Unsupervision Model for Singing Voice Synthesis [9.137554315375919]
We propose a melody-unsupervision model that requires only audio-and-lyrics pairs without temporal alignment in training time.
We show that the proposed model is capable of being trained with speech audio and text labels but can generate singing voice in inference time.
arXiv Detail & Related papers (2021-10-13T07:42:35Z) - Unsupervised Cross-Domain Singing Voice Conversion [105.1021715879586]
We present a wav-to-wav generative model for the task of singing voice conversion from any identity.
Our method utilizes both an acoustic model, trained for the task of automatic speech recognition, together with melody extracted features to drive a waveform-based generator.
arXiv Detail & Related papers (2020-08-06T18:29:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.