Low-Resource Cross-Domain Singing Voice Synthesis via Reduced
Self-Supervised Speech Representations
- URL: http://arxiv.org/abs/2402.01520v1
- Date: Fri, 2 Feb 2024 16:06:24 GMT
- Title: Low-Resource Cross-Domain Singing Voice Synthesis via Reduced
Self-Supervised Speech Representations
- Authors: Panos Kakoulidis, Nikolaos Ellinas, Georgios Vamvoukakis, Myrsini
Christidou, Alexandra Vioni, Georgia Maniati, Junkwang Oh, Gunu Jho, Inchul
Hwang, Pirros Tsiakoulis, Aimilios Chalamandaris
- Abstract summary: Karaoker-SSL is a singing voice synthesis model that is trained only on text and speech data.
It does not utilize any singing data end-to-end, since its vocoder is also trained on speech data.
- Score: 41.410556997285326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a singing voice synthesis model, Karaoker-SSL, that
is trained only on text and speech data as a typical multi-speaker acoustic
model. It is a low-resource pipeline that does not utilize any singing data
end-to-end, since its vocoder is also trained on speech data. Karaoker-SSL is
conditioned on self-supervised speech representations in an unsupervised
manner. We preprocess these representations by selecting only a subset of their
task-correlated dimensions. The conditioning module is indirectly guided to
capture style information during training by multi-tasking. This is achieved
with a Conformer-based module, which predicts the pitch from the acoustic
model's output. Thus, Karaoker-SSL allows singing voice synthesis without
reliance on hand-crafted and domain-specific features. There are also no
requirements for text alignments or lyrics timestamps. To refine the voice
quality, we employ a U-Net discriminator that is conditioned on the target
speaker and follows a Diffusion GAN training scheme.
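The abstract does not spell out how the task-correlated subset of SSL dimensions is selected. As a minimal sketch of one plausible reduction, assuming dimensions are ranked by absolute Pearson correlation against a frame-level task signal such as extracted F0 (the function name, `top_k` value, and shapes below are illustrative, not taken from the paper):

```python
import numpy as np

def select_task_correlated_dims(ssl_feats: np.ndarray,
                                task_signal: np.ndarray,
                                top_k: int = 64) -> np.ndarray:
    """Rank SSL feature dimensions by |Pearson correlation| with a
    frame-level task signal and return the indices of the top_k.

    ssl_feats:   (frames, dims) self-supervised representations.
    task_signal: (frames,) target such as extracted F0.
    """
    x = ssl_feats - ssl_feats.mean(axis=0)
    y = task_signal - task_signal.mean()
    corr = (x * y[:, None]).sum(axis=0) / (
        np.linalg.norm(x, axis=0) * np.linalg.norm(y) + 1e-8)
    return np.argsort(-np.abs(corr))[:top_k]

# Usage: reduce 768-dim features to their 64 most pitch-correlated dims.
feats = np.random.randn(1000, 768)   # stand-in for real SSL features
f0 = np.abs(np.random.randn(1000))   # stand-in for frame-level pitch
reduced = feats[:, select_task_correlated_dims(feats, f0)]  # (1000, 64)
```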
Related papers
- MakeSinger: A Semi-Supervised Training Method for Data-Efficient Singing Voice Synthesis via Classifier-free Diffusion Guidance [14.22941848955693]
MakeSinger is a semi-supervised training method for singing voice synthesis.
Our novel dual guiding mechanism provides text and pitch guidance at each reverse diffusion step (see the sketch after this entry).
We demonstrate that by adding Text-to-Speech (TTS) data in training, the model can synthesize the singing voices of TTS speakers even without their singing voices.
arXiv Detail & Related papers (2024-06-10T01:47:52Z)
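For context, classifier-free guidance combines conditional and unconditional denoiser predictions at each reverse step. A minimal sketch of a dual (text + pitch) variant, assuming a noise-prediction function `eps_fn` and hypothetical guidance weights; this shows the generic technique, not necessarily MakeSinger's exact formulation:

```python
def dual_guided_eps(eps_fn, x_t, t, text_cond, pitch_cond,
                    w_text=2.0, w_pitch=2.0):
    """Dual classifier-free guidance at one reverse diffusion step.

    eps_fn(x_t, t, text, pitch) predicts noise; passing None for a
    condition selects the unconditional branch (hypothetical API).
    """
    eps_uncond = eps_fn(x_t, t, None, None)
    eps_text = eps_fn(x_t, t, text_cond, None)
    eps_pitch = eps_fn(x_t, t, None, pitch_cond)
    # Steer the prediction toward each condition with its own weight.
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)
            + w_pitch * (eps_pitch - eps_uncond))
```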
- Prompt-Singer: Controllable Singing-Voice-Synthesis with Natural Language Prompt [50.25271407721519]
We propose Prompt-Singer, the first SVS method that enables attribute control over singer gender, vocal range and volume with natural language.
We adopt a model architecture based on a decoder-only transformer with a multi-scale hierarchy, and design a range-melody decoupled pitch representation.
Experiments show that our model achieves favorable controlling ability and audio quality.
arXiv Detail & Related papers (2024-03-18T13:39:05Z)
- Enhancing the vocal range of single-speaker singing voice synthesis with melody-unsupervised pre-training [82.94349771571642]
This work proposes a melody-unsupervised multi-speaker pre-training method to enhance the vocal range of a single-speaker SVS system.
It is the first to introduce a differentiable duration regulator to improve the rhythm naturalness of the synthesized voice.
Experimental results verify that the proposed SVS system outperforms the baseline on both sound quality and naturalness.
arXiv Detail & Related papers (2023-09-01T06:40:41Z)
- Karaoker: Alignment-free singing voice synthesis with speech training data [3.9795908407245055]
Karaoker is a multispeaker Tacotron-based model conditioned on voice characteristic features.
The model is jointly conditioned with a single deep convolutional encoder on continuous data.
We extend the text-to-speech training objective with feature reconstruction, classification and speaker identification tasks (see the sketch after this entry).
arXiv Detail & Related papers (2022-04-08T15:33:59Z)
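The extended objective named above can be read as a weighted multi-task loss. A minimal PyTorch-style sketch, with hypothetical weights and tensor names; the paper does not give this exact form:

```python
import torch.nn.functional as F

def karaoker_style_loss(mel_pred, mel_tgt,      # acoustic output / target
                        feat_pred, feat_tgt,    # reconstructed features
                        cls_logits, cls_tgt,    # feature classification head
                        spk_logits, spk_tgt,    # speaker identification head
                        w=(1.0, 0.5, 0.1, 0.1)):
    """Weighted sum of the four objectives named in the blurb
    (weights and head names are hypothetical)."""
    l_tts = F.l1_loss(mel_pred, mel_tgt)        # base text-to-speech loss
    l_feat = F.mse_loss(feat_pred, feat_tgt)    # feature reconstruction
    l_cls = F.cross_entropy(cls_logits, cls_tgt)
    l_spk = F.cross_entropy(spk_logits, spk_tgt)
    return w[0] * l_tts + w[1] * l_feat + w[2] * l_cls + w[3] * l_spk
```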
- Rapping-Singing Voice Synthesis based on Phoneme-level Prosody Control [47.33830090185952]
A text-to-rapping/singing system is introduced, which can be adapted to any speaker's voice.
It utilizes a Tacotron-based multispeaker acoustic model trained on read-only speech data.
Results show that the proposed approach can produce high quality rapping/singing voice with increased naturalness.
arXiv Detail & Related papers (2021-11-17T14:31:55Z)
- A Melody-Unsupervision Model for Singing Voice Synthesis [9.137554315375919]
We propose a melody-unsupervision model that requires only audio-and-lyrics pairs, without temporal alignment, at training time.
We show that the proposed model can be trained on speech audio and text labels, yet generate singing voice at inference time.
arXiv Detail & Related papers (2021-10-13T07:42:35Z)
- Unsupervised Cross-Domain Singing Voice Conversion [105.1021715879586]
We present a wav-to-wav generative model for the task of singing voice conversion from any identity.
Our method uses an acoustic model trained for automatic speech recognition, together with extracted melody features, to drive a waveform-based generator.
arXiv Detail & Related papers (2020-08-06T18:29:11Z)
- Audio ALBERT: A Lite BERT for Self-supervised Learning of Audio Representation [51.37980448183019]
We propose Audio ALBERT, a lite version of the self-supervised speech representation model.
We show that Audio ALBERT achieves performance competitive with much larger models on downstream tasks.
In probing experiments, we find that intermediate latent representations encode richer phoneme and speaker information than the last layer (see the sketch after this entry).
arXiv Detail & Related papers (2020-05-18T10:42:44Z)
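Probing of this kind typically trains a small classifier on frozen per-layer features and compares accuracies across layers. A minimal sketch, with illustrative shapes and stand-in data rather than the paper's actual setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def probe_layer(layer_feats: np.ndarray, labels: np.ndarray) -> float:
    """Fit a linear probe on frozen features; return held-out accuracy."""
    split = int(0.8 * len(labels))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(layer_feats[:split], labels[:split])
    return clf.score(layer_feats[split:], labels[split:])

# Compare an intermediate layer against the last layer on speaker ID.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=500)        # stand-in speaker labels
mid_feats = rng.standard_normal((500, 768))   # stand-in mid-layer features
last_feats = rng.standard_normal((500, 768))  # stand-in last-layer features
print(probe_layer(mid_feats, labels), probe_layer(last_feats, labels))
```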