A Melody-Unsupervision Model for Singing Voice Synthesis
- URL: http://arxiv.org/abs/2110.06546v1
- Date: Wed, 13 Oct 2021 07:42:35 GMT
- Title: A Melody-Unsupervision Model for Singing Voice Synthesis
- Authors: Soonbeom Choi and Juhan Nam
- Abstract summary: We propose a melody-unsupervision model that requires only audio-and-lyrics pairs without temporal alignment in training time.
We show that the proposed model is capable of being trained with speech audio and text labels but can generate singing voice in inference time.
- Score: 9.137554315375919
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent studies in singing voice synthesis have achieved high-quality results
leveraging advances in text-to-speech models based on deep neural networks. One
of the main issues in training singing voice synthesis models is that they
require melody and lyric labels to be temporally aligned with audio data. The
temporal alignment is time-consuming manual work in preparing the
training data. To address this issue, we propose a melody-unsupervision model
that requires only audio-and-lyrics pairs without temporal alignment in
training time but generates singing voice audio given a melody and lyrics input
in inference time. The proposed model is composed of a phoneme classifier and a
singing voice generator jointly trained in an end-to-end manner. The model can
be fine-tuned by adjusting the amount of supervision with temporally aligned
melody labels. Through experiments in melody-unsupervision and semi-supervision
settings, we compare the audio quality of synthesized singing voice. We also
show that the proposed model is capable of being trained with speech audio and
text labels but can generate singing voice in inference time.
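The abstract notes that the model can be fine-tuned by adjusting the amount of supervision from temporally aligned melody labels. A minimal, hypothetical sketch of such a semi-supervised objective is shown below; the function name, the loss terms, and the linear weighting are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a semi-supervised training objective: an
# alignment-free lyrics term is always active, while a supervised melody
# term is scaled by the fraction of data carrying aligned melody labels.
def semi_supervised_loss(lyrics_loss, melody_loss, supervision_ratio):
    """supervision_ratio: 0.0 = melody-unsupervised, 1.0 = fully supervised."""
    if not 0.0 <= supervision_ratio <= 1.0:
        raise ValueError("supervision_ratio must be in [0, 1]")
    return lyrics_loss + supervision_ratio * melody_loss
```

With `supervision_ratio = 0.0` this reduces to the purely melody-unsupervised setting; raising the ratio interpolates toward full supervision.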
Related papers
- Accompanied Singing Voice Synthesis with Fully Text-controlled Melody [61.147446955297625]
Text-to-song (TTSong) is a music generation task that synthesizes accompanied singing voices.
We present MelodyLM, the first TTSong model that generates high-quality song pieces with fully text-controlled melodies.
arXiv Detail & Related papers (2024-07-02T08:23:38Z)
- Prompt-Singer: Controllable Singing-Voice-Synthesis with Natural Language Prompt [50.25271407721519]
We propose Prompt-Singer, the first SVS method that enables attribute controlling on singer gender, vocal range and volume with natural language.
We adopt a model architecture based on a decoder-only transformer with a multi-scale hierarchy, and design a range-melody decoupled pitch representation.
Experiments show that our model achieves favorable controlling ability and audio quality.
arXiv Detail & Related papers (2024-03-18T13:39:05Z)
- Enhancing the vocal range of single-speaker singing voice synthesis with melody-unsupervised pre-training [82.94349771571642]
This work proposes a melody-unsupervised multi-speaker pre-training method to enhance the vocal range of the single-speaker.
It is the first to introduce a differentiable duration regulator to improve the rhythm naturalness of the synthesized voice.
Experimental results verify that the proposed SVS system outperforms the baseline on both sound quality and naturalness.
arXiv Detail & Related papers (2023-09-01T06:40:41Z)
- Make-A-Voice: Unified Voice Synthesis With Discrete Representation [77.3998611565557]
Make-A-Voice is a unified framework for synthesizing and manipulating voice signals from discrete representations.
We show that Make-A-Voice exhibits superior audio quality and style similarity compared with competitive baseline models.
arXiv Detail & Related papers (2023-05-30T17:59:26Z)
- Karaoker: Alignment-free singing voice synthesis with speech training data [3.9795908407245055]
Karaoker is a multispeaker Tacotron-based model conditioned on voice characteristic features.
The model is jointly conditioned with a single deep convolutional encoder on continuous data.
We extend the text-to-speech training objective with feature reconstruction, classification and speaker identification tasks.
arXiv Detail & Related papers (2022-04-08T15:33:59Z)
- VocaLiST: An Audio-Visual Synchronisation Model for Lips and Voices [4.167459103689587]
We address the problem of lip-voice synchronisation in videos containing human face and voice.
Our approach is based on determining if the lips motion and the voice in a video are synchronised or not.
We propose an audio-visual cross-modal transformer-based model that outperforms several baseline models.
arXiv Detail & Related papers (2022-04-05T10:02:39Z)
- Learning the Beauty in Songs: Neural Singing Voice Beautifier [69.21263011242907]
We are interested in a novel task, singing voice beautifying (SVB).
Given the singing voice of an amateur singer, SVB aims to improve the intonation and vocal tone of the voice, while keeping the content and vocal timbre.
We introduce Neural Singing Voice Beautifier (NSVB), the first generative model to solve the SVB task.
arXiv Detail & Related papers (2022-02-27T03:10:12Z)
- An Empirical Study on End-to-End Singing Voice Synthesis with Encoder-Decoder Architectures [11.440111473570196]
We use encoder-decoder neural models and a number of vocoders to achieve singing voice synthesis.
We conduct experiments to demonstrate that the models can be trained using voice data with pitch information, lyrics and beat information.
arXiv Detail & Related papers (2021-08-06T08:51:16Z)
- Unsupervised Cross-Domain Singing Voice Conversion [105.1021715879586]
We present a wav-to-wav generative model for the task of singing voice conversion from any identity.
Our method utilizes both an acoustic model, trained for the task of automatic speech recognition, together with melody extracted features to drive a waveform-based generator.
arXiv Detail & Related papers (2020-08-06T18:29:11Z)
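The last entry's conditioning scheme, ASR-derived linguistic features combined with extracted melody features to drive a waveform generator, can be illustrated with a toy frame-level concatenation. All names and shapes below are hypothetical; the summary does not specify the actual feature dimensions or extraction pipeline.

```python
# Toy sketch of frame-level conditioning for a wav-to-wav converter:
# each frame pairs ASR-derived linguistic features with a melody (f0)
# value, and the concatenated vectors would condition the generator.
def conditioning_frames(asr_features, f0_track):
    if len(asr_features) != len(f0_track):
        raise ValueError("feature streams must be frame-aligned")
    return [list(frame) + [f0] for frame, f0 in zip(asr_features, f0_track)]
```

Because the linguistic stream carries no pitch, swapping in a different f0 track changes the melody while preserving the sung content, which is the essence of the conversion setup described above.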
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.