Towards Improving the Expressiveness of Singing Voice Synthesis with
BERT Derived Semantic Information
- URL: http://arxiv.org/abs/2308.16836v1
- Date: Thu, 31 Aug 2023 16:12:01 GMT
- Title: Towards Improving the Expressiveness of Singing Voice Synthesis with
BERT Derived Semantic Information
- Authors: Shaohuan Zhou, Shun Lei, Weiya You, Deyi Tuo, Yuren You, Zhiyong Wu,
Shiyin Kang, Helen Meng
- Abstract summary: This paper presents an end-to-end high-quality singing voice synthesis (SVS) system that uses bidirectional encoder representation from Transformers (BERT) derived semantic embeddings.
The proposed SVS system produces higher-quality singing voice, outperforming VISinger.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents an end-to-end high-quality singing voice synthesis (SVS)
system that uses bidirectional encoder representation from Transformers (BERT)
derived semantic embeddings to improve the expressiveness of the synthesized
singing voice. Based on the main architecture of recently proposed VISinger, we
put forward several specific designs for expressive singing voice synthesis.
First, unlike previous SVS models, we use a text representation of the
lyrics extracted from pre-trained BERT as an additional input to the model. This
representation carries semantic information about the lyrics, which helps the
SVS system produce more expressive and natural voice. Second, we further
introduce an energy predictor to stabilize the synthesized voice and to model the
wider range of energy variations that also contributes to the expressiveness of
singing voice. Last but not least, to mitigate off-key issues, the
pitch predictor is re-designed to predict the ratio of the real pitch to the note
pitch. Both objective and subjective experimental results indicate that the
proposed SVS system produces higher-quality singing voice, outperforming VISinger.
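The pitch-ratio re-design described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the equal-temperament MIDI-to-Hz conversion, and the frame-level framing are assumptions. The idea is that the predictor outputs a ratio of the real F0 to the score note F0, so the synthesized pitch is always rescaled from the note pitch rather than predicted from scratch, which anchors the output to the score and attenuates off-key drift.

```python
import numpy as np

def midi_to_hz(midi):
    """Convert score MIDI note numbers to fundamental frequency in Hz
    (equal temperament, A4 = MIDI 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((np.asarray(midi, dtype=np.float64) - 69.0) / 12.0)

def pitch_ratio_target(f0_real, f0_note):
    """Training target for the re-designed pitch predictor: the ratio of the
    real (recorded) F0 to the score note F0. A value near 1.0 means the
    singer is on key for that frame."""
    return np.asarray(f0_real, dtype=np.float64) / np.asarray(f0_note, dtype=np.float64)

def reconstruct_f0(ratio_pred, f0_note):
    """At synthesis time, the predicted ratio rescales the score pitch, so the
    output F0 stays anchored to the note even if the prediction is imperfect."""
    return np.asarray(ratio_pred, dtype=np.float64) * np.asarray(f0_note, dtype=np.float64)

# Example: the score says A4 (MIDI 69) and the recording is 20 cents sharp.
f0_note = midi_to_hz(69)                  # 440.0 Hz
f0_real = f0_note * 2.0 ** (20.0 / 1200.0)  # slightly sharp of the note
ratio = pitch_ratio_target(f0_real, f0_note)  # slightly above 1.0
f0_out = reconstruct_f0(ratio, f0_note)       # recovers f0_real exactly
```

Predicting a ratio rather than an absolute F0 keeps the learning target in a narrow range around 1.0, which is a plausible reason it helps with off-key artifacts.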
Related papers
- Prompt-Singer: Controllable Singing-Voice-Synthesis with Natural Language Prompt [50.25271407721519]
We propose Prompt-Singer, the first SVS method that enables control over singer gender, vocal range, and volume with natural-language prompts.
We adopt a model architecture based on a decoder-only transformer with a multi-scale hierarchy, and design a range-melody decoupled pitch representation.
Experiments show that our model achieves favorable controlling ability and audio quality.
arXiv Detail & Related papers (2024-03-18T13:39:05Z)
- StyleSinger: Style Transfer for Out-of-Domain Singing Voice Synthesis [63.18764165357298]
Style transfer for out-of-domain singing voice synthesis (SVS) focuses on generating high-quality singing voices with unseen styles.
StyleSinger is the first singing voice synthesis model for zero-shot style transfer of out-of-domain reference singing voice samples.
Our evaluations in zero-shot style transfer show that StyleSinger outperforms baseline models in both audio quality and similarity to the reference singing voice samples.
arXiv Detail & Related papers (2023-12-17T15:26:16Z)
- Enhancing the vocal range of single-speaker singing voice synthesis with melody-unsupervised pre-training [82.94349771571642]
This work proposes a melody-unsupervised multi-speaker pre-training method to enhance the vocal range of a single-speaker SVS model.
It is the first to introduce a differentiable duration regulator to improve the rhythm naturalness of the synthesized voice.
Experimental results verify that the proposed SVS system outperforms the baseline on both sound quality and naturalness.
arXiv Detail & Related papers (2023-09-01T06:40:41Z)
- Make-A-Voice: Unified Voice Synthesis With Discrete Representation [77.3998611565557]
Make-A-Voice is a unified framework for synthesizing and manipulating voice signals from discrete representations.
We show that Make-A-Voice exhibits superior audio quality and style similarity compared with competitive baseline models.
arXiv Detail & Related papers (2023-05-30T17:59:26Z)
- Towards High-fidelity Singing Voice Conversion with Acoustic Reference and Contrastive Predictive Coding [6.278338686038089]
Phonetic posteriorgram (PPG)-based methods have been popular in non-parallel singing voice conversion systems.
Due to the lack of acoustic information in PPGs, the style and naturalness of the converted singing voices are still limited.
Our proposed model can significantly improve the naturalness of converted singing voices and the similarity with the target singer.
arXiv Detail & Related papers (2021-10-10T10:27:20Z)
- Sinsy: A Deep Neural Network-Based Singing Voice Synthesis System [25.573552964889963]
This paper presents Sinsy, a deep neural network (DNN)-based singing voice synthesis (SVS) system.
The proposed system is composed of four modules: a time-lag model, a duration model, an acoustic model, and a vocoder.
Experimental results show our system can synthesize a singing voice with better timing, more natural vibrato, and correct pitch.
arXiv Detail & Related papers (2021-08-05T17:59:58Z)
- DiffSinger: Diffusion Acoustic Model for Singing Voice Synthesis [53.19363127760314]
DiffSinger is a parameterized Markov chain that iteratively converts noise into a mel-spectrogram conditioned on the music score.
Evaluations on a Chinese singing dataset demonstrate that DiffSinger outperforms state-of-the-art SVS work by a notable margin.
arXiv Detail & Related papers (2021-05-06T05:21:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.