MBTFNet: Multi-Band Temporal-Frequency Neural Network For Singing Voice Enhancement
- URL: http://arxiv.org/abs/2310.04369v1
- Date: Fri, 6 Oct 2023 16:44:47 GMT
- Title: MBTFNet: Multi-Band Temporal-Frequency Neural Network For Singing Voice Enhancement
- Authors: Weiming Xu, Zhouxuan Chen, Zhili Tan, Shubo Lv, Runduo Han, Wenjiang Zhou, Weifeng Zhao, Lei Xie
- Abstract summary: We propose a novel multi-band temporal-frequency neural network (MBTFNet) for singing voice enhancement.
MBTFNet removes background music, noise, and even backing vocals from singing recordings.
Experiments show that our proposed model significantly outperforms several state-of-the-art SE and MSS models.
- Score: 8.782080886602145
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A typical neural speech enhancement (SE) approach mainly handles speech and
noise mixtures, which is not optimal for singing voice enhancement scenarios.
Music source separation (MSS) models treat vocals and various accompaniment
components equally, which may reduce performance compared to a model that
considers only vocal enhancement. In this paper, we propose a novel multi-band
temporal-frequency neural network (MBTFNet) for singing voice enhancement,
which particularly removes background music, noise and even backing vocals from
singing recordings. MBTFNet combines inter and intra-band modeling for better
processing of full-band signals. Dual-path modeling are introduced to expand
the receptive field of the model. We propose an implicit personalized
enhancement (IPE) stage based on signal-to-noise ratio (SNR) estimation, which
further improves the performance of MBTFNet. Experiments show that our proposed
model significantly outperforms several state-of-the-art SE and MSS models.
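
As a concrete illustration of the inter-/intra-band dual-path idea described in the abstract, here is a minimal PyTorch sketch. It is not the authors' implementation: the band count, hidden sizes, residual connections, and the `split_bands` / `estimate_snr_db` helpers are illustrative assumptions based only on the abstract.

```python
import torch
import torch.nn as nn


def split_bands(spec: torch.Tensor, num_bands: int = 4) -> torch.Tensor:
    """Split a (batch, time, freq) spectrogram into equal-width sub-bands,
    returning (batch, bands, time, freq_per_band). Equal widths are a
    simplification; a real system may choose bandwidths psychoacoustically."""
    b, t, f = spec.shape
    assert f % num_bands == 0, "frequency bins must divide evenly into bands"
    return spec.reshape(b, t, num_bands, f // num_bands).permute(0, 2, 1, 3)


class DualPathBandBlock(nn.Module):
    """One dual-path block: an intra-band pass along time within each band,
    then an inter-band pass across bands at each time frame."""

    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.intra_rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.intra_proj = nn.Linear(2 * hidden, feat_dim)
        self.inter_rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.inter_proj = nn.Linear(2 * hidden, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, k, t, f = x.shape  # (batch, bands, time, feat)
        # Intra-band path: model the time axis independently inside each band.
        h = self.intra_proj(self.intra_rnn(x.reshape(b * k, t, f))[0])
        x = x + h.reshape(b, k, t, f)  # residual connection (assumption)
        # Inter-band path: model the band axis at each time frame.
        h = x.permute(0, 2, 1, 3).reshape(b * t, k, f)
        h = self.inter_proj(self.inter_rnn(h)[0]).reshape(b, t, k, f)
        return x + h.permute(0, 2, 1, 3)


def estimate_snr_db(clean_est: torch.Tensor, mix: torch.Tensor) -> torch.Tensor:
    """Per-utterance SNR estimate in dB from a first-stage enhanced signal:
    the kind of quantity an SNR-driven second (IPE) stage could condition on."""
    noise = mix - clean_est
    return 10 * torch.log10(
        clean_est.pow(2).sum(-1) / noise.pow(2).sum(-1).clamp_min(1e-8)
    )


# Toy usage: 2 utterances, 100 frames, 256 frequency bins -> 4 bands of 64.
spec = torch.randn(2, 100, 256).abs()
out = DualPathBandBlock(feat_dim=64)(split_bands(spec, num_bands=4))
print(out.shape)  # torch.Size([2, 4, 100, 64])
```

Stacking several such blocks extends the receptive field along both the time and band axes, which is the stated motivation for dual-path modeling in the abstract.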
Related papers
- SPA-SVC: Self-supervised Pitch Augmentation for Singing Voice Conversion [12.454955437047573]
We propose a Self-supervised Pitch Augmentation method for Singing Voice Conversion (SPA-SVC).
We introduce a cycle pitch shifting training strategy and Structural Similarity Index (SSIM) loss into our SVC model, effectively enhancing its performance.
Experimental results on the public singing datasets M4Singer indicate that our proposed method significantly improves model performance.
arXiv Detail & Related papers (2024-06-09T08:34:01Z)
- Multilingual Audio-Visual Speech Recognition with Hybrid CTC/RNN-T Fast Conformer [59.57249127943914]
We present a multilingual Audio-Visual Speech Recognition model incorporating several enhancements to improve performance and audio noise robustness.
We increase the amount of audio-visual training data for six distinct languages, generating automatic transcriptions of unlabelled multilingual datasets.
Our proposed model achieves new state-of-the-art performance on the LRS3 dataset, reaching a WER of 0.8%.
arXiv Detail & Related papers (2024-03-14T01:16:32Z)
- Enhancing the vocal range of single-speaker singing voice synthesis with melody-unsupervised pre-training [82.94349771571642]
This work proposes a melody-unsupervised multi-speaker pre-training method to enhance the vocal range of a single speaker.
It is the first to introduce a differentiable duration regulator to improve the rhythm naturalness of the synthesized voice.
Experimental results verify that the proposed SVS system outperforms the baseline on both sound quality and naturalness.
arXiv Detail & Related papers (2023-09-01T06:40:41Z)
- Towards Improving the Expressiveness of Singing Voice Synthesis with BERT Derived Semantic Information [51.02264447897833]
This paper presents an end-to-end high-quality singing voice synthesis (SVS) system that uses bidirectional encoder representation from Transformers (BERT) derived semantic embeddings.
The proposed SVS system produces higher-quality singing voices, outperforming VISinger.
arXiv Detail & Related papers (2023-08-31T16:12:01Z)
- Towards Improving Harmonic Sensitivity and Prediction Stability for Singing Melody Extraction [36.45127093978295]
We propose an input feature modification and a training objective modification based on two assumptions.
To enhance the model's sensitivity to trailing harmonics, we modify the Combined Frequency and Periodicity representation using the discrete z-transform.
We apply these modifications to several models, including MSNet, FTANet, and a newly introduced model, PianoNet, modified from a piano transcription network.
arXiv Detail & Related papers (2023-08-04T21:59:40Z)
- Hierarchical Diffusion Models for Singing Voice Neural Vocoder [21.118585353100634]
We propose a hierarchical diffusion model for singing voice neural vocoders.
Experimental results show that the proposed method produces high-quality singing voices for multiple singers.
arXiv Detail & Related papers (2022-10-14T04:30:09Z)
- Music Source Separation with Band-split RNN [25.578400006180527]
We propose a frequency-domain model that splits the spectrogram of the mixture into subbands and performs interleaved band-level and sequence-level modeling.
The bandwidths of the subbands can be chosen based on a priori or expert knowledge of the characteristics of the target source.
Experimental results show that BSRNN trained only on the MUSDB18-HQ dataset significantly outperforms several top-ranking models in the Music Demixing (MDX) Challenge 2021.
arXiv Detail & Related papers (2022-09-30T01:49:52Z)
- Sinsy: A Deep Neural Network-Based Singing Voice Synthesis System [25.573552964889963]
This paper presents Sinsy, a deep neural network (DNN)-based singing voice synthesis (SVS) system.
The proposed system is composed of four modules: a time-lag model, a duration model, an acoustic model, and a vocoder.
Experimental results show our system can synthesize a singing voice with better timing, more natural vibrato, and correct pitch.
arXiv Detail & Related papers (2021-08-05T17:59:58Z)
- DiffSinger: Diffusion Acoustic Model for Singing Voice Synthesis [53.19363127760314]
DiffSinger is a parameterized Markov chain that iteratively converts noise into a mel-spectrogram conditioned on the music score; a minimal sketch of this iterative denoising loop appears after this list.
Evaluations conducted on the Chinese singing dataset demonstrate that DiffSinger outperforms state-of-the-art SVS work by a notable margin.
arXiv Detail & Related papers (2021-05-06T05:21:42Z)
- HiFiSinger: Towards High-Fidelity Neural Singing Voice Synthesis [153.48507947322886]
HiFiSinger is an SVS system aimed at high-fidelity singing voice synthesis.
It consists of a FastSpeech-based acoustic model and a Parallel WaveGAN-based vocoder.
Experimental results show that HiFiSinger synthesizes singing voices of much higher quality.
arXiv Detail & Related papers (2020-09-03T16:31:02Z)
- Unsupervised Cross-Domain Singing Voice Conversion [105.1021715879586]
We present a wav-to-wav generative model for the task of singing voice conversion from any identity.
Our method combines an acoustic model trained for automatic speech recognition with extracted melody features to drive a waveform-based generator.
arXiv Detail & Related papers (2020-08-06T18:29:11Z)
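
The DiffSinger entry above describes a diffusion acoustic model as a Markov chain that iteratively turns noise into a mel-spectrogram. The sketch below shows a standard DDPM-style reverse loop such a model runs at inference time; the noise schedule, the `denoiser(x, t, cond)` interface, and all names here are generic assumptions rather than DiffSinger's actual code.

```python
import torch


def ddpm_reverse_step(x_t, t, eps_pred, betas, alpha_bars):
    """One reverse step: given x_t and the network's noise prediction
    eps_pred, sample x_{t-1} (standard DDPM update with sigma_t^2 = beta_t)."""
    beta_t, alpha_t = betas[t], 1.0 - betas[t]
    mean = (x_t - beta_t / torch.sqrt(1.0 - alpha_bars[t]) * eps_pred) / torch.sqrt(alpha_t)
    if t == 0:
        return mean
    return mean + torch.sqrt(beta_t) * torch.randn_like(x_t)


def sample_mel(denoiser, score_cond, shape, num_steps=100):
    """Iteratively denoise Gaussian noise into a mel-spectrogram, conditioned
    on the music score. `denoiser(x, t, cond)` is a hypothetical interface."""
    betas = torch.linspace(1e-4, 0.02, num_steps)   # linear schedule (assumption)
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)
    x = torch.randn(shape)
    for t in reversed(range(num_steps)):
        eps = denoiser(x, torch.tensor(t), score_cond)  # predicted noise
        x = ddpm_reverse_step(x, t, eps, betas, alpha_bars)
    return x  # (batch, frames, mel_bins) mel-spectrogram estimate
```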
This list is automatically generated from the titles and abstracts of the papers on this site.