Human Voice Pitch Estimation: A Convolutional Network with Auto-Labeled and Synthetic Data
- URL: http://arxiv.org/abs/2308.07170v2
- Date: Sun, 17 Dec 2023 17:46:27 GMT
- Title: Human Voice Pitch Estimation: A Convolutional Network with Auto-Labeled and Synthetic Data
- Authors: Jeremy Cochoy
- Abstract summary: We present a specialized convolutional neural network designed for pitch extraction.
Our approach combines synthetic data with auto-labeled a cappella sung audio, creating a robust training environment.
This work paves the way for enhanced pitch extraction in both music and voice settings.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the domain of music and sound processing, pitch extraction plays a pivotal
role. Our research presents a specialized convolutional neural network designed
for pitch extraction, particularly from the human singing voice in a cappella
performances. Notably, our approach combines synthetic data with auto-labeled
a cappella sung audio, creating a robust training environment. Evaluation across
datasets comprising synthetic sounds, opera recordings, and time-stretched
vowels demonstrates its efficacy. This work paves the way for enhanced pitch
extraction in both music and voice settings.
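The abstract does not spell out the network; as a minimal PyTorch sketch of the general approach (a convolutional classifier over short raw-audio frames that predicts a distribution over quantized pitch bins, in the spirit of CREPE-style models; frame length, bin count, and layer sizes here are assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class PitchCNN(nn.Module):
    """Minimal sketch of a frame-level pitch classifier.

    Hypothetical sizes: 1024-sample frames, 360 pitch bins. The paper's
    actual architecture and resolution are not reproduced here.
    """
    def __init__(self, n_bins: int = 360):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=64, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=32, stride=4), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=16, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(128, n_bins)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 1024) raw audio -> (batch, n_bins) pitch-bin logits
        x = self.features(frames.unsqueeze(1)).squeeze(-1)
        return self.head(x)

model = PitchCNN()
logits = model(torch.randn(8, 1024))  # one pitch distribution per frame
```

Synthetic training frames can be generated with a known fundamental frequency (e.g. additive sine tones), which is one way the synthetic part of such a training set comes labeled for free.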
Related papers
- Prompt-Singer: Controllable Singing-Voice-Synthesis with Natural Language Prompt [50.25271407721519]
We propose Prompt-Singer, the first singing-voice-synthesis (SVS) method that enables control over singer gender, vocal range, and volume via natural language prompts.
We adopt a model architecture based on a decoder-only transformer with a multi-scale hierarchy, and design a range-melody decoupled pitch representation.
Experiments show that our model achieves favorable controlling ability and audio quality.
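The "range-melody decoupled pitch representation" suggests separating a singer's register from the melodic contour so each can be controlled independently. A hypothetical numpy sketch of one such decoupling (the paper's exact scheme may differ):

```python
import numpy as np

def decouple_pitch(f0_hz: np.ndarray):
    # Register: the singer's average log-F0; melody: range-invariant
    # semitone offsets around it. Assumes a voiced (positive) F0 contour.
    log_f0 = np.log2(f0_hz)
    register = log_f0.mean()
    melody = (log_f0 - register) * 12.0   # semitones relative to register
    return register, melody

def recouple_pitch(register: float, melody: np.ndarray) -> np.ndarray:
    # Shifting `register` transposes the voice without changing the melody.
    return 2.0 ** (register + melody / 12.0)
```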
arXiv Detail & Related papers (2024-03-18T13:39:05Z)
- Spectrogram-Based Detection of Auto-Tuned Vocals in Music Recordings [9.646498710102174]
This study introduces a data-driven approach leveraging triplet networks for the detection of Auto-Tuned songs.
The experimental results demonstrate the superiority of the proposed method in both accuracy and robustness compared to Rawnet2, an end-to-end model proposed for anti-spoofing.
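A sketch of the triplet-network idea in PyTorch (the embedding network, input features, and margin are assumptions; anchor and positive share a class, e.g. two Auto-Tuned excerpts, while the negative comes from the other class):

```python
import torch
import torch.nn as nn

# Hypothetical embedding network over log-mel spectrogram patches.
embed = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 64),
)
triplet_loss = nn.TripletMarginLoss(margin=1.0)

spec = lambda: torch.randn(4, 1, 128, 64)  # (batch, 1, mel bins, frames)
loss = triplet_loss(embed(spec()), embed(spec()), embed(spec()))
loss.backward()  # pulls same-class embeddings together, pushes others apart
```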
arXiv Detail & Related papers (2024-03-08T15:19:26Z)
- Enhancing the vocal range of single-speaker singing voice synthesis with melody-unsupervised pre-training [82.94349771571642]
This work proposes a melody-unsupervised multi-speaker pre-training method to enhance the vocal range of a single-speaker SVS system.
It is the first to introduce a differentiable duration regulator to improve the rhythm naturalness of the synthesized voice.
Experimental results verify that the proposed SVS system outperforms the baseline on both sound quality and naturalness.
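A duration regulator becomes differentiable when hard token repetition is replaced by a soft alignment, so gradients can reach the duration predictor. A minimal sketch using Gaussian upsampling (a standard construction, e.g. from Non-Attentive Tacotron; the paper's regulator may differ):

```python
import torch

def gaussian_upsample(h: torch.Tensor, durations: torch.Tensor, sigma: float = 1.0):
    """h: (T_in, D) phoneme encodings; durations: (T_in,) float frame counts.
    Each output frame attends to phonemes with Gaussian weights around their
    predicted centers, so the mapping is differentiable in `durations`."""
    ends = torch.cumsum(durations, dim=0)
    centers = ends - 0.5 * durations                      # (T_in,)
    t_out = int(ends[-1].round().item())
    t = torch.arange(t_out, dtype=h.dtype) + 0.5          # output frame times
    logits = -((t[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2)
    weights = torch.softmax(logits, dim=1)                # (T_out, T_in)
    return weights @ h                                    # (T_out, D)
```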
arXiv Detail & Related papers (2023-09-01T06:40:41Z)
- Make-A-Voice: Unified Voice Synthesis With Discrete Representation [77.3998611565557]
Make-A-Voice is a unified framework for synthesizing and manipulating voice signals from discrete representations.
We show that Make-A-Voice exhibits superior audio quality and style similarity compared with competitive baseline models.
arXiv Detail & Related papers (2023-05-30T17:59:26Z)
- Deep Performer: Score-to-Audio Music Performance Synthesis [30.95307878579825]
Deep Performer is a novel system for score-to-audio music performance synthesis.
Unlike speech, music often contains polyphony and long notes.
We show that our proposed model can synthesize music with clear polyphony and harmonic structures.
arXiv Detail & Related papers (2022-02-12T10:36:52Z)
- Rapping-Singing Voice Synthesis based on Phoneme-level Prosody Control [47.33830090185952]
A text-to-rapping/singing system is introduced, which can be adapted to any speaker's voice.
It utilizes a Tacotron-based multispeaker acoustic model trained on read-only speech data.
Results show that the proposed approach can produce a high-quality rapping/singing voice with increased naturalness.
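One common way to realize phoneme-level prosody control is to condition the acoustic model on a discrete pitch label per phoneme. A hypothetical numpy sketch (the bin count and quantization scheme are assumptions):

```python
import numpy as np

def phoneme_pitch_tokens(f0_hz, phoneme_spans, n_bins=16):
    # Average log-F0 over each phoneme's frame span, then quantize it into
    # one of n_bins labels the synthesizer can be conditioned on (and that
    # a user can edit to reshape the rapped or sung melody).
    log_f0 = np.log2(np.maximum(f0_hz, 1e-3))
    means = np.array([log_f0[a:b].mean() for a, b in phoneme_spans])
    edges = np.linspace(means.min(), means.max(), n_bins + 1)[1:-1]
    return np.digitize(means, edges)  # one discrete token per phoneme
```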
arXiv Detail & Related papers (2021-11-17T14:31:55Z)
- An Empirical Study on End-to-End Singing Voice Synthesis with Encoder-Decoder Architectures [11.440111473570196]
We use encoder-decoder neural models and a number of vocoders to achieve singing voice synthesis.
We conduct experiments demonstrating that the models can be trained using voice data with pitch, lyric, and beat information.
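As an illustration of assembling such conditioning inputs for an encoder-decoder model (the feature encodings, vocabulary sizes, and dimensions are assumptions):

```python
import torch
import torch.nn as nn

T = 100                                        # score time steps
lyrics = torch.randint(0, 40, (1, T))          # phoneme IDs from the lyrics
pitch = torch.randint(0, 128, (1, T))          # MIDI note numbers
beat = torch.randint(0, 2, (1, T))             # on-beat flags

# Embed each stream and concatenate into the encoder input.
x = torch.cat([nn.Embedding(40, 32)(lyrics),
               nn.Embedding(128, 32)(pitch),
               nn.Embedding(2, 8)(beat)], dim=-1)   # (1, T, 72)

encoder = nn.GRU(72, 128, batch_first=True)    # a decoder and vocoder follow
memory, _ = encoder(x)                         # (1, T, 128)
```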
arXiv Detail & Related papers (2021-08-06T08:51:16Z)
- Unsupervised Cross-Domain Singing Voice Conversion [105.1021715879586]
We present a wav-to-wav generative model for the task of singing voice conversion from any identity.
Our method uses an acoustic model trained for automatic speech recognition together with extracted melody features to drive a waveform-based generator.
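A minimal PyTorch sketch of that conditioning (tensor shapes and the stand-in generator are assumptions): ASR-derived content features are largely speaker-independent, so stacking them with the extracted melody lets the generator resynthesize the song in a new identity.

```python
import torch
import torch.nn as nn

asr_feats = torch.randn(1, 200, 256)  # (batch, frames, ASR bottleneck dims)
f0_feats = torch.randn(1, 200, 1)     # extracted melody contour per frame

conditioning = torch.cat([asr_feats, f0_feats], dim=-1)  # (1, 200, 257)

# Stand-in for the waveform-based generator described in the paper.
generator = nn.GRU(input_size=257, hidden_size=512, batch_first=True)
frames, _ = generator(conditioning)
```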
arXiv Detail & Related papers (2020-08-06T18:29:11Z)
- Learning to Denoise Historical Music [30.165194151843835]
We propose an audio-to-audio neural network model that learns to denoise old music recordings.
The network is trained with both reconstruction and adversarial objectives on a noisy music dataset.
Our results show that the proposed method is effective in removing noise, while preserving the quality and details of the original music.
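A sketch of such a combined objective (the specific loss types and weighting below are assumptions; the summary only states that reconstruction and adversarial terms are used):

```python
import torch
import torch.nn.functional as F

def generator_loss(denoised, clean, disc_scores, adv_weight=0.01):
    # Reconstruction pulls the output toward the clean reference; the
    # adversarial term pushes it toward the manifold of real music.
    recon = F.l1_loss(denoised, clean)
    # Least-squares GAN: the discriminator should score denoised audio as real.
    adv = F.mse_loss(disc_scores, torch.ones_like(disc_scores))
    return recon + adv_weight * adv
```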
arXiv Detail & Related papers (2020-08-05T10:05:44Z)
- Vector-Quantized Timbre Representation [53.828476137089325]
This paper targets a more flexible synthesis of an individual timbre by learning an approximate decomposition of its spectral properties with a set of generative features.
We introduce an auto-encoder with a discrete latent space that is disentangled from loudness in order to learn a quantized representation of a given timbre distribution.
We detail results for translating audio between orchestral instruments and singing voice, as well as transfers from vocal imitations to instruments.
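A generic VQ-VAE-style quantizer illustrates the discrete latent space (a standard construction, not the paper's exact model; the loudness disentanglement is omitted):

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Map each encoder vector to its nearest codebook entry; the
    straight-through estimator lets gradients bypass the argmin."""
    def __init__(self, n_codes: int = 256, dim: int = 64):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, dim)

    def forward(self, z):                         # z: (batch, dim)
        d = torch.cdist(z, self.codebook.weight)  # distances to all codes
        idx = d.argmin(dim=1)                     # nearest code per vector
        q = self.codebook(idx)
        return z + (q - z).detach(), idx          # straight-through gradient
```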
arXiv Detail & Related papers (2020-07-13T12:35:45Z)
- AutoFoley: Artificial Synthesis of Synchronized Sound Tracks for Silent Videos with Deep Learning [5.33024001730262]
We present AutoFoley, a fully-automated deep learning tool that can be used to synthesize a representative audio track for videos.
AutoFoley can be used where no corresponding audio file is associated with a video, or where critical scenarios need to be identified.
Our experiments show that the synthesized sounds are realistically portrayed, with accurate temporal synchronization to the associated visual inputs.
arXiv Detail & Related papers (2020-02-21T09:08:28Z)