Advances in Speech Vocoding for Text-to-Speech with Continuous
Parameters
- URL: http://arxiv.org/abs/2106.10481v1
- Date: Sat, 19 Jun 2021 12:05:01 GMT
- Title: Advances in Speech Vocoding for Text-to-Speech with Continuous
Parameters
- Authors: Mohammed Salah Al-Radhi, Tamás Gábor Csapó, and Géza Németh
- Abstract summary: This paper presents new techniques for a continuous vocoder, in which all features are continuous, yielding a flexible speech synthesis system.
A new continuous noise masking based on phase distortion is proposed to eliminate the perceptual impact of residual noise.
Bidirectional long short-term memory (LSTM) and gated recurrent unit (GRU) networks are studied and applied to model the continuous parameters for more natural, human-like speech.
- Score: 2.6572330982240935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vocoders have received renewed attention as main components in statistical
parametric text-to-speech (TTS) synthesis and speech transformation systems.
Although existing vocoding techniques produce broadly acceptable synthesized
speech, their high computational complexity and irregular structures remain
challenging concerns and lead to a variety of voice quality degradations.
Therefore, this paper presents new techniques for a continuous vocoder, in
which all features are continuous, yielding a flexible speech synthesis system.
First, a new continuous noise masking based on phase distortion is proposed to
eliminate the perceptual impact of residual noise while allowing an accurate
reconstruction of the noise characteristics. Second, we address the need for a
neural sequence-to-sequence modeling approach to TTS based on recurrent
networks. Bidirectional long short-term memory (LSTM) and gated recurrent unit
(GRU) networks are studied and applied to model the continuous parameters for
more natural, human-like speech. The evaluation results show that the proposed
model achieves state-of-the-art speech synthesis performance compared with
traditional methods.
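To make the continuous noise masking idea more concrete, the following is a minimal sketch of one plausible formulation: a smooth per-frame mask derived from a phase distortion deviation (PDD) measure and applied to the noise component of the excitation. The threshold, slope, frame length, and function names are illustrative assumptions, not the exact method of the paper.

```python
import numpy as np

def continuous_noise_mask(pdd, threshold=0.75, slope=8.0):
    """Map per-frame phase distortion deviation (PDD) values to a mask in [0, 1].

    Low PDD (periodic, voiced-like frames) yields values near 0, suppressing
    the residual noise; high PDD (noise-like frames) yields values near 1,
    keeping the noise component intact.  Threshold and slope are assumed
    values for illustration only.
    """
    pdd = np.asarray(pdd, dtype=float)
    # Smooth sigmoid transition instead of a hard binary decision,
    # so the mask itself remains a continuous parameter.
    return 1.0 / (1.0 + np.exp(-slope * (pdd - threshold)))

def apply_noise_mask(noise_excitation, mask, frame_len=200):
    """Scale each frame of the noise excitation by its mask value."""
    out = np.array(noise_excitation, dtype=float)
    for i, m in enumerate(mask):
        start = i * frame_len
        out[start:start + frame_len] *= m
    return out

# Example: 10 frames of random noise with alternating voiced/unvoiced-like PDD
pdd = np.array([0.2, 0.2, 0.3, 0.8, 0.9, 0.9, 0.3, 0.2, 0.8, 0.9])
noise = np.random.randn(10 * 200)
masked = apply_noise_mask(noise, continuous_noise_mask(pdd))
```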
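Likewise, a minimal PyTorch sketch of a bidirectional LSTM/GRU acoustic model that predicts continuous vocoder parameters from per-frame linguistic features is shown below. The feature dimensions, hidden sizes, layer counts, and the toy training step are assumptions for illustration, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class ContinuousParamRNN(nn.Module):
    """Hypothetical bidirectional recurrent acoustic model.

    Maps per-frame linguistic features to continuous vocoder parameters
    (e.g. continuous F0, maximum voiced frequency, MGC).
    """

    def __init__(self, in_dim=425, hidden=256, out_dim=44, cell="lstm"):
        super().__init__()
        rnn_cls = nn.LSTM if cell == "lstm" else nn.GRU
        self.rnn = rnn_cls(in_dim, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, out_dim)  # concat of both directions

    def forward(self, x):
        # x: (batch, frames, in_dim) linguistic/contextual features
        h, _ = self.rnn(x)
        return self.proj(h)  # (batch, frames, out_dim) vocoder parameters

# Toy training step with dummy data (MSE against extracted parameters)
model = ContinuousParamRNN(cell="gru")
x = torch.randn(8, 300, 425)   # dummy linguistic feature sequences
y = torch.randn(8, 300, 44)    # dummy target vocoder parameter sequences
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
```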
Related papers
- Robust AI-Synthesized Speech Detection Using Feature Decomposition Learning and Synthesizer Feature Augmentation [52.0893266767733]
We propose a robust deepfake speech detection method that employs feature decomposition to learn synthesizer-independent content features.
To enhance the model's robustness to different synthesizer characteristics, we propose a synthesizer feature augmentation strategy.
arXiv Detail & Related papers (2024-11-14T03:57:21Z)
- Utilizing Neural Transducers for Two-Stage Text-to-Speech via Semantic Token Prediction [15.72317249204736]
We propose a novel text-to-speech (TTS) framework centered around a neural transducer.
Our approach divides the whole TTS pipeline into semantic-level sequence-to-sequence (seq2seq) modeling and fine-grained acoustic modeling stages.
Our experimental results on zero-shot adaptive TTS demonstrate that our model surpasses the baseline in terms of speech quality and speaker similarity.
arXiv Detail & Related papers (2024-01-03T02:03:36Z)
- High-Fidelity Speech Synthesis with Minimal Supervision: All Using Diffusion Models [56.00939852727501]
Minimally-supervised speech synthesis decouples TTS by combining two types of discrete speech representations.
Non-autoregressive framework enhances controllability, and duration diffusion model enables diversified prosodic expression.
arXiv Detail & Related papers (2023-09-27T09:27:03Z)
- TranSpeech: Speech-to-Speech Translation With Bilateral Perturbation [61.564874831498145]
TranSpeech is a speech-to-speech translation model with bilateral perturbation.
We establish a non-autoregressive S2ST technique, which repeatedly masks and predicts unit choices.
TranSpeech shows a significant improvement in inference latency, enabling a speedup of up to 21.4x over the autoregressive technique.
arXiv Detail & Related papers (2022-05-25T06:34:14Z)
- Discretization and Re-synthesis: an alternative method to solve the Cocktail Party Problem [65.25725367771075]
This study demonstrates, for the first time, that the synthesis-based approach can also perform well on this problem.
Specifically, we propose a novel speech separation/enhancement model based on the recognition of discrete symbols.
By feeding the predicted discrete symbol sequence into the synthesis model, each target speech signal can be re-synthesized.
arXiv Detail & Related papers (2021-12-17T08:35:40Z)
- Enhancing audio quality for expressive Neural Text-to-Speech [8.199224915764672]
We present a set of techniques that can be leveraged to enhance the signal quality of a highly-expressive voice without the use of additional data.
We show that, when combined, these techniques greatly closed the gap in perceived naturalness between the baseline system and recordings by 39% in terms of MUSHRA scores for an expressive celebrity voice.
arXiv Detail & Related papers (2021-08-13T14:32:39Z)
- End-to-End Video-To-Speech Synthesis using Generative Adversarial Networks [54.43697805589634]
We propose a new end-to-end video-to-speech model based on Generative Adversarial Networks (GANs).
Our model consists of an encoder-decoder architecture that receives raw video as input and generates speech.
We show that this model is able to reconstruct speech with remarkable realism for constrained datasets such as GRID.
arXiv Detail & Related papers (2021-04-27T17:12:30Z)
- Pretraining Techniques for Sequence-to-Sequence Voice Conversion [57.65753150356411]
Sequence-to-sequence (seq2seq) voice conversion (VC) models are attractive owing to their ability to convert prosody.
We propose to transfer knowledge from other speech processing tasks where large-scale corpora are easily available, typically text-to-speech (TTS) and automatic speech recognition (ASR).
We argue that VC models with such pretrained ASR or TTS model parameters can generate effective hidden representations for high-fidelity, highly intelligible converted speech.
arXiv Detail & Related papers (2020-08-07T11:02:07Z)