DurFlex-EVC: Duration-Flexible Emotional Voice Conversion with Parallel Generation
- URL: http://arxiv.org/abs/2401.08095v2
- Date: Thu, 7 Mar 2024 08:40:01 GMT
- Title: DurFlex-EVC: Duration-Flexible Emotional Voice Conversion with Parallel Generation
- Authors: Hyung-Seok Oh, Sang-Hoon Lee, Deok-Hyeon Cho, Seong-Whan Lee
- Abstract summary: Emotional voice conversion (EVC) seeks to modify the emotional tone of a speaker's voice.
Recent advancements in EVC have involved the simultaneous modeling of pitch and duration.
This study shifts focus towards parallel speech generation.
- Score: 37.35829410807451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emotional voice conversion (EVC) seeks to modify the emotional tone of a
speaker's voice while preserving the original linguistic content and the
speaker's unique vocal characteristics. Recent advancements in EVC have
involved the simultaneous modeling of pitch and duration, utilizing the
potential of sequence-to-sequence (seq2seq) models. To enhance reliability and
efficiency in conversion, this study shifts focus towards parallel speech
generation. We introduce Duration-Flexible EVC (DurFlex-EVC), which integrates a style autoencoder and a unit aligner. Prior models incorporate self-supervised learning (SSL) representations that carry both linguistic and paralinguistic information, yet neglect this dual nature, which reduces controllability. To address this, we apply cross-attention to synchronize these representations with various emotions. Additionally, a style autoencoder is developed to disentangle and manipulate style elements. The efficacy of our approach is validated through both subjective and objective evaluations, establishing its superiority over existing models in the field.
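The two components named in the abstract are concrete enough to sketch. The following is a minimal PyTorch illustration, assuming hypothetical module names, dimensions, and a GRU-based style encoder; it sketches the general mechanism, not the authors' implementation.

# Minimal sketch of a cross-attention unit aligner and a style
# autoencoder in the spirit of the abstract above. All names,
# dimensions, and architectural choices are illustrative assumptions.
import torch
import torch.nn as nn

class UnitAligner(nn.Module):
    """Aligns SSL features to discrete content units via
    cross-attention: unit embeddings attend over SSL frames."""
    def __init__(self, d_model=256, n_units=1000, n_heads=4):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, units, ssl_feats):
        # units: (B, U) int64 ids; ssl_feats: (B, T, d_model)
        q = self.unit_emb(units)                  # queries from units
        out, weights = self.attn(q, ssl_feats, ssl_feats)
        return out, weights                       # (B, U, d), (B, U, T)

class StyleAutoencoder(nn.Module):
    """Extracts an utterance-level style vector and re-injects it,
    so style can be swapped at conversion time."""
    def __init__(self, d_model=256, d_style=128):
        super().__init__()
        self.style_enc = nn.GRU(d_model, d_style, batch_first=True)
        self.proj = nn.Linear(d_model + d_style, d_model)

    def encode(self, feats):                      # feats: (B, T, d_model)
        _, h = self.style_enc(feats)
        return h[-1]                              # (B, d_style)

    def decode(self, content, style):             # recompose content + style
        style = style.unsqueeze(1).expand(-1, content.size(1), -1)
        return self.proj(torch.cat([content, style], dim=-1))

# Usage sketch: re-style source content with a target-emotion reference.
aligner, styler = UnitAligner(), StyleAutoencoder()
ssl = torch.randn(2, 120, 256)                    # SSL frames
units = torch.randint(0, 1000, (2, 40))           # discrete content units
content, _ = aligner(units, ssl)
tgt_style = styler.encode(torch.randn(2, 90, 256))  # reference utterance
converted = styler.decode(content, tgt_style)     # (2, 40, 256)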
Related papers
- Non-autoregressive real-time Accent Conversion model with voice cloning [0.0]
We have developed a non-autoregressive model for real-time accent conversion with voice cloning.
The model generates native-sounding L1 speech with minimal latency based on input L2 speech.
The model can save, clone, and change the timbre, gender, and accent of the speaker's voice in real time.
arXiv Detail & Related papers (2024-05-21T19:07:26Z)
- FaceChain-ImagineID: Freely Crafting High-Fidelity Diverse Talking Faces from Disentangled Audio [45.71036380866305]
We abstract the process by which people hear speech, extract meaningful cues, and create dynamic, audio-consistent talking faces from a single audio clip.
Specifically, it involves two critical challenges: one is to effectively decouple identity, content, and emotion from entangled audio, and the other is to maintain intra-video diversity and inter-video consistency.
We introduce Controllable Coherent Frame generation, which flexibly integrates three trainable adapters with frozen Latent Diffusion Models; a sketch of this adapter-on-frozen-backbone pattern follows this entry.
arXiv Detail & Related papers (2024-03-04T09:59:48Z)
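The adapter-on-frozen-backbone pattern named above can be sketched generically: freeze the backbone's parameters and optimize only small residual adapters. The module below is a hypothetical stand-in, not the paper's architecture.

# Generic sketch of training a lightweight adapter against a frozen
# backbone block. Shapes and placement are illustrative assumptions.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small residual bottleneck inserted after a frozen block."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual update

frozen_block = nn.Linear(512, 512)       # stand-in for a frozen LDM block
for p in frozen_block.parameters():
    p.requires_grad = False              # backbone stays fixed

adapter = Adapter(512)                   # only the adapter is optimized
opt = torch.optim.AdamW(adapter.parameters(), lr=1e-4)

x = torch.randn(8, 512)
loss = adapter(frozen_block(x)).pow(2).mean()  # dummy objective
loss.backward()
opt.step()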
- Decoupling Speaker-Independent Emotions for Voice Conversion Via Source-Filter Networks [14.55242023708204]
We propose a novel Source-Filter-based Emotional VC model (SFEVC) to achieve proper filtering of speaker-independent emotion features.
Our SFEVC model consists of multi-channel encoders, emotion separate encoders, and one decoder.
arXiv Detail & Related papers (2021-10-04T03:14:48Z)
- An Improved StarGAN for Emotional Voice Conversion: Enhancing Voice Quality and Data Augmentation [8.017817904347964]
We propose a novel StarGAN framework along with a two-stage training process that separates emotional features from those independent of emotion.
The proposed model achieves favourable results in both objective and subjective evaluations of distortion.
In data augmentation experiments for end-to-end speech emotion recognition, the proposed StarGAN model achieves an increase of 2% in Micro-F1 and 5% in Macro-F1.
arXiv Detail & Related papers (2021-07-18T04:28:47Z)
- VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-shot Voice Conversion [54.29557210925752]
One-shot voice conversion can be effectively achieved by speech representation disentanglement.
We employ vector quantization (VQ) for content encoding and introduce mutual information (MI) as the correlation metric during training; a minimal VQ sketch follows this entry.
Experimental results demonstrate the superiority of the proposed method in learning effective disentangled speech representations.
arXiv Detail & Related papers (2021-06-18T13:50:38Z)
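The VQ content-encoding step referenced above lends itself to a short sketch. The layer below follows standard VQ-VAE practice (nearest-code lookup, straight-through gradients, commitment loss); the codebook size is an assumption, and the paper's MI-based correlation loss is omitted.

# Minimal vector-quantization layer for content encoding. Codebook
# size and the commitment weight are illustrative assumptions.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, n_codes=512, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, dim)
        self.beta = beta

    def forward(self, z):                 # z: (B, T, dim) encoder output
        # squared distance from every frame to every code: (B, T, n_codes)
        d = (z.unsqueeze(-2) - self.codebook.weight).pow(2).sum(-1)
        idx = d.argmin(dim=-1)            # nearest code per frame
        q = self.codebook(idx)
        # commitment + codebook losses (standard VQ-VAE objective)
        loss = self.beta * ((q.detach() - z) ** 2).mean() \
             + ((q - z.detach()) ** 2).mean()
        q = z + (q - z).detach()          # straight-through gradient
        return q, idx, loss

vq = VectorQuantizer()
z = torch.randn(4, 100, 64)               # encoder output frames
q, idx, vq_loss = vq(z)                   # quantized content codes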
- Limited Data Emotional Voice Conversion Leveraging Text-to-Speech: Two-stage Sequence-to-Sequence Training [91.95855310211176]
Emotional voice conversion aims to change the emotional state of an utterance while preserving the linguistic content and speaker identity.
We propose a novel 2-stage training strategy for sequence-to-sequence emotional voice conversion with a limited amount of emotional speech data.
The proposed framework can perform both spectrum and prosody conversion and achieves significant improvement over the state-of-the-art baselines in both objective and subjective evaluation.
arXiv Detail & Related papers (2021-03-31T04:56:14Z)
- VAW-GAN for Disentanglement and Recomposition of Emotional Elements in Speech [91.92456020841438]
We study the disentanglement and recomposition of emotional elements in speech through a variational autoencoding Wasserstein generative adversarial network (VAW-GAN).
We propose a speaker-dependent EVC framework that includes two VAW-GAN pipelines, one for spectrum conversion, and another for prosody conversion.
Experiments validate the effectiveness of our proposed method in both objective and subjective evaluations.
arXiv Detail & Related papers (2020-11-03T08:49:33Z)
- Seen and Unseen emotional style transfer for voice conversion with a new emotional speech dataset [84.53659233967225]
Emotional voice conversion aims to transform emotional prosody in speech while preserving the linguistic content and speaker identity.
We propose a novel framework based on a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN).
We show that the proposed framework achieves remarkable performance by consistently outperforming the baseline framework.
arXiv Detail & Related papers (2020-10-28T07:16:18Z)
- Any-to-Many Voice Conversion with Location-Relative Sequence-to-Sequence Modeling [61.351967629600594]
This paper proposes an any-to-many location-relative, sequence-to-sequence (seq2seq), non-parallel voice conversion approach.
In this approach, we combine a bottleneck feature extractor (BNE) with a seq2seq synthesis module; a high-level sketch follows this entry.
Objective and subjective evaluations show that the proposed any-to-many approach has superior voice conversion performance in terms of both naturalness and speaker similarity.
arXiv Detail & Related papers (2020-09-06T13:01:06Z)
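The BNE-plus-seq2seq pipeline described in the last entry can be sketched at a high level. Every module below is a simplified stand-in under assumed dimensions; the actual system uses an ASR-derived bottleneck extractor and a full attention-based synthesizer.

# High-level sketch of the BNE + seq2seq any-to-many conversion
# pipeline. All modules and dimensions are illustrative stand-ins.
import torch
import torch.nn as nn

class BottleneckExtractor(nn.Module):
    """Stand-in for an ASR encoder yielding speaker-agnostic features."""
    def __init__(self, n_mels=80, d_bn=144):
        super().__init__()
        self.net = nn.GRU(n_mels, d_bn, batch_first=True)

    def forward(self, mel):               # mel: (B, T, n_mels)
        out, _ = self.net(mel)
        return out                        # (B, T, d_bn)

class Seq2SeqSynthesizer(nn.Module):
    """Decodes mels from bottleneck features plus a speaker embedding."""
    def __init__(self, d_bn=144, d_spk=128, n_mels=80, n_speakers=10):
        super().__init__()
        self.spk = nn.Embedding(n_speakers, d_spk)  # the "many" targets
        self.dec = nn.GRU(d_bn + d_spk, 256, batch_first=True)
        self.out = nn.Linear(256, n_mels)

    def forward(self, bn, spk_id):
        s = self.spk(spk_id).unsqueeze(1).expand(-1, bn.size(1), -1)
        h, _ = self.dec(torch.cat([bn, s], dim=-1))
        return self.out(h)                # converted mel frames

bne, syn = BottleneckExtractor(), Seq2SeqSynthesizer()
src_mel = torch.randn(1, 200, 80)          # any source speaker
tgt = torch.tensor([3])                    # one of the "many" targets
converted_mel = syn(bne(src_mel), tgt)     # (1, 200, 80)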