Nonparallel Emotional Voice Conversion For Unseen Speaker-Emotion Pairs
Using Dual Domain Adversarial Network & Virtual Domain Pairing
- URL: http://arxiv.org/abs/2302.10536v1
- Date: Tue, 21 Feb 2023 09:06:52 GMT
- Title: Nonparallel Emotional Voice Conversion For Unseen Speaker-Emotion Pairs
Using Dual Domain Adversarial Network & Virtual Domain Pairing
- Authors: Nirmesh Shah, Mayank Kumar Singh, Naoya Takahashi, Naoyuki Onoe
- Abstract summary: We tackle the problem of converting the emotion of speakers for whom only neutral data are available during training and testing.
We propose a Virtual Domain Pairing (VDP) training strategy, which virtually incorporates the speaker-emotion pairs that are not present in the real data.
We evaluate the proposed method using a Hindi emotional database.
- Score: 9.354935229153787
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The primary goal of an emotional voice conversion (EVC) system is
to convert the emotion of a given speech signal from one style to another
without modifying the linguistic content of the signal. Most state-of-the-art
approaches convert emotions only for speaker-emotion combinations seen during
training. In this paper, we tackle the problem of converting the emotion of
speakers for whom only neutral data are available during training and testing
(i.e., unseen speaker-emotion combinations). To this end, we extend the
recently proposed StarGANv2-VC architecture with dual encoders that learn the
speaker and emotion style embeddings separately, together with dual domain
source classifiers. To achieve conversion to unseen speaker-emotion
combinations, we propose a Virtual Domain Pairing (VDP) training strategy,
which virtually incorporates speaker-emotion pairs that are absent from the
real data without compromising the min-max game between the discriminator and
generator in adversarial training. We evaluate the proposed method using a
Hindi emotional database.
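To make the training strategy concrete, below is a minimal PyTorch sketch of
dual style encoders combined with Virtual Domain Pairing. The module sizes,
class names, and the in-batch shuffling used to form virtual pairs are
illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch, not the authors' code: dual style encoders plus a toy
# Virtual Domain Pairing (VDP) step. All dimensions and the pairing heuristic
# are assumptions made for illustration.
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Tiny stand-in for a StarGANv2-VC-style encoder over mel-spectrograms."""
    def __init__(self, n_mels=80, style_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time frames
            nn.Flatten(),
            nn.Linear(128, style_dim),
        )

    def forward(self, mel):            # mel: (batch, n_mels, frames)
        return self.net(mel)           # -> (batch, style_dim)

speaker_encoder = StyleEncoder()        # learns speaker style embeddings
emotion_encoder = StyleEncoder()        # separate encoder for emotion style

def virtual_domain_pairing(mels, speaker_ids, emotion_ids):
    """Pair speaker embeddings with emotion embeddings drawn from *other*
    utterances in the batch, so (speaker, emotion) combinations absent from
    the real data can still serve as conversion targets."""
    spk_emb = speaker_encoder(mels)
    emo_emb = emotion_encoder(mels)
    perm = torch.randperm(mels.size(0))                 # shuffle emotion source
    virtual_style = torch.cat([spk_emb, emo_emb[perm]], dim=-1)
    virtual_domain = (speaker_ids, emotion_ids[perm])   # possibly unseen pair
    return virtual_style, virtual_domain

# Toy usage on a dummy batch: 4 utterances, 80 mel bins, 120 frames.
mels = torch.randn(4, 80, 120)
speaker_ids = torch.tensor([0, 1, 2, 3])
emotion_ids = torch.tensor([0, 0, 1, 2])  # e.g., 0 = neutral in this toy labeling
style, domain = virtual_domain_pairing(mels, speaker_ids, emotion_ids)
print(style.shape, domain)
```

In the full model, such a virtual style code would presumably condition the
generator, with the dual domain source classifiers and the discriminator
operating on the virtual (speaker, emotion) domain labels.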
Related papers
- Attention-based Interactive Disentangling Network for Instance-level Emotional Voice Conversion [81.1492897350032]
Emotional Voice Conversion aims to manipulate speech according to a given emotion while preserving non-emotion components.
We propose an Attention-based Interactive diseNtangling Network (AINN) that leverages instance-wise emotional knowledge for voice conversion.
arXiv Detail & Related papers (2023-12-29T08:06:45Z)
- Textless Speech Emotion Conversion using Decomposed and Discrete Representations [49.55101900501656]
We decompose speech into discrete and disentangled learned representations, consisting of content units, F0, speaker, and emotion.
First, we modify the speech content by translating the content units to a target emotion, and then predict the prosodic features based on these units.
Finally, the speech waveform is generated by feeding the predicted representations into a neural vocoder.
arXiv Detail & Related papers (2021-11-14T18:16:42Z)
- Decoupling Speaker-Independent Emotions for Voice Conversion Via Source-Filter Networks [14.55242023708204]
We propose a novel Source-Filter-based Emotional VC model (SFEVC) to achieve proper filtering of speaker-independent emotion features.
Our SFEVC model consists of multi-channel encoders, emotion separate encoders, and one decoder.
arXiv Detail & Related papers (2021-10-04T03:14:48Z)
- Multimodal Emotion Recognition with High-level Speech and Text Features [8.141157362639182]
We propose a novel cross-representation speech model to perform emotion recognition on wav2vec 2.0 speech features.
We also train a CNN-based model to recognize emotions from text features extracted with Transformer-based models.
Our method is evaluated on the IEMOCAP dataset in a 4-class classification problem.
arXiv Detail & Related papers (2021-09-29T07:08:40Z)
- Limited Data Emotional Voice Conversion Leveraging Text-to-Speech: Two-stage Sequence-to-Sequence Training [91.95855310211176]
Emotional voice conversion aims to change the emotional state of an utterance while preserving the linguistic content and speaker identity.
We propose a novel 2-stage training strategy for sequence-to-sequence emotional voice conversion with a limited amount of emotional speech data.
The proposed framework can perform both spectrum and prosody conversion and achieves significant improvement over the state-of-the-art baselines in both objective and subjective evaluation.
arXiv Detail & Related papers (2021-03-31T04:56:14Z)
- VAW-GAN for Disentanglement and Recomposition of Emotional Elements in Speech [91.92456020841438]
We study the disentanglement and recomposition of emotional elements in speech through a variational autoencoding Wasserstein generative adversarial network (VAW-GAN).
We propose a speaker-dependent EVC framework that includes two VAW-GAN pipelines, one for spectrum conversion, and another for prosody conversion.
Experiments validate the effectiveness of our proposed method in both objective and subjective evaluations.
arXiv Detail & Related papers (2020-11-03T08:49:33Z)
- Seen and Unseen emotional style transfer for voice conversion with a new emotional speech dataset [84.53659233967225]
Emotional voice conversion aims to transform emotional prosody in speech while preserving the linguistic content and speaker identity.
We propose a novel framework based on a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN).
We show that the proposed framework achieves remarkable performance by consistently outperforming the baseline framework.
arXiv Detail & Related papers (2020-10-28T07:16:18Z)
- Converting Anyone's Emotion: Towards Speaker-Independent Emotional Voice Conversion [83.14445041096523]
Emotional voice conversion aims to convert the emotion of speech from one state to another while preserving the linguistic content and speaker identity.
We propose a speaker-independent emotional voice conversion framework that can convert anyone's emotion without the need for parallel data.
Experiments show that the proposed speaker-independent framework achieves competitive results for both seen and unseen speakers.
arXiv Detail & Related papers (2020-05-13T13:36:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.