Emotional Voice Conversion: Theory, Databases and ESD
- URL: http://arxiv.org/abs/2105.14762v1
- Date: Mon, 31 May 2021 07:48:56 GMT
- Title: Emotional Voice Conversion: Theory, Databases and ESD
- Authors: Kun Zhou, Berrak Sisman, Rui Liu, Haizhou Li
- Abstract summary: We motivate the development of a novel emotional speech database (ESD).
The ESD database consists of 350 parallel utterances spoken by 10 native English and 10 native Chinese speakers.
The database is suitable for multi-speaker and cross-lingual emotional voice conversion studies.
- Score: 84.62083515557886
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we first provide a review of the state-of-the-art emotional
voice conversion research, and the existing emotional speech databases. We then
motivate the development of a novel emotional speech database (ESD) that
addresses the increasing research need. With this paper, the ESD database is
now made available to the research community. The ESD database consists of 350
parallel utterances spoken by 10 native English and 10 native Chinese speakers
and covers 5 emotion categories (neutral, happy, angry, sad and surprise). More
than 29 hours of speech data were recorded in a controlled acoustic
environment. The database is suitable for multi-speaker and cross-lingual
emotional voice conversion studies. As case studies, we implement several
state-of-the-art emotional voice conversion systems on the ESD database. This
paper provides a reference study on ESD in conjunction with its release.
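As a rough illustration of how the released recordings could be indexed for the multi-speaker and cross-lingual studies mentioned above, the sketch below groups utterances by speaker, language, and emotion. The directory layout, file naming, and speaker-to-language convention are assumptions made for illustration, not a specification from the paper.

```python
from collections import defaultdict
from pathlib import Path

# Assumed layout (illustrative only): ESD_ROOT/<speaker_id>/<Emotion>/<utterance>.wav,
# with numeric speaker folders and the five emotion categories listed in the abstract.
ESD_ROOT = Path("ESD")
EMOTIONS = {"Neutral", "Happy", "Angry", "Sad", "Surprise"}

def index_esd(root: Path) -> dict:
    """Group wav paths by (speaker, language, emotion) for multi-speaker or cross-lingual splits."""
    index = defaultdict(list)
    for wav in sorted(root.glob("*/*/*.wav")):
        speaker, emotion = wav.parts[-3], wav.parts[-2]
        if emotion not in EMOTIONS:
            continue
        # Assumed convention: the first ten speakers are Chinese, the next ten English.
        language = "zh" if int(speaker) <= 10 else "en"
        index[(speaker, language, emotion)].append(wav)
    return index

if __name__ == "__main__":
    idx = index_esd(ESD_ROOT)
    print(f"{len(idx)} (speaker, language, emotion) groups indexed")
```

Holding out entire speakers, or one of the two languages, from such an index is then enough to define multi-speaker or cross-lingual conversion experiments.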
Related papers
- nEMO: Dataset of Emotional Speech in Polish [0.0]
nEMO is a novel corpus of emotional speech in Polish.
The dataset comprises over 3 hours of samples recorded with the participation of nine actors portraying six emotional states.
The text material used was carefully selected to represent the phonetics of the Polish language adequately.
arXiv Detail & Related papers (2024-04-09T13:18:52Z)
- EMOVOME Database: Advancing Emotion Recognition in Speech Beyond Staged Scenarios [2.1455880234227624]
We released the Emotional Voice Messages (EMOVOME) database, including 999 voice messages from real conversations of 100 Spanish speakers on a messaging app.
We evaluated speaker-independent Speech Emotion Recognition (SER) models using a standard set of acoustic features and transformer-based models.
Results on EMOVOME varied with the annotator labels, showing better performance and fairness when expert and non-expert annotations were combined.
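"Speaker-independent" evaluation means no speaker contributes to both training and test data. Below is a minimal sketch of such a protocol using scikit-learn's GroupKFold; the features, labels, classifier, and sizes are placeholders, not the EMOVOME features or the transformer models used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import GroupKFold

# Placeholder data standing in for acoustic features, emotion labels, and speaker IDs.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 88))            # feature vectors (dimensionality assumed)
y = rng.integers(0, 3, size=500)          # emotion labels (placeholder classes)
speakers = rng.integers(0, 50, size=500)  # speaker ID per utterance

scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=speakers):
    # GroupKFold keeps each speaker entirely in either the training or the test fold.
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(balanced_accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"speaker-independent balanced accuracy: {np.mean(scores):.3f}")
```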
arXiv Detail & Related papers (2024-03-04T16:13:39Z)
- Speech and Text-Based Emotion Recognizer [0.9168634432094885]
We build a balanced corpus from publicly available datasets for speech emotion recognition.
Our best system, a multi-modal speech- and text-based model, achieves a combined UA (Unweighted Accuracy) + WA (Weighted Accuracy) score of 157.57, compared with 119.66 for the baseline algorithm.
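UA and WA are common speech emotion recognition metrics: unweighted accuracy is the mean of per-class recalls, while weighted accuracy is plain sample-level accuracy, so the reported sum can reach 200 at most. A small sketch of how that sum is formed (the labels below are placeholders, not the paper's data):

```python
import numpy as np

def unweighted_accuracy(y_true, y_pred):
    """Mean of per-class recalls (UA), insensitive to class imbalance."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

def weighted_accuracy(y_true, y_pred):
    """Plain sample-level accuracy (WA)."""
    return float(np.mean(y_true == y_pred))

# Placeholder predictions, only to show how the reported UA + WA sum is computed.
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 3])
y_pred = np.array([0, 1, 1, 1, 1, 2, 2, 0, 2, 3])
ua, wa = unweighted_accuracy(y_true, y_pred), weighted_accuracy(y_true, y_pred)
print(f"UA={100*ua:.2f}  WA={100*wa:.2f}  UA+WA={100*(ua+wa):.2f}")
```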
arXiv Detail & Related papers (2023-12-10T05:17:39Z)
- EMOVIE: A Mandarin Emotion Speech Dataset with a Simple Emotional Text-to-Speech Model [56.75775793011719]
We introduce and publicly release a Mandarin emotional speech dataset of 9,724 samples with audio files and human-labeled emotion annotations.
Unlike models that need additional reference audio as input, our model can predict emotion labels directly from the input text and generate more expressive speech conditioned on the emotion embedding.
In the experiments, we first validate the effectiveness of the dataset with an emotion classification task, then train our model on the proposed dataset and conduct a series of subjective evaluations.
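To make the idea concrete, here is a minimal sketch of predicting an emotion label from pooled text features and looking up an emotion embedding that a decoder could be conditioned on; the module, dimensions, and class count are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class TextEmotionConditioner(nn.Module):
    """Predicts an emotion label from pooled text features and returns an emotion
    embedding that a TTS decoder could be conditioned on (sizes are illustrative)."""

    def __init__(self, text_dim: int = 256, num_emotions: int = 5, emo_dim: int = 64):
        super().__init__()
        self.classifier = nn.Linear(text_dim, num_emotions)   # emotion prediction from text
        self.embedding = nn.Embedding(num_emotions, emo_dim)   # table of emotion embeddings

    def forward(self, text_feats: torch.Tensor):
        logits = self.classifier(text_feats)        # (batch, num_emotions)
        emotion_id = logits.argmax(dim=-1)          # predicted emotion label per utterance
        return logits, self.embedding(emotion_id)   # embedding is fed to the decoder

model = TextEmotionConditioner()
logits, emo_emb = model(torch.randn(2, 256))
print(logits.shape, emo_emb.shape)  # torch.Size([2, 5]) torch.Size([2, 64])
```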
arXiv Detail & Related papers (2021-06-17T08:34:21Z)
- Reinforcement Learning for Emotional Text-to-Speech Synthesis with Improved Emotion Discriminability [82.39099867188547]
Emotional text-to-speech synthesis (ETTS) has seen much progress in recent years.
We propose a new interactive training paradigm for ETTS, denoted as i-ETTS.
We formulate an iterative training strategy with reinforcement learning to ensure the quality of i-ETTS optimization.
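The summary does not spell out the objective, but a common way to realize such a reinforcement learning loop is to treat emotion recognizability, as scored by a pre-trained classifier, as a reward in a REINFORCE-style update. The sketch below shows only that generic pattern with toy stand-in modules, not a reproduction of i-ETTS.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: a "synthesizer" emitting one categorical acoustic decision per input,
# and a frozen "emotion recognizer" scoring how recognizable the target emotion is.
synth = nn.Linear(16, 8)
emotion_clf = nn.Linear(8, 5)
for p in emotion_clf.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(synth.parameters(), lr=1e-3)
target_emotion = 2  # placeholder index for the desired emotion category

for step in range(100):
    cond = torch.randn(1, 16)                                   # conditioning input
    dist = torch.distributions.Categorical(logits=synth(cond))
    action = dist.sample()                                       # stochastic synthesis decision
    fake_acoustics = F.one_hot(action, num_classes=8).float()    # stand-in for generated features
    with torch.no_grad():
        reward = emotion_clf(fake_acoustics).softmax(-1)[0, target_emotion]
    loss = -(dist.log_prob(action) * reward).mean()  # REINFORCE: reward recognizable emotion
    opt.zero_grad()
    loss.backward()
    opt.step()
```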
arXiv Detail & Related papers (2021-04-03T13:52:47Z)
- Limited Data Emotional Voice Conversion Leveraging Text-to-Speech: Two-stage Sequence-to-Sequence Training [91.95855310211176]
Emotional voice conversion aims to change the emotional state of an utterance while preserving the linguistic content and speaker identity.
We propose a novel two-stage training strategy for sequence-to-sequence emotional voice conversion with a limited amount of emotional speech data.
The proposed framework can perform both spectrum and prosody conversion and achieves significant improvement over the state-of-the-art baselines in both objective and subjective evaluation.
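As a generic sketch of a two-stage strategy of this kind (stage 1: learn the sequence-to-sequence mapping from a larger corpus; stage 2: fine-tune on the small emotional set with a lower learning rate), with placeholder data and a stand-in model rather than the paper's actual recipe:

```python
import torch
import torch.nn.functional as F

def train(model, loader, epochs, lr):
    """Generic training loop; the loss and batch format are placeholders."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for src, tgt in loader:
            loss = F.l1_loss(model(src), tgt)  # e.g. spectrogram reconstruction
            opt.zero_grad()
            loss.backward()
            opt.step()

# Stand-in model and placeholder data; a real system would use a seq2seq conversion model,
# a large neutral/TTS-style corpus for stage 1, and the small emotional set for stage 2.
model = torch.nn.Linear(80, 80)
neutral_loader = [(torch.randn(4, 80), torch.randn(4, 80)) for _ in range(8)]
emotional_loader = [(torch.randn(4, 80), torch.randn(4, 80)) for _ in range(2)]

train(model, neutral_loader, epochs=3, lr=1e-3)    # stage 1: learn the general mapping
train(model, emotional_loader, epochs=3, lr=1e-4)  # stage 2: fine-tune on limited emotional data
```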
arXiv Detail & Related papers (2021-03-31T04:56:14Z)
- Seen and Unseen emotional style transfer for voice conversion with a new emotional speech dataset [84.53659233967225]
Emotional voice conversion aims to transform emotional prosody in speech while preserving the linguistic content and speaker identity.
We propose a novel framework based on a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN).
We show that the proposed framework achieves remarkable performance by consistently outperforming the baseline framework.
arXiv Detail & Related papers (2020-10-28T07:16:18Z)
- Converting Anyone's Emotion: Towards Speaker-Independent Emotional Voice Conversion [83.14445041096523]
Emotional voice conversion aims to convert the emotion of speech from one state to another while preserving the linguistic content and speaker identity.
We propose a speaker-independent emotional voice conversion framework, that can convert anyone's emotion without the need for parallel data.
Experiments show that the proposed speaker-independent framework achieves competitive results for both seen and unseen speakers.
arXiv Detail & Related papers (2020-05-13T13:36:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.