In-the-wild Speech Emotion Conversion Using Disentangled Self-Supervised
Representations and Neural Vocoder-based Resynthesis
- URL: http://arxiv.org/abs/2306.01916v1
- Date: Fri, 2 Jun 2023 21:02:51 GMT
- Title: In-the-wild Speech Emotion Conversion Using Disentangled Self-Supervised
Representations and Neural Vocoder-based Resynthesis
- Authors: Navin Raj Prabhu, Nale Lehmann-Willenbrock and Timo Gerkmann
- Abstract summary: We introduce a methodology that uses self-supervised networks to disentangle the lexical, speaker, and emotional content of the utterance.
We then use a HiFiGAN vocoder to resynthesise the disentangled representations to a speech signal of the targeted emotion.
Results reveal that the proposed approach is aptly conditioned on the emotional content of input speech and is capable of synthesising natural-sounding speech for a target emotion.
- Score: 15.16865739526702
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Speech emotion conversion aims to convert the expressed emotion of a spoken
utterance to a target emotion while preserving the lexical information and the
speaker's identity. In this work, we specifically focus on in-the-wild emotion
conversion where parallel data does not exist, and the problem of disentangling
lexical, speaker, and emotion information arises. In this paper, we introduce a
methodology that uses self-supervised networks to disentangle the lexical,
speaker, and emotional content of the utterance, and subsequently uses a
HiFiGAN vocoder to resynthesise the disentangled representations to a speech
signal of the targeted emotion. For better representation and to achieve
emotion intensity control, we specifically focus on the arousal dimension of
continuous representations, as opposed to performing emotion conversion on
categorical representations. We test our methodology on the large in-the-wild
MSP-Podcast dataset. Results reveal that the proposed approach is aptly
conditioned on the emotional content of input speech and is capable of
synthesising natural-sounding speech for a target emotion. Results further
reveal that the methodology better synthesises speech for mid-scale arousal (2
to 6) than for extreme arousal (1 and 7).
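The abstract outlines a pipeline of disentanglement followed by resynthesis: self-supervised encoders separate lexical, speaker, and continuous arousal information, and a HiFiGAN vocoder turns the recombined representations back into speech. The sketch below is a minimal, hypothetical PyTorch rendering of that flow, assuming placeholder encoder and vocoder modules; it is not the authors' code, and every class name here is invented for illustration.

```python
# Minimal sketch of a disentangle-then-resynthesise pipeline (hypothetical
# module names; the placeholders stand in for pretrained self-supervised
# encoders and a HiFiGAN-style vocoder).
import torch
import torch.nn as nn


class PlaceholderEncoder(nn.Module):
    """Stands in for a pretrained self-supervised encoder (lexical or speaker).
    Maps a raw waveform to a frame-level embedding sequence."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Conv1d(1, dim, kernel_size=320, stride=160)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:      # (B, T)
        return self.proj(wav.unsqueeze(1)).transpose(1, 2)      # (B, frames, dim)


class PlaceholderVocoder(nn.Module):
    """Stands in for a HiFiGAN-style vocoder: upsamples the concatenated
    representations back to a waveform."""

    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(in_dim, 64, kernel_size=16, stride=8),
            nn.LeakyReLU(0.1),
            nn.ConvTranspose1d(64, 1, kernel_size=40, stride=20),
            nn.Tanh(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:     # (B, frames, dim)
        return self.net(feats.transpose(1, 2)).squeeze(1)        # (B, T')


class EmotionConverter(nn.Module):
    """Disentangle lexical and speaker content, swap in a target arousal
    value, and resynthesise speech, following the flow in the abstract."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.lexical_enc = PlaceholderEncoder(dim)
        self.speaker_enc = PlaceholderEncoder(dim)
        self.arousal_proj = nn.Linear(1, dim)   # continuous arousal (1-7 scale)
        self.vocoder = PlaceholderVocoder(3 * dim)

    def forward(self, wav: torch.Tensor, target_arousal: torch.Tensor) -> torch.Tensor:
        lex = self.lexical_enc(wav)                        # what is said
        spk = self.speaker_enc(wav).mean(1, keepdim=True)  # who says it (utterance-level)
        aro = self.arousal_proj(target_arousal).unsqueeze(1)  # how it should sound
        frames = lex.size(1)
        feats = torch.cat(
            [lex, spk.expand(-1, frames, -1), aro.expand(-1, frames, -1)], dim=-1
        )
        return self.vocoder(feats)


if __name__ == "__main__":
    wav = torch.randn(1, 16000)        # 1 s of 16 kHz audio
    target = torch.tensor([[6.0]])     # raise arousal towards the active end
    out = EmotionConverter()(wav, target)
    print(out.shape)
```

In this reading, the lexical embedding varies per frame while speaker and arousal act as utterance-level conditioning broadcast over time, which is one common way to keep the converted emotion from disturbing content and identity.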
Related papers
- Attention-based Interactive Disentangling Network for Instance-level
Emotional Voice Conversion [81.1492897350032]
Emotional Voice Conversion aims to manipulate speech according to a given emotion while preserving non-emotion components.
We propose an Attention-based Interactive diseNtangling Network (AINN) that leverages instance-wise emotional knowledge for voice conversion.
arXiv Detail & Related papers (2023-12-29T08:06:45Z) - Emotion Rendering for Conversational Speech Synthesis with Heterogeneous
Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z) - AffectEcho: Speaker Independent and Language-Agnostic Emotion and Affect
Transfer for Speech Synthesis [13.918119853846838]
Affect is an emotional characteristic encompassing valence, arousal, and intensity, and is a crucial attribute for enabling authentic conversations.
We propose AffectEcho, an emotion translation model that uses a Vector Quantized codebook to model emotions within a quantized space.
We demonstrate the effectiveness of our approach in controlling the emotions of generated speech while preserving identity, style, and emotional cadence unique to each speaker.
arXiv Detail & Related papers (2023-08-16T06:28:29Z) - Emotion Intensity and its Control for Emotional Voice Conversion [77.05097999561298]
- Emotion Intensity and its Control for Emotional Voice Conversion [77.05097999561298]
Emotional voice conversion (EVC) seeks to convert the emotional state of an utterance while preserving the linguistic content and speaker identity.
In this paper, we aim to explicitly characterize and control the intensity of emotion.
We propose to disentangle the speaker style from linguistic content and encode the speaker style into a style embedding in a continuous space that forms the prototype of emotion embedding.
arXiv Detail & Related papers (2022-01-10T02:11:25Z) - Textless Speech Emotion Conversion using Decomposed and Discrete
- Textless Speech Emotion Conversion using Decomposed and Discrete
Representations [49.55101900501656]
We decompose speech into discrete and disentangled learned representations, consisting of content units, F0, speaker, and emotion.
First, we modify the speech content by translating the content units to a target emotion, and then predict the prosodic features based on these units.
Finally, the speech waveform is generated by feeding the predicted representations into a neural vocoder.
arXiv Detail & Related papers (2021-11-14T18:16:42Z) - Seen and Unseen emotional style transfer for voice conversion with a new
- Seen and Unseen emotional style transfer for voice conversion with a new
emotional speech dataset [84.53659233967225]
Emotional voice conversion aims to transform emotional prosody in speech while preserving the linguistic content and speaker identity.
We propose a novel framework based on a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN).
We show that the proposed framework achieves remarkable performance by consistently outperforming the baseline framework.
arXiv Detail & Related papers (2020-10-28T07:16:18Z) - Converting Anyone's Emotion: Towards Speaker-Independent Emotional Voice
Conversion [83.14445041096523]
Emotional voice conversion aims to convert the emotion of speech from one state to another while preserving the linguistic content and speaker identity.
We propose a speaker-independent emotional voice conversion framework that can convert anyone's emotion without the need for parallel data.
Experiments show that the proposed speaker-independent framework achieves competitive results for both seen and unseen speakers.
arXiv Detail & Related papers (2020-05-13T13:36:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.