Attention-based Interactive Disentangling Network for Instance-level
Emotional Voice Conversion
- URL: http://arxiv.org/abs/2312.17508v1
- Date: Fri, 29 Dec 2023 08:06:45 GMT
- Title: Attention-based Interactive Disentangling Network for Instance-level
Emotional Voice Conversion
- Authors: Yun Chen, Lingxiao Yang, Qi Chen, Jian-Huang Lai, Xiaohua Xie
- Abstract summary: Emotional Voice Conversion aims to manipulate the emotional expression of an utterance according to a given target emotion while preserving its non-emotion components.
We propose an Attention-based Interactive diseNtangling Network (AINN) that leverages instance-wise emotional knowledge for voice conversion.
- Score: 81.1492897350032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emotional Voice Conversion aims to manipulate the emotional expression
of an utterance according to a given target emotion while preserving its
non-emotion components. Existing approaches cannot adequately express
fine-grained emotional attributes. In this paper, we propose an
Attention-based Interactive diseNtangling Network (AINN) that leverages
instance-wise emotional knowledge for voice conversion. We introduce a
two-stage pipeline to effectively train our network: Stage I utilizes
inter-speech contrastive learning to model fine-grained emotion and
intra-speech disentanglement learning to better separate emotion and content.
In Stage II, we propose to regularize the conversion with a multi-view
consistency mechanism. This technique helps us transfer fine-grained emotion
and maintain speech content. Extensive experiments show that our AINN
outperforms state-of-the-art methods in both objective and subjective metrics.
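The Stage I objective described in the abstract can be sketched concretely. Below is a minimal, hypothetical illustration of an inter-speech contrastive loss over per-utterance emotion embeddings: utterances with the same emotion label are treated as positives, all others as negatives. All names and shapes are assumptions for illustration; the paper's attention-based interaction and intra-speech disentanglement modules are not reproduced here.

```python
# Hypothetical sketch of an inter-speech contrastive (InfoNCE-style) loss
# over per-utterance emotion embeddings. Names are illustrative, not AINN's.
import torch
import torch.nn.functional as F

def inter_speech_contrastive_loss(emo_emb: torch.Tensor,
                                  emo_labels: torch.Tensor,
                                  temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss: same-emotion utterances are positives."""
    z = F.normalize(emo_emb, dim=-1)                  # (B, D) unit vectors
    sim = z @ z.t() / temperature                     # (B, B) similarities
    B = z.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=z.device)
    pos = (emo_labels.unsqueeze(0) == emo_labels.unsqueeze(1)) & ~eye
    # log-softmax over all other utterances, averaged over the positives
    logits = sim.masked_fill(eye, float('-inf'))
    log_prob = F.log_softmax(logits, dim=1)
    denom = pos.sum(1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos, 0.0).sum(1) / denom
    return loss[pos.any(1)].mean()                    # skip rows w/o positives

# Toy usage: 8 utterances, 64-dim emotion embeddings, 4 emotion classes
emb = torch.randn(8, 64, requires_grad=True)
labels = torch.randint(0, 4, (8,))
print(inter_speech_contrastive_loss(emb, labels))
```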
Related papers
- Emotion Rendering for Conversational Speech Synthesis with Heterogeneous
Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z)
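As a loose illustration of the heterogeneous graph-based context modeling named above, the sketch below implements a generic relational message-passing layer over conversation turns, with assumed edge types (same speaker vs. other speaker); it is not the paper's actual model.

```python
# Illustrative R-GCN-style layer: utterance nodes exchange context along
# two assumed edge types (same speaker vs. cross speaker). Not the paper's code.
import torch
import torch.nn as nn

class HeteroContextLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.w_same = nn.Linear(dim, dim)    # same-speaker edges
        self.w_cross = nn.Linear(dim, dim)   # cross-speaker edges
        self.w_self = nn.Linear(dim, dim)

    def forward(self, h, adj_same, adj_cross):
        # adj_*: (N, N) row-normalized adjacency for each relation
        return torch.relu(self.w_self(h)
                          + adj_same @ self.w_same(h)
                          + adj_cross @ self.w_cross(h))

# Toy dialogue of 4 turns by speakers [A, B, A, B], 16-dim utterance features
h = torch.randn(4, 16)
spk = torch.tensor([0, 1, 0, 1])
same = (spk[:, None] == spk[None, :]).float() - torch.eye(4)
cross = 1.0 - (spk[:, None] == spk[None, :]).float()
same = same / same.sum(1, keepdim=True).clamp(min=1)
cross = cross / cross.sum(1, keepdim=True).clamp(min=1)
print(HeteroContextLayer(16)(h, same, cross).shape)  # torch.Size([4, 16])
```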
- In-the-wild Speech Emotion Conversion Using Disentangled Self-Supervised Representations and Neural Vocoder-based Resynthesis [15.16865739526702]
We introduce a methodology that uses self-supervised networks to disentangle the lexical, speaker, and emotional content of the utterance.
We then use a HiFiGAN vocoder to resynthesise the disentangled representations to a speech signal of the targeted emotion.
Results reveal that the proposed approach is aptly conditioned on the emotional content of input speech and is capable of synthesising natural-sounding speech for a target emotion.
arXiv Detail & Related papers (2023-06-02T21:02:51Z)
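The disentangle-then-resynthesize recipe above can be sketched as a data-flow toy: three stand-in encoders produce lexical, speaker, and emotion representations, and a placeholder decoder takes the role the HiFiGAN vocoder plays in the paper. All modules below are untrained stand-ins with assumed shapes.

```python
# Placeholder pipeline for disentangled resynthesis. In the paper the encoders
# are pretrained self-supervised models and the decoder a HiFiGAN vocoder;
# here they are random untrained stand-ins, shown only for the data flow.
import torch
import torch.nn as nn

class StubEncoder(nn.Module):
    def __init__(self, out_dim):
        super().__init__()
        self.proj = nn.Conv1d(1, out_dim, kernel_size=320, stride=160)
    def forward(self, wav):                 # (B, T) -> (B, out_dim, T')
        return self.proj(wav.unsqueeze(1))

lexical_enc, speaker_enc, emotion_enc = (StubEncoder(d) for d in (256, 64, 32))
vocoder = nn.Conv1d(256 + 64 + 32, 1, kernel_size=1)   # stand-in for HiFiGAN

wav = torch.randn(2, 16000)                 # 1 s of 16 kHz audio, batch of 2
content = lexical_enc(wav)
speaker = speaker_enc(wav).mean(-1, keepdim=True)   # utterance-level embedding
# swap in the *target* emotion embedding to convert the emotion
emotion_tgt = emotion_enc(torch.randn(2, 16000)).mean(-1, keepdim=True)
T = content.size(-1)
features = torch.cat([content,
                      speaker.expand(-1, -1, T),
                      emotion_tgt.expand(-1, -1, T)], dim=1)
out = vocoder(features)                     # (B, 1, T') low-rate stand-in
print(out.shape)
```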
- Nonparallel Emotional Voice Conversion For Unseen Speaker-Emotion Pairs Using Dual Domain Adversarial Network & Virtual Domain Pairing [9.354935229153787]
We tackle the problem of converting the emotion of speakers for whom only neutral data are available during training and testing.
We propose a Virtual Domain Pairing (VDP) training strategy, which virtually incorporates the speaker-emotion pairs that are not present in the real data.
We evaluate the proposed method using a Hindi emotional database.
arXiv Detail & Related papers (2023-02-21T09:06:52Z)
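The virtual-pairing idea lends itself to a short sketch: speaker embeddings are cross-combined with emotion embeddings so the conversion model is conditioned on (speaker, emotion) pairs that never co-occur in the real data. Names and shapes below are assumptions, not the paper's API.

```python
# Hypothetical Virtual Domain Pairing: cross-combine speaker and emotion
# embeddings to fabricate unseen (speaker, emotion) training conditions.
import torch

torch.manual_seed(0)
num_speakers, num_emotions, dim = 4, 3, 8
spk_table = torch.randn(num_speakers, dim)   # learned speaker embeddings
emo_table = torch.randn(num_emotions, dim)   # learned emotion embeddings

# Real data only pairs each speaker with neutral speech (emotion id 0).
real_pairs = [(s, 0) for s in range(num_speakers)]

# Virtual pairs: every speaker with every non-neutral emotion.
virtual_pairs = [(s, e) for s in range(num_speakers)
                 for e in range(1, num_emotions)]

def condition(pair):
    s, e = pair
    return torch.cat([spk_table[s], emo_table[e]])   # (2*dim,) conditioning

batch = torch.stack([condition(p) for p in real_pairs + virtual_pairs])
print(batch.shape)   # torch.Size([12, 16]) -> fed to the conversion model
```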
- Emotion Intensity and its Control for Emotional Voice Conversion [77.05097999561298]
Emotional voice conversion (EVC) seeks to convert the emotional state of an utterance while preserving the linguistic content and speaker identity.
In this paper, we aim to explicitly characterize and control the intensity of emotion.
We propose to disentangle the speaker style from linguistic content and encode it into a style embedding in a continuous space that forms the prototype of the emotion embedding.
arXiv Detail & Related papers (2022-01-10T02:11:25Z)
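One common way to expose a continuous intensity control, which the toy below assumes, is to interpolate in the style-embedding space between a neutral prototype and an emotion prototype; the scalar alpha then acts as the intensity knob.

```python
# Hypothetical intensity control: linear interpolation in the style-embedding
# space between a neutral prototype and an emotion prototype.
import torch

def intensity_embedding(neutral: torch.Tensor,
                        emotion: torch.Tensor,
                        alpha: float) -> torch.Tensor:
    """alpha = 0 -> neutral, alpha = 1 -> full emotion, alpha > 1 exaggerates."""
    return neutral + alpha * (emotion - neutral)

neutral = torch.zeros(16)        # stand-in neutral style prototype
angry = torch.randn(16)          # stand-in 'angry' emotion prototype
for alpha in (0.25, 0.5, 1.0):
    style = intensity_embedding(neutral, angry, alpha)
    print(alpha, style.norm().item())   # norm grows with intensity here
```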
- Reinforcement Learning for Emotional Text-to-Speech Synthesis with Improved Emotion Discriminability [82.39099867188547]
Emotional text-to-speech synthesis (ETTS) has seen much progress in recent years.
We propose a new interactive training paradigm for ETTS, denoted as i-ETTS.
We formulate an iterative training strategy with reinforcement learning to ensure the quality of i-ETTS optimization.
arXiv Detail & Related papers (2021-04-03T13:52:47Z)
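The interactive training loop can be caricatured as a REINFORCE-style update in which a frozen emotion classifier scores the synthesized output, and its probability for the target emotion serves as the reward. Everything below is an assumed toy with discrete stand-in tokens, not the i-ETTS implementation.

```python
# Toy REINFORCE step: a stochastic synthesizer emits acoustic tokens, an
# emotion classifier scores them, and its target-class probability is the
# reward. Assumed modules, not the i-ETTS implementation.
import torch
import torch.nn as nn

vocab, seq_len, n_emotions, target_emotion = 32, 20, 4, 2
policy_logits = nn.Parameter(torch.zeros(seq_len, vocab))  # toy "synthesizer"
classifier = nn.Sequential(nn.Flatten(), nn.Linear(seq_len * vocab, n_emotions))
opt = torch.optim.Adam([policy_logits], lr=1e-2)

for step in range(3):
    dist = torch.distributions.Categorical(logits=policy_logits)
    tokens = dist.sample()                                  # (seq_len,)
    onehot = nn.functional.one_hot(tokens, vocab).float()
    with torch.no_grad():                                   # frozen reward model
        reward = classifier(onehot.unsqueeze(0)).softmax(-1)[0, target_emotion]
    loss = -(dist.log_prob(tokens).sum() * reward)          # REINFORCE
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"step {step}: reward={reward.item():.3f}")
```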
- Limited Data Emotional Voice Conversion Leveraging Text-to-Speech: Two-stage Sequence-to-Sequence Training [91.95855310211176]
Emotional voice conversion aims to change the emotional state of an utterance while preserving the linguistic content and speaker identity.
We propose a novel 2-stage training strategy for sequence-to-sequence emotional voice conversion with a limited amount of emotional speech data.
The proposed framework can perform both spectrum and prosody conversion and achieves significant improvement over the state-of-the-art baselines in both objective and subjective evaluation.
arXiv Detail & Related papers (2021-03-31T04:56:14Z)
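The limited-data recipe reduces to a pretrain-then-finetune skeleton: stage one trains on a plentiful neutral TTS corpus, stage two adapts part of the model on the small emotional set. The skeleton below uses assumed stand-in modules and shows the control flow only, not the paper's sequence-to-sequence architecture.

```python
# Generic two-stage skeleton: pretrain on plentiful TTS data, then fine-tune
# on scarce emotional data with the text encoder frozen. Assumed names only.
import torch
import torch.nn as nn

model = nn.ModuleDict({
    "text_encoder": nn.Linear(128, 256),   # stand-in for a seq2seq encoder
    "decoder": nn.Linear(256, 80),         # stand-in mel-spectrogram decoder
})

def run_stage(params, data, steps):
    opt = torch.optim.Adam(params, lr=1e-4)
    for _ in range(steps):
        text, mel = data()
        pred = model["decoder"](model["text_encoder"](text))
        loss = nn.functional.l1_loss(pred, mel)
        opt.zero_grad(); loss.backward(); opt.step()

fake_batch = lambda: (torch.randn(16, 128), torch.randn(16, 80))
# Stage 1: large neutral TTS corpus, train everything.
run_stage(model.parameters(), fake_batch, steps=100)
# Stage 2: small emotional set; freeze the text encoder, adapt the decoder.
for p in model["text_encoder"].parameters():
    p.requires_grad_(False)
run_stage(model["decoder"].parameters(), fake_batch, steps=10)
```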
- Seen and Unseen emotional style transfer for voice conversion with a new emotional speech dataset [84.53659233967225]
Emotional voice conversion aims to transform emotional prosody in speech while preserving the linguistic content and speaker identity.
We propose a novel framework based on a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN).
We show that the proposed framework achieves remarkable performance by consistently outperforming the baseline framework.
arXiv Detail & Related papers (2020-10-28T07:16:18Z)
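A VAW-GAN couples a variational auto-encoder with a Wasserstein critic; the toy below sketches the three loss terms (reconstruction, KL divergence, and critic score) on assumed shapes. A real critic would additionally need a Lipschitz constraint such as a gradient penalty.

```python
# Toy VAW-GAN-style objective: VAE reconstruction + KL, plus a Wasserstein
# critic score on generated spectra. Assumed shapes and modules, not the paper's.
import torch
import torch.nn as nn

enc_mu, enc_logvar = nn.Linear(80, 16), nn.Linear(80, 16)
dec = nn.Linear(16 + 8, 80)              # decoder conditioned on emotion code
critic = nn.Linear(80, 1)                # Wasserstein critic (a real one needs
                                         # a Lipschitz constraint, e.g. GP)

x = torch.randn(4, 80)                   # toy mel frames
emo = torch.randn(4, 8)                  # target emotion code
mu, logvar = enc_mu(x), enc_logvar(x)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
x_hat = dec(torch.cat([z, emo], dim=-1))

recon = nn.functional.l1_loss(x_hat, x)
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
w_gen = -critic(x_hat).mean()            # generator wants a high critic score
loss_g = recon + kl + w_gen
print(loss_g.item())
```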