Decoupling Speaker-Independent Emotions for Voice Conversion Via
Source-Filter Networks
- URL: http://arxiv.org/abs/2110.01164v1
- Date: Mon, 4 Oct 2021 03:14:48 GMT
- Authors: Zhaojie Luo, Shoufeng Lin, Rui Liu, Jun Baba, Yuichiro Yoshikawa and
Hiroshi Ishiguro
- Abstract summary: We propose a novel Source-Filter-based Emotional VC model (SFEVC) to achieve proper filtering of speaker-independent emotion features.
Our SFEVC model consists of multi-channel encoders, emotion-separating encoders, and one decoder.
- Score: 14.55242023708204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emotional voice conversion (VC) aims to convert a neutral voice to an
emotional (e.g. happy) one while retaining the linguistic information and
speaker identity. We note that decoupling emotional features from other
speech information (such as speaker and content) is the key to achieving
remarkable performance. Recent attempts at speech representation decoupling
work well on neutral speech but fail on emotional speech, owing to the more
complex acoustic properties of the latter. To address this problem, we
propose a novel Source-Filter-based Emotional VC model (SFEVC) that properly
filters speaker-independent emotion features from both the timbre and pitch
features. Our SFEVC model consists of multi-channel encoders,
emotion-separating encoders, and one decoder, where all encoder modules
adopt a specially designed information-bottleneck auto-encoder. Additionally,
to further improve conversion quality across various emotions, we propose a
novel two-stage training strategy based on the 2D Valence-Arousal (VA)
space. Experimental results show that SFEVC with the two-stage training
strategy outperforms all baselines and achieves state-of-the-art
performance in speaker-independent emotional VC with nonparallel data.
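The information-bottleneck idea behind the encoder modules can be illustrated with a minimal sketch: a narrow intermediate code forces the encoder to discard most of the input detail (e.g. speaker-dependent variation) while keeping just enough to reconstruct the features. All names, dimensions, and the use of random linear weights below are illustrative assumptions, not the authors' actual SFEVC configuration.

```python
# Minimal sketch of a bottleneck auto-encoder over speech features.
# Dimensions are hypothetical (80-bin mel frames, 8-dim code).
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 80       # assumed mel-spectrogram bins per frame
BOTTLENECK_DIM = 8  # narrow code: the "information bottleneck"

# Random linear weights stand in for trained encoder/decoder networks.
W_enc = rng.standard_normal((FEAT_DIM, BOTTLENECK_DIM)) / np.sqrt(FEAT_DIM)
W_dec = rng.standard_normal((BOTTLENECK_DIM, FEAT_DIM)) / np.sqrt(BOTTLENECK_DIM)

def encode(frames: np.ndarray) -> np.ndarray:
    """Project (T, FEAT_DIM) frames down to a (T, BOTTLENECK_DIM) code."""
    return np.tanh(frames @ W_enc)

def decode(code: np.ndarray) -> np.ndarray:
    """Reconstruct (T, FEAT_DIM) frames from the bottleneck code."""
    return code @ W_dec

frames = rng.standard_normal((100, FEAT_DIM))  # 100 frames of features
code = encode(frames)
recon = decode(code)

print(code.shape)   # the code carries 10x fewer numbers than the input
print(recon.shape)  # reconstruction restores the original feature shape
```

In a trained model, the reconstruction loss pushes the network to pack only the information needed for the task through this narrow code; SFEVC applies this principle separately to timbre and pitch streams.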
Related papers
- Attention-based Interactive Disentangling Network for Instance-level
Emotional Voice Conversion [81.1492897350032]
Emotional Voice Conversion aims to manipulate speech according to a given emotion while preserving non-emotion components.
We propose an Attention-based Interactive diseNtangling Network (AINN) that leverages instance-wise emotional knowledge for voice conversion.
arXiv Detail & Related papers (2023-12-29T08:06:45Z)
- Nonparallel Emotional Voice Conversion For Unseen Speaker-Emotion Pairs
Using Dual Domain Adversarial Network & Virtual Domain Pairing [9.354935229153787]
We tackle the problem of converting the emotion of speakers for whom only neutral data are available during training and testing.
We propose a Virtual Domain Pairing (VDP) training strategy, which virtually incorporates the speaker-emotion pairs that are not present in the real data.
We evaluate the proposed method using a Hindi emotional database.
arXiv Detail & Related papers (2023-02-21T09:06:52Z)
- Limited Data Emotional Voice Conversion Leveraging Text-to-Speech:
Two-stage Sequence-to-Sequence Training [91.95855310211176]
Emotional voice conversion aims to change the emotional state of an utterance while preserving the linguistic content and speaker identity.
We propose a novel 2-stage training strategy for sequence-to-sequence emotional voice conversion with a limited amount of emotional speech data.
The proposed framework can perform both spectrum and prosody conversion and achieves significant improvement over the state-of-the-art baselines in both objective and subjective evaluation.
arXiv Detail & Related papers (2021-03-31T04:56:14Z)
- VAW-GAN for Disentanglement and Recomposition of Emotional Elements in
Speech [91.92456020841438]
We study the disentanglement and recomposition of emotional elements in speech through a variational autoencoding Wasserstein generative adversarial network (VAW-GAN).
We propose a speaker-dependent EVC framework that includes two VAW-GAN pipelines, one for spectrum conversion, and another for prosody conversion.
Experiments validate the effectiveness of our proposed method in both objective and subjective evaluations.
arXiv Detail & Related papers (2020-11-03T08:49:33Z)
- Seen and Unseen emotional style transfer for voice conversion with a new
emotional speech dataset [84.53659233967225]
Emotional voice conversion aims to transform emotional prosody in speech while preserving the linguistic content and speaker identity.
We propose a novel framework based on a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN).
We show that the proposed framework achieves remarkable performance by consistently outperforming the baseline framework.
arXiv Detail & Related papers (2020-10-28T07:16:18Z)
- FragmentVC: Any-to-Any Voice Conversion by End-to-End Extracting and
Fusing Fine-Grained Voice Fragments With Attention [66.77490220410249]
We propose FragmentVC, in which the latent phonetic structure of the utterance from the source speaker is obtained from Wav2Vec 2.0.
FragmentVC is able to extract fine-grained voice fragments from the target speaker utterance(s) and fuse them into the desired utterance.
This approach is trained with reconstruction loss only without any disentanglement considerations between content and speaker information.
arXiv Detail & Related papers (2020-10-27T09:21:03Z)
- Converting Anyone's Emotion: Towards Speaker-Independent Emotional Voice
Conversion [83.14445041096523]
Emotional voice conversion aims to convert the emotion of speech from one state to another while preserving the linguistic content and speaker identity.
We propose a speaker-independent emotional voice conversion framework that can convert anyone's emotion without the need for parallel data.
Experiments show that the proposed speaker-independent framework achieves competitive results for both seen and unseen speakers.
arXiv Detail & Related papers (2020-05-13T13:36:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.