Abstract: Voice cloning is the task of learning to synthesize the voice of an unseen
speaker from a few samples. While current voice cloning methods achieve
promising results in Text-to-Speech (TTS) synthesis for a new voice, these
approaches lack the ability to control the expressiveness of the synthesized audio.
In this work, we propose a controllable voice cloning method that allows
fine-grained control over various style aspects of the synthesized speech for
an unseen speaker. We achieve this by explicitly conditioning the speech
synthesis model on a speaker encoding, a pitch contour, and latent style tokens
during training. Through both quantitative and qualitative evaluations, we show
that our framework can be used for various expressive voice cloning tasks using
only a few transcribed or untranscribed speech samples for a new speaker. These
cloning tasks include style transfer from a reference speech sample, synthesizing
speech directly from text, and fine-grained style control by manipulating the
style conditioning variables during inference.
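To make the conditioning scheme concrete, below is a minimal PyTorch-style sketch of how a synthesis decoder might fuse text features with a speaker embedding, a per-frame pitch contour, and attention weights over a learned bank of latent style tokens. It is illustrative only: the class name, dimensions, and GRU decoder are assumptions for this sketch, not the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

class ConditionedTTSDecoder(nn.Module):
    """Illustrative decoder that fuses text features with style conditioning.

    Hypothetical sketch: names and dimensions are assumptions, not the
    paper's actual model.
    """

    def __init__(self, text_dim=256, speaker_dim=128, pitch_dim=1,
                 num_style_tokens=10, style_dim=128, mel_dim=80):
        super().__init__()
        # Learned bank of latent style tokens, combined via attention weights.
        self.style_tokens = nn.Parameter(torch.randn(num_style_tokens, style_dim))
        # Project the scalar per-frame pitch (F0) value into the style space.
        self.pitch_proj = nn.Linear(pitch_dim, style_dim)
        cond_dim = text_dim + speaker_dim + style_dim + style_dim
        self.decoder = nn.GRU(cond_dim, 512, batch_first=True)
        self.mel_out = nn.Linear(512, mel_dim)

    def forward(self, text_feats, speaker_emb, pitch_contour, style_weights):
        # text_feats:    (B, T, text_dim)  encoder outputs aligned to frames
        # speaker_emb:   (B, speaker_dim)  from a pretrained speaker encoder
        # pitch_contour: (B, T, 1)         per-frame F0 values
        # style_weights: (B, num_style_tokens) attention over style tokens
        B, T, _ = text_feats.shape
        style = style_weights @ self.style_tokens        # (B, style_dim)
        pitch = self.pitch_proj(pitch_contour)           # (B, T, style_dim)
        # Broadcast the global conditioning signals across all frames and
        # concatenate with the frame-level text and pitch features.
        cond = torch.cat([
            text_feats,
            speaker_emb.unsqueeze(1).expand(B, T, -1),
            style.unsqueeze(1).expand(B, T, -1),
            pitch,
        ], dim=-1)
        hidden, _ = self.decoder(cond)
        return self.mel_out(hidden)                      # (B, T, mel_dim)

# Example call with random tensors (batch of 2, 100 frames):
dec = ConditionedTTSDecoder()
mel = dec(torch.randn(2, 100, 256), torch.randn(2, 128),
          torch.randn(2, 100, 1), torch.softmax(torch.randn(2, 10), dim=-1))
print(mel.shape)  # torch.Size([2, 100, 80])
```

Under this framing, the cloning tasks in the abstract correspond to different ways of obtaining the conditioning inputs at inference: extracting pitch and style weights from a reference utterance for style transfer, predicting them from text for direct synthesis, or setting them manually for fine-grained control.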