Closing the Gap between Single-User and Multi-User VoiceFilter-Lite
- URL: http://arxiv.org/abs/2202.12169v1
- Date: Thu, 24 Feb 2022 16:10:16 GMT
- Title: Closing the Gap between Single-User and Multi-User VoiceFilter-Lite
- Authors: Rajeev Rikhye, Quan Wang, Qiao Liang, Yanzhang He, Ian McGraw
- Abstract summary: VoiceFilter-Lite is a speaker-conditioned voice separation model.
It plays a crucial role in improving speech recognition and speaker verification by suppressing overlapping speech from non-target speakers.
In this paper, we devised a series of experiments to improve the multi-user VoiceFilter-Lite model.
We successfully closed the performance gap between multi-user and single-user VoiceFilter-Lite models on single-speaker evaluations.
- Score: 13.593557171761782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: VoiceFilter-Lite is a speaker-conditioned voice separation model that plays a
crucial role in improving speech recognition and speaker verification by
suppressing overlapping speech from non-target speakers. However, one
limitation of VoiceFilter-Lite, and other speaker-conditioned speech models in
general, is that these models are usually limited to a single target speaker.
This is undesirable as most smart home devices now support multiple enrolled
users. In order to extend the benefits of personalization to multiple users, we
previously developed an attention-based speaker selection mechanism and applied
it to VoiceFilter-Lite. However, the original multi-user VoiceFilter-Lite model
suffers from significant performance degradation compared with single-user
models. In this paper, we devised a series of experiments to improve the
multi-user VoiceFilter-Lite model. By incorporating a dual learning rate
schedule and by using feature-wise linear modulation (FiLM) to condition the
model with the attended speaker embedding, we successfully closed the
performance gap between multi-user and single-user VoiceFilter-Lite models on
single-speaker evaluations. At the same time, the new model can also be easily
extended to support any number of users, and significantly outperforms our
previously published model on multi-speaker evaluations.
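The abstract combines two conditioning ideas: an attention mechanism that pools the enrolled users' speaker embeddings into a single attended embedding, and feature-wise linear modulation (FiLM) that injects this embedding into the separation network. Below is a minimal sketch of how such a conditioning path could look. It is not the authors' implementation; the layer sizes, names, and the mean-pooled query are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveFiLMConditioner(nn.Module):
    """Sketch: pool enrolled speaker embeddings with attention, then apply
    FiLM (per-channel scale and shift) to intermediate acoustic features.
    All dimensions and the mean-pooled query are illustrative assumptions."""

    def __init__(self, dvec_dim: int = 256, feat_dim: int = 128):
        super().__init__()
        # Query derived from the (noisy) input features; keys/values are the
        # enrolled users' speaker embeddings (e.g. d-vectors).
        self.query_proj = nn.Linear(feat_dim, dvec_dim)
        # FiLM generator: attended embedding -> per-channel gamma and beta.
        self.film = nn.Linear(dvec_dim, 2 * feat_dim)

    def forward(self, feats: torch.Tensor, dvecs: torch.Tensor) -> torch.Tensor:
        # feats: [batch, time, feat_dim]      acoustic features
        # dvecs: [batch, n_users, dvec_dim]   one embedding per enrolled user
        query = self.query_proj(feats.mean(dim=1))                 # [batch, dvec_dim]
        scores = torch.einsum("bd,bnd->bn", query, dvecs)
        attn = F.softmax(scores / dvecs.shape[-1] ** 0.5, dim=-1)  # attention weights
        attended = torch.einsum("bn,bnd->bd", attn, dvecs)         # single attentive embedding

        gamma, beta = self.film(attended).chunk(2, dim=-1)         # [batch, feat_dim] each
        # FiLM: feature-wise affine modulation, broadcast over the time axis.
        return gamma.unsqueeze(1) * feats + beta.unsqueeze(1)


# Usage with random tensors: 4 enrolled users, 100 frames of 128-dim features.
cond = AttentiveFiLMConditioner()
out = cond(torch.randn(2, 100, 128), torch.randn(2, 4, 256))
print(out.shape)  # torch.Size([2, 100, 128])
```

The dual learning rate schedule mentioned in the abstract is a training-time choice and is not reflected in this sketch.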
Related papers
- SelectTTS: Synthesizing Anyone's Voice via Discrete Unit-Based Frame Selection [7.6732312922460055]
We propose SelectTTS, a novel method to select the appropriate frames from the target speaker and decode using frame-level self-supervised learning (SSL) features.
We show that this approach can effectively capture speaker characteristics for unseen speakers, and achieves comparable results to other multi-speaker text-to-speech frameworks in both objective and subjective metrics.
arXiv Detail & Related papers (2024-08-30T17:34:46Z)
- Many-to-Many Voice Conversion based Feature Disentanglement using Variational Autoencoder [2.4975981795360847]
We propose a new method based on feature disentanglement to tackle many-to-many voice conversion.
The method has the capability to disentangle speaker identity and linguistic content from utterances.
It can convert from many source speakers to many target speakers with a single autoencoder network.
arXiv Detail & Related papers (2021-07-11T13:31:16Z)
- Multi-user VoiceFilter-Lite via Attentive Speaker Embedding [11.321747759474164]
We propose a solution to allow speaker-conditioned speech models to support an arbitrary number of enrolled users in a single pass.
This is achieved by using an attention mechanism on multiple speaker embeddings to compute a single attentive embedding.
With up to four enrolled users, multi-user VoiceFilter-Lite is able to significantly reduce speech recognition and speaker verification errors.
arXiv Detail & Related papers (2021-07-02T17:45:37Z)
- GANSpeech: Adversarial Training for High-Fidelity Multi-Speaker Speech Synthesis [6.632254395574993]
GANSpeech is a high-fidelity multi-speaker TTS system that applies adversarial training to a non-autoregressive multi-speaker TTS model.
In the subjective listening tests, GANSpeech significantly outperformed the baseline multi-speaker FastSpeech and FastSpeech2 models.
arXiv Detail & Related papers (2021-06-29T08:15:30Z)
- Meta-StyleSpeech : Multi-Speaker Adaptive Text-to-Speech Generation [63.561944239071615]
StyleSpeech is a new TTS model which synthesizes high-quality speech and adapts to new speakers.
With SALN (Style-Adaptive Layer Normalization), our model effectively synthesizes speech in the style of the target speaker even from a single audio sample.
We extend it to Meta-StyleSpeech by introducing two discriminators trained with style prototypes, and performing episodic training.
arXiv Detail & Related papers (2021-06-06T15:34:11Z)
- Investigating on Incorporating Pretrained and Learnable Speaker Representations for Multi-Speaker Multi-Style Text-to-Speech [54.75722224061665]
In this work, we investigate different speaker representations and propose to integrate pretrained and learnable speaker representations.
The FastSpeech 2 model combined with both pretrained and learnable speaker representations shows great generalization ability on few-shot speakers.
arXiv Detail & Related papers (2021-03-06T10:14:33Z)
- Voice Cloning: a Multi-Speaker Text-to-Speech Synthesis Approach based on Transfer Learning [0.802904964931021]
The proposed approach aims to overcome these limitations by obtaining a system that can model a multi-speaker acoustic space.
This allows the generation of speech audio similar to the voice of different target speakers, even if they were not observed during the training phase.
arXiv Detail & Related papers (2021-02-10T18:43:56Z)
- VoiceFilter-Lite: Streaming Targeted Voice Separation for On-Device Speech Recognition [60.462770498366524]
We introduce VoiceFilter-Lite, a single-channel source separation model that runs on the device to preserve only the speech signals from a target user.
We show that such a model can be quantized as an 8-bit integer model and run in real time (a generic quantization sketch appears after this list).
arXiv Detail & Related papers (2020-09-09T14:26:56Z)
- Audio ALBERT: A Lite BERT for Self-supervised Learning of Audio Representation [51.37980448183019]
We propose Audio ALBERT, a lite version of the self-supervised speech representation model.
We show that Audio ALBERT is capable of achieving competitive performance with those huge models in the downstream tasks.
In probing experiments, we find that the latent representations encode richer phoneme and speaker information than the last layer does.
arXiv Detail & Related papers (2020-05-18T10:42:44Z)
- Meta-Learning for Short Utterance Speaker Recognition with Imbalance Length Pairs [65.28795726837386]
We introduce a meta-learning framework for imbalance length pairs.
We train it with a support set of long utterances and a query set of short utterances of varying lengths.
By combining these two learning schemes, our model outperforms existing state-of-the-art speaker verification models.
arXiv Detail & Related papers (2020-04-06T17:53:14Z)
- Voice Separation with an Unknown Number of Multiple Speakers [113.91855071999298]
We present a new method for separating a mixed audio sequence, in which multiple voices speak simultaneously.
The new method employs gated neural networks that are trained to separate the voices over multiple processing steps, while keeping the speaker in each output channel fixed.
arXiv Detail & Related papers (2020-02-29T20:02:54Z)
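For the VoiceFilter-Lite entry above, which reports that the model can be quantized as an 8-bit integer model and run in real time, the sketch below shows generic post-training dynamic quantization in PyTorch. It is not the paper's deployment pipeline; the tiny feed-forward model is only a stand-in for the actual LSTM-based separation network.

```python
import torch

# Stand-in model; the real VoiceFilter-Lite network is not reproduced here.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 128),
)

# Post-training dynamic quantization: weights stored as 8-bit integers,
# activations quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    out = quantized(torch.randn(1, 128))
print(out.shape)  # torch.Size([1, 128])
```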
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.