Improving Device Directedness Classification of Utterances with Semantic Lexical Features
- URL: http://arxiv.org/abs/2010.01949v1
- Date: Tue, 29 Sep 2020 20:13:58 GMT
- Title: Improving Device Directedness Classification of Utterances with Semantic Lexical Features
- Authors: Kellen Gillespie, Ioannis C. Konstantakopoulos, Xingzhi Guo, Vishal Thanvantri Vasudevan, Abhinav Sethy
- Abstract summary: We propose a directedness classifier that combines semantic lexical features with a lightweight acoustic feature.
The mixed-domain lexical and acoustic feature model achieves a 14% relative reduction in EER over a state-of-the-art acoustic-only baseline model.
- Score: 9.43023411463796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User interactions with personal assistants like Alexa, Google Home and Siri
are typically initiated by a wake term or wakeword. Several personal assistants
feature "follow-up" modes that allow users to make additional interactions
without the need for a wakeword. For the system to respond only when
appropriate, and to ignore speech not intended for it, utterances must be
classified as device-directed or non-device-directed. State-of-the-art systems
have largely used acoustic features for this task, while others have used only
lexical features or have added LM-based lexical features. We propose a
directedness classifier that combines semantic lexical features with a
lightweight acoustic feature and show it is effective in classifying
directedness. The mixed-domain lexical and acoustic feature model achieves a
14% relative reduction in equal error rate (EER) over a state-of-the-art acoustic-only
baseline model. Finally, we successfully apply transfer learning and
semi-supervised learning to the model to improve accuracy even further.
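For intuition, here is a minimal sketch of the high-level idea: fuse an utterance-level semantic lexical embedding with a single lightweight acoustic feature ahead of a binary directedness classifier. This is not the authors' architecture; every dimension, feature choice, and name below is hypothetical.

```python
# Hypothetical mixed-domain directedness classifier: concatenate a
# semantic lexical embedding of the ASR hypothesis with a lightweight
# per-utterance acoustic feature, then classify device-directed vs. not.
# Feature choices and dimensions are illustrative, not the paper's.
import torch
import torch.nn as nn

class DirectednessClassifier(nn.Module):
    def __init__(self, lex_dim=256, acoustic_dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(lex_dim + acoustic_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # logit: device-directed vs. not
        )

    def forward(self, lexical_emb, acoustic_feat):
        return self.net(torch.cat([lexical_emb, acoustic_feat], dim=-1))

model = DirectednessClassifier()
lexical_emb = torch.randn(4, 256)   # e.g., sentence embedding of the 1-best ASR text
acoustic_feat = torch.randn(4, 1)   # e.g., one confidence-like acoustic score
logits = model(lexical_emb, acoustic_feat)
print(torch.sigmoid(logits).squeeze(-1))  # P(device-directed)
```

EER (equal error rate) is the operating point where the false-accept and false-reject rates coincide, so the reported 14% relative reduction means the fused model's crossover error is 14% lower than the acoustic-only baseline's.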
Related papers
- MSA-ASR: Efficient Multilingual Speaker Attribution with frozen ASR Models [59.80042864360884]
Speaker-attributed automatic speech recognition (SA-ASR) aims to transcribe speech while accurately assigning transcripts to the corresponding speakers.
This paper introduces a novel approach, leveraging a frozen multilingual ASR model to incorporate speaker attribution into the transcriptions.
arXiv Detail & Related papers (2024-11-27T09:01:08Z)
- Explaining Deep Learning Embeddings for Speech Emotion Recognition by Predicting Interpretable Acoustic Features [5.678610585849838]
Pre-trained deep learning embeddings have consistently shown superior performance over handcrafted acoustic features in speech emotion recognition.
Unlike acoustic features with clear physical meaning, these embeddings lack clear interpretability.
This paper proposes a modified probing approach to explain deep learning embeddings in the speech emotion space.
arXiv Detail & Related papers (2024-09-14T19:18:56Z)
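A rough illustration of the probing idea in the entry above: fit a simple regression from frozen embeddings to an interpretable acoustic feature and read off how much of it the embedding encodes. The target feature (mean F0), dimensions, and data below are synthetic stand-ins, not the paper's setup.

```python
# Hypothetical linear probe: predict an interpretable acoustic feature
# (e.g., mean F0) from frozen speech-emotion embeddings. Dimensions and
# data here are synthetic stand-ins, not the paper's actual setup.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 256))               # frozen deep embeddings
mean_f0 = rng.normal(loc=150.0, scale=30.0, size=1000)  # interpretable target

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, mean_f0, random_state=0)
probe = Ridge(alpha=1.0).fit(X_tr, y_tr)
# A high R^2 suggests the embedding linearly encodes this acoustic feature.
print("probe R^2:", probe.score(X_te, y_te))
```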
- Zero-shot text-to-speech synthesis conditioned using self-supervised speech representation model [13.572330725278066]
The key novelty of the proposed method is the direct use of the SSL model to obtain embedding vectors from speech representations trained on a large amount of data.
The disentangled embeddings will enable us to achieve better reproduction performance for unseen speakers and rhythm transfer conditioned by different speeches.
arXiv Detail & Related papers (2023-04-24T10:15:58Z)
- LipLearner: Customizable Silent Speech Interactions on Mobile Devices [15.445920726854595]
We leverage contrastive learning to learn efficient lipreading representations, enabling few-shot command customization with minimal user effort.
Our model exhibits high robustness to different lighting, posture, and gesture conditions on an in-the-wild dataset.
A user study demonstrated that with LipLearner, users could define their own commands with high reliability guaranteed by an online incremental learning scheme.
arXiv Detail & Related papers (2023-02-12T13:10:57Z)
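As a generic sketch of the contrastive-learning ingredient mentioned in the entry above (not LipLearner's actual objective), an NT-Xent-style loss pulls two views of the same command together and pushes other commands apart; shapes and the temperature are illustrative.

```python
# Minimal NT-Xent-style contrastive loss over paired lip-reading clips.
# This is a generic contrastive-representation sketch, not LipLearner's
# exact objective; embedding sizes and temperature are illustrative.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two views of the same N commands."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # (2N, 2N) similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    n = z1.size(0)
    # The positive for sample i is its other view at index (i + n) mod 2n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(z1, z2))
```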
- Self supervised learning for robust voice cloning [3.7989740031754806]
We use features learned in a self-supervised framework to produce high quality speech representations.
The learned features are used as pre-trained utterance-level embeddings and as inputs to a Non-Attentive Tacotron based architecture.
This method enables us to train our model in an unlabeled multispeaker dataset as well as use unseen speaker embeddings to copy a speaker's voice.
arXiv Detail & Related papers (2022-04-07T13:05:24Z)
- Exploiting Cross Domain Acoustic-to-articulatory Inverted Features For Disordered Speech Recognition [57.15942628305797]
Articulatory features are invariant to acoustic signal distortion and have been successfully incorporated into automatic speech recognition systems for normal speech.
This paper presents a cross-domain acoustic-to-articulatory (A2A) inversion approach that utilizes the parallel acoustic-articulatory data of the 15-hour TORGO corpus in model training.
The inversion model is then cross-domain adapted to the 102.7-hour UASpeech corpus to produce articulatory features.
arXiv Detail & Related papers (2022-03-19T08:47:18Z)
- Speaker Adaption with Intuitive Prosodic Features for Statistical Parametric Speech Synthesis [50.5027550591763]
We propose a method of speaker adaption with intuitive prosodic features for statistical parametric speech synthesis.
The intuitive prosodic features are extracted at utterance-level or speaker-level, and are further integrated into the existing speaker-encoding-based and speaker-embedding-based adaptation frameworks respectively.
arXiv Detail & Related papers (2022-03-02T09:00:31Z)
- Preliminary study on using vector quantization latent spaces for TTS/VC systems with consistent performance [55.10864476206503]
We investigate the use of quantized vectors to model the latent linguistic embedding.
By enforcing different policies over the latent spaces during training, we are able to obtain a latent linguistic embedding.
Our experiments show that the voice cloning system built with vector quantization has only a small degradation in terms of perceptive evaluations.
arXiv Detail & Related papers (2021-06-25T07:51:35Z)
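A toy sketch of the vector-quantization step described in the entry above: each latent frame snaps to its nearest codebook entry. Codebook size and dimensions are illustrative, and the paper's training policies over the latent space are omitted.

```python
# Toy vector-quantization step: map each latent frame to its nearest
# codebook entry. Codebook size and dimensions are illustrative; the
# paper's training policies (commitment losses, etc.) are omitted.
import torch

def quantize(latents, codebook):
    """latents: (T, D) encoder frames; codebook: (K, D) learned code vectors."""
    dists = torch.cdist(latents, codebook)  # (T, K) pairwise distances
    indices = dists.argmin(dim=1)           # nearest code per frame
    return codebook[indices], indices

codebook = torch.randn(64, 32)              # K=64 codes, D=32
latents = torch.randn(100, 32)              # 100 encoder frames
quantized, ids = quantize(latents, codebook)
print(quantized.shape, ids[:10])
```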
- Leveraging Acoustic and Linguistic Embeddings from Pretrained speech and language Models for Intent Classification [81.80311855996584]
We propose a novel intent classification framework that employs acoustic features extracted from a pretrained speech recognition system and linguistic features learned from a pretrained language model.
We achieve 90.86% and 99.07% accuracy on ATIS and Fluent speech corpus, respectively.
arXiv Detail & Related papers (2021-02-15T07:20:06Z)
- Audio Impairment Recognition Using a Correlation-Based Feature Representation [85.08880949780894]
We propose a new representation of hand-crafted features that is based on the correlation of feature pairs.
We show superior performance in terms of compact feature dimensionality and improved computational speed in the test stage.
arXiv Detail & Related papers (2020-03-22T13:34:37Z)
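The correlation-based representation in the entry above can be sketched as follows: compute the pairwise correlations of hand-crafted feature trajectories over a window and keep the upper triangle as a compact fixed-size vector. The feature count and window length are made up for illustration.

```python
# Sketch of a correlation-based representation: replace a window of
# hand-crafted features with the pairwise correlations of feature pairs.
# Feature count and window size are made up for illustration.
import numpy as np

def correlation_features(frames):
    """frames: (T, F) hand-crafted features over T frames.
    Returns the upper triangle of the F x F correlation matrix,
    a compact fixed-size vector of F*(F-1)/2 values."""
    corr = np.corrcoef(frames, rowvar=False)  # (F, F) correlation matrix
    iu = np.triu_indices_from(corr, k=1)      # skip the diagonal
    return corr[iu]

frames = np.random.default_rng(1).normal(size=(200, 12))  # e.g., 12 features
vec = correlation_features(frames)
print(vec.shape)  # (66,) = 12*11/2
```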
- Speech Enhancement using Self-Adaptation and Multi-Head Self-Attention [70.82604384963679]
This paper investigates a self-adaptation method for speech enhancement using auxiliary speaker-aware features.
We extract a speaker representation used for adaptation directly from the test utterance.
arXiv Detail & Related papers (2020-02-14T05:05:36Z)
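A minimal sketch of the self-adaptation idea in the last entry: pool a speaker representation from the test utterance itself and feed it back into the enhancement network as an auxiliary input. The pooling, dimensions, and network below are placeholders, not the paper's model.

```python
# Sketch of self-adaptation for enhancement: derive a speaker embedding
# from the test utterance itself and condition the enhancement network
# on it. The pooling, dimensions, and layers here are placeholders.
import torch
import torch.nn as nn

class TinyEnhancer(nn.Module):
    def __init__(self, feat_dim=80, spk_dim=32):
        super().__init__()
        self.spk_proj = nn.Linear(feat_dim, spk_dim)
        self.net = nn.Sequential(nn.Linear(feat_dim + spk_dim, 128),
                                 nn.ReLU(), nn.Linear(128, feat_dim))

    def forward(self, noisy):                   # noisy: (T, feat_dim) spectrogram
        spk = self.spk_proj(noisy.mean(dim=0))  # mean-pool the test utterance
        spk = spk.expand(noisy.size(0), -1)     # broadcast over frames
        return self.net(torch.cat([noisy, spk], dim=1))

enhanced = TinyEnhancer()(torch.randn(120, 80))
print(enhanced.shape)  # (120, 80) enhanced features
```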
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.