Phoneme Based Neural Transducer for Large Vocabulary Speech Recognition
- URL: http://arxiv.org/abs/2010.16368v4
- Date: Tue, 20 Apr 2021 13:05:56 GMT
- Title: Phoneme Based Neural Transducer for Large Vocabulary Speech Recognition
- Authors: Wei Zhou and Simon Berger and Ralf Schlüter and Hermann Ney
- Abstract summary: We present a simple, novel and competitive approach for phoneme-based neural transducer modeling.
A phonetic context size of one is shown to be sufficient for the best performance.
The overall performance of our best model is comparable to state-of-the-art (SOTA) results for the TED-LIUM Release 2 and Switchboard corpora.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To combine the advantages of classical and end-to-end approaches to speech recognition, we present a simple, novel and competitive approach to phoneme-based neural transducer modeling. Different alignment label topologies are compared, and word-end-based phoneme label augmentation is proposed to improve performance. Exploiting the local dependency of phonemes, we adopt a simplified neural network structure and a straightforward integration with an external word-level language model that preserves the consistency of sequence-to-sequence modeling. We also present a simple, stable and efficient training procedure using frame-wise cross-entropy loss. A phonetic context size of one is shown to be sufficient for the best performance. A simplified scheduled sampling approach is applied for further improvement, and different decoding approaches are briefly compared. The overall performance of our best model is comparable to state-of-the-art (SOTA) results on the TED-LIUM Release 2 and Switchboard corpora.
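A minimal sketch of the word-end-based phoneme label augmentation mentioned above: each word-final phoneme is mapped to a distinct word-end variant when the word sequence is expanded through a pronunciation lexicon. The lexicon, the "#" marker and the function name are invented for illustration; the paper's actual label inventory may differ.

```python
# Hypothetical illustration of word-end-based phoneme label augmentation:
# the last phoneme of each word is replaced by a distinct word-end variant
# (e.g. "T" -> "T#"), so word boundaries are encoded directly in the
# phoneme label sequence used by the transducer.

LEXICON = {  # toy pronunciation lexicon, invented for this sketch
    "the": ["DH", "AH"],
    "cat": ["K", "AE", "T"],
}

def word_end_augmented_phonemes(words):
    """Map a word sequence to phonemes, tagging each word-final phoneme."""
    labels = []
    for word in words:
        phones = LEXICON[word]
        labels.extend(phones[:-1])       # word-internal phonemes unchanged
        labels.append(phones[-1] + "#")  # word-end variant of final phoneme
    return labels

print(word_end_augmented_phonemes(["the", "cat"]))
# ['DH', 'AH#', 'K', 'AE', 'T#']
```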
Related papers
- Improved Contextual Recognition In Automatic Speech Recognition Systems By Semantic Lattice Rescoring
We propose a novel approach for enhancing contextual recognition within ASR systems via semantic lattice processing.
Our solution uses Hidden Markov Model/Gaussian Mixture Model (HMM-GMM) systems along with deep neural network (DNN) models for better accuracy.
We demonstrate the effectiveness of the proposed framework on the LibriSpeech dataset with empirical analyses.
arXiv Detail & Related papers (2023-10-14T23:16:05Z)
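The rescoring idea above can be pictured with a generic n-best (or lattice-path) rescoring step: first-pass hypothesis scores are interpolated with a stronger language model score, and the list is re-ranked. Everything in this sketch (the placeholder LM, the weight, the hypothesis format) is an assumption for illustration, not the paper's implementation.

```python
# Generic n-best rescoring sketch: each first-pass hypothesis carries its
# decoder log-score; a second model re-scores the text, and hypotheses are
# re-ranked under an interpolated score.

def lm_logprob(text: str) -> float:
    """Placeholder LM score: favors shorter hypotheses (swap in a real LM)."""
    return -float(len(text.split()))

def rescore_nbest(nbest, lm_weight=0.5):
    """Re-rank (text, first_pass_logscore) pairs with an interpolated score."""
    rescored = [
        (text, first_pass + lm_weight * lm_logprob(text))
        for text, first_pass in nbest
    ]
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

nbest = [("the cat sat", -12.0), ("the cats at", -12.5)]
print(rescore_nbest(nbest)[0][0])  # 'the cat sat'
```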
- Improving Audio-Visual Speech Recognition by Lip-Subword Correlation Based Visual Pre-training and Cross-Modal Fusion Encoder
We propose two novel techniques to improve audio-visual speech recognition (AVSR) under a pre-training and fine-tuning framework.
First, we explore the correlation between lip shapes and syllable-level subword units in Mandarin to establish good frame-level syllable boundaries from lip shapes.
Next, we propose an audio-guided cross-modal fusion encoder (CMFE) neural network to utilize the main training parameters for multiple cross-modal attention layers.
arXiv Detail & Related papers (2023-08-14T08:19:24Z)
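A single audio-guided cross-modal attention layer, where audio frames query the visual frame sequence, conveys the fusion idea. This PyTorch sketch uses invented dimensions and is a generic illustration, not the CMFE architecture.

```python
# Minimal audio-guided cross-modal attention (illustrative, not the CMFE):
# audio frames attend over visual frames, and the attended visual context
# is fused back into the audio stream through a residual connection.
import torch
import torch.nn as nn

class AudioGuidedFusion(nn.Module):
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, audio, visual):
        # audio: (batch, T_audio, dim), visual: (batch, T_video, dim)
        context, _ = self.attn(query=audio, key=visual, value=visual)
        return self.norm(audio + context)  # residual fusion

fusion = AudioGuidedFusion()
audio = torch.randn(2, 100, 256)    # e.g. 100 audio frames
visual = torch.randn(2, 25, 256)    # e.g. 25 video frames
print(fusion(audio, visual).shape)  # torch.Size([2, 100, 256])
```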
- Scalable Learning of Latent Language Structure With Logical Offline Cycle Consistency
Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text.
As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model.
arXiv Detail & Related papers (2023-05-31T16:47:20Z)
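The self-learning loop described here, where the model annotates unlabeled text and is retrained on its own filtered outputs, can be sketched generically. The model interface, toy task and confidence threshold below are all invented; this is not the LOCCO algorithm.

```python
# Generic self-training skeleton: the current model labels unlabeled text,
# high-confidence pseudo-annotations are kept, and the model is retrained
# on gold plus pseudo-labeled data.

class ToyModel:
    """Stand-in annotator: labels text by length parity (demo only)."""
    def fit(self, data):
        self.num_examples = len(data)
    def predict(self, text):
        label = "even" if len(text) % 2 == 0 else "odd"
        return label, 0.95  # (annotation, confidence)

def self_train(model, gold_data, unlabeled_texts, rounds=3, threshold=0.9):
    for _ in range(rounds):
        pseudo = []
        for text in unlabeled_texts:
            annotation, confidence = model.predict(text)
            if confidence >= threshold:           # keep only reliable labels
                pseudo.append((text, annotation))
        model.fit(gold_data + pseudo)             # retrain on the union
    return model

model = self_train(ToyModel(), [("hi", "even")], ["abc", "abcd"])
print(model.num_examples)  # 3
```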
- Cross-modal Audio-visual Co-learning for Text-independent Speaker Verification
This paper proposes a cross-modal speech co-learning paradigm.
Two cross-modal boosters are introduced based on an audio-visual pseudo-siamese structure to learn the modality-transformed correlation.
Experimental results on the LRSLip3, GridLip, LomGridLip, and VoxLip datasets demonstrate that our proposed method achieves 60% and 20% average relative performance improvements.
arXiv Detail & Related papers (2023-02-22T10:06:37Z)
- LDNet: Unified Listener Dependent Modeling in MOS Prediction for Synthetic Speech
We present LDNet, a unified framework for mean opinion score (MOS) prediction.
We propose two inference methods that provide more stable results and efficient computation.
arXiv Detail & Related papers (2021-10-18T08:52:31Z)
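A listener-dependent MOS predictor scores (utterance, listener) pairs, so inference needs a listener-independent score; one common approach is to average predictions over the known listener identities. The predictor interface below is invented for illustration and is not LDNet's actual inference method.

```python
# Averaging a listener-conditioned MOS predictor over all known listener
# IDs to obtain one listener-independent score (illustrative only).

def average_over_listeners(predict_mos, utterance, listener_ids):
    """predict_mos(utterance, listener_id) -> float; returns the mean."""
    scores = [predict_mos(utterance, lid) for lid in listener_ids]
    return sum(scores) / len(scores)

# Toy stand-in predictor: each listener has a fixed bias around a base MOS.
biases = {0: -0.2, 1: 0.0, 2: 0.3}
predict = lambda utt, lid: 3.5 + biases[lid]
print(average_over_listeners(predict, "utt_001", list(biases)))  # ~3.53
```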
- Any-to-Many Voice Conversion with Location-Relative Sequence-to-Sequence Modeling
This paper proposes an any-to-many location-relative, sequence-to-sequence (seq2seq), non-parallel voice conversion approach.
In this approach, we combine a bottleneck feature extractor (BNE) with a seq2seq synthesis module.
Objective and subjective evaluations show that the proposed any-to-many approach achieves superior voice conversion performance in terms of both naturalness and speaker similarity.
arXiv Detail & Related papers (2020-09-06T13:01:06Z)
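The BNE-plus-seq2seq design can be read as a two-stage pipeline: a recognizer-derived bottleneck encoder strips speaker identity from the source speech, and a speaker-conditioned seq2seq module resynthesizes it in the target voice. The skeleton below only shows this data flow; every component is a placeholder.

```python
# Two-stage voice conversion data flow (illustrative skeleton only):
# source speech -> speaker-independent bottleneck features ->
# target-speaker-conditioned synthesis -> waveform.

def convert_voice(source_wave, target_speaker_emb, bne, synthesizer, vocoder):
    bottleneck = bne(source_wave)                      # content, little speaker info
    mel = synthesizer(bottleneck, target_speaker_emb)  # target-voice spectrogram
    return vocoder(mel)                                # waveform in target voice

# Toy placeholders so the sketch runs end to end:
bne = lambda wave: list(wave)
synthesizer = lambda feats, spk: [f + spk for f in feats]
vocoder = lambda mel: mel
print(convert_voice([0.1, 0.2], 1.0, bne, synthesizer, vocoder))  # ~[1.1, 1.2]
```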
- Context-Dependent Acoustic Modeling without Explicit Phone Clustering
Phoneme-based acoustic modeling for large vocabulary automatic speech recognition takes advantage of phoneme context.
In this work, we address direct phonetic context modeling for the hybrid deep neural network (DNN)/HMM approach.
By performing different decompositions of the joint probability of the center phoneme state and its left and right contexts, we obtain a factorized network consisting of different components.
arXiv Detail & Related papers (2020-05-15T14:45:32Z)
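One illustrative chain-rule decomposition of this joint posterior (the paper compares several variants; this particular ordering is only an example) factorizes the center phoneme state a and its left and right contexts l, r given the input x into separately modeled components:

```latex
% One possible factorization; each factor can be realized by its own
% network output, avoiding explicit phone clustering.
p(a, l, r \mid x) = p(a \mid x) \cdot p(l \mid a, x) \cdot p(r \mid a, l, x)
```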
- Statistical Context-Dependent Units Boundary Correction for Corpus-based Unit-Selection Text-to-Speech
We present an innovative technique for speaker adaptation that improves the accuracy of segmentation, with application to unit-selection text-to-speech (TTS) systems.
Unlike conventional techniques for speaker adaptation, we aim to use only context-dependent characteristics extrapolated with linguistic analysis techniques.
arXiv Detail & Related papers (2020-03-05T12:42:13Z)
- Phoneme Boundary Detection using Learnable Segmental Features
Phoneme boundary detection is an essential first step for a variety of speech processing applications.
We propose a neural architecture coupled with a parameterized structured loss function to learn segmental representations for the task of phoneme boundary detection.
arXiv Detail & Related papers (2020-02-11T14:03:08Z)
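A parameterized structured loss of the kind mentioned here can be illustrated with a structured hinge loss over binary boundary sequences using a Hamming cost, which decomposes per frame. This generic sketch is a stand-in, not the paper's actual loss or segmental features.

```python
# Structured hinge loss with Hamming cost for binary boundary sequences
# (generic illustration). score(y) = sum of per-frame boundary scores
# where y predicts a boundary; the cost-augmented argmax decomposes
# frame by frame because the Hamming cost is additive.

def structured_hinge(frame_scores, gold):
    """frame_scores: per-frame boundary scores; gold: 0/1 boundary labels."""
    loss_augmented, gold_score = 0.0, 0.0
    for s, y in zip(frame_scores, gold):
        gold_score += s * y
        # Best cost-augmented choice at this frame:
        #   predict boundary: s + |1 - y|; predict none: 0 + |0 - y|
        loss_augmented += max(s + (1 - y), y)
    return loss_augmented - gold_score  # >= 0, since gold is a candidate

print(structured_hinge([0.8, -0.5, 0.2], [1, 0, 0]))  # ~1.9
```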