Revisiting the Entropy Semiring for Neural Speech Recognition
- URL: http://arxiv.org/abs/2312.10087v2
- Date: Tue, 19 Dec 2023 01:42:19 GMT
- Title: Revisiting the Entropy Semiring for Neural Speech Recognition
- Authors: Oscar Chang, Dongseong Hwang, Olivier Siohan
- Abstract summary: We show how alignment entropy can be used to supervise models through regularization or distillation.
We also contribute an open-source implementation of CTC and RNN-T in the semiring framework.
- Score: 17.408741279118857
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In streaming settings, speech recognition models have to map sub-sequences of
speech to text before the full audio stream becomes available. However, since
alignment information between speech and text is rarely available during
training, models need to learn it in a completely self-supervised way. In
practice, the exponential number of possible alignments makes this extremely
challenging, with models often learning peaky or sub-optimal alignments. Prima
facie, the exponential nature of the alignment space makes it difficult to even
quantify the uncertainty of a model's alignment distribution. Fortunately, it
has been known for decades that the entropy of a probabilistic finite state
transducer can be computed in time linear to the size of the transducer via a
dynamic programming reduction based on semirings. In this work, we revisit the
entropy semiring for neural speech recognition models, and show how alignment
entropy can be used to supervise models through regularization or distillation.
We also contribute an open-source implementation of CTC and RNN-T in the
semiring framework that includes numerically stable and highly parallel
variants of the entropy semiring. Empirically, we observe that the addition of
alignment distillation improves the accuracy and latency of an already
well-optimized teacher-student distillation model, achieving state-of-the-art
performance on the Librispeech dataset in the streaming scenario.
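To make the dynamic-programming reduction concrete, below is a minimal sketch of the entropy (expectation) semiring on a toy alignment lattice. This is plain probability-space Python for illustration only, not the paper's released implementation, and it omits the numerically stable, parallel log-space variants the paper contributes; the two-state lattice and the transition table `trans` are hypothetical stand-ins for a real CTC or RNN-T lattice.
```python
import math
from itertools import product

def ent_times(a, b):
    # Expectation-semiring product: (p1, r1) * (p2, r2) = (p1*p2, p1*r2 + p2*r1)
    return (a[0] * b[0], a[0] * b[1] + b[0] * a[1])

def ent_plus(a, b):
    # Expectation-semiring sum: (p1, r1) + (p2, r2) = (p1 + p2, r1 + r2)
    return (a[0] + b[0], a[1] + b[1])

def edge_weight(p):
    # Each edge carries its probability and its contribution -p*log(p).
    return (p, -p * math.log(p))

# Hypothetical toy lattice: T time steps, S alignment states, row-normalized transitions.
T, S = 3, 2
trans = [[[0.7, 0.3], [0.4, 0.6]] for _ in range(T)]  # trans[t][i][j]

# Forward DP in the entropy semiring; every path starts in state 0.
alpha = [(1.0, 0.0) if s == 0 else (0.0, 0.0) for s in range(S)]
for t in range(T):
    new = [(0.0, 0.0)] * S
    for i in range(S):
        for j in range(S):
            new[j] = ent_plus(new[j], ent_times(alpha[i], edge_weight(trans[t][i][j])))
    alpha = new

Z, r = alpha[0]
for s in range(1, S):
    Z, r = ent_plus((Z, r), alpha[s])
H = r / Z + math.log(Z)  # Shannon entropy of the path (alignment) distribution

# Brute-force check over all S**T paths.
brute = []
for path in product(range(S), repeat=T):
    p, prev = 1.0, 0
    for t, s in enumerate(path):
        p *= trans[t][prev][s]
        prev = s
    brute.append(p)
Zb = sum(brute)
Hb = -sum((p / Zb) * math.log(p / Zb) for p in brute)
print(f"semiring entropy = {H:.6f}, brute-force entropy = {Hb:.6f}")
```
In the expectation semiring, an edge with probability p carries the pair (p, -p log p); multiplying along a path yields (p_pi, -p_pi log p_pi), summing over paths yields (Z, -sum_pi p_pi log p_pi), and the Shannon entropy follows as r/Z + log Z. Running the same forward recursion over a CTC or RNN-T lattice, in log space for stability, gives the alignment entropy the paper uses for regularization or distillation.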
Related papers
- A high-capacity linguistic steganography based on entropy-driven rank-token mapping [81.29800498695899]
Linguistic steganography enables covert communication through embedding secret messages into innocuous texts. Traditional modification-based methods introduce detectable anomalies, while retrieval-based strategies suffer from low embedding capacity. We propose an entropy-driven framework called RTMStega that integrates rank-based adaptive coding and context-aware decompression with normalized entropy.
arXiv Detail & Related papers (2025-10-27T06:02:47Z) - Text-Trained LLMs Can Zero-Shot Extrapolate PDE Dynamics [10.472535430038759]
Large language models (LLMs) have demonstrated emergent in-context learning (ICL) capabilities across a range of tasks. We show that text-trained foundation models can accurately predict dynamics from discretized partial differential equation (PDE) solutions. We analyze token-level output distributions and uncover a consistent ICL progression: beginning with syntactic pattern imitation, transitioning through an exploratory high-entropy phase, and culminating in confident, numerically grounded predictions.
arXiv Detail & Related papers (2025-09-08T04:08:50Z) - Entropy-based Coarse and Compressed Semantic Speech Representation Learning [72.18542411704347]
We propose an entropy-based dynamic aggregation framework for learning compressed semantic speech representations. Experiments on ASR, speech-to-text translation, and voice conversion tasks demonstrate that the compressed representations perform on par with or better than dense token sequences.
arXiv Detail & Related papers (2025-08-30T13:50:58Z) - Unbiased Sliced Wasserstein Kernels for High-Quality Audio Captioning [55.41070713054046]
We develop the temporal-similarity score by introducing the unbiased sliced Wasserstein RBF kernel.
We also introduce an audio captioning framework based on the unbiased sliced Wasserstein kernel.
arXiv Detail & Related papers (2025-02-08T03:47:06Z) - Inference-Time Alignment in Diffusion Models with Reward-Guided Generation: Tutorial and Review [59.856222854472605]
This tutorial provides an in-depth guide on inference-time guidance and alignment methods for optimizing downstream reward functions in diffusion models.
Practical applications in fields such as biology often require sample generation that maximizes specific metrics.
We discuss (1) fine-tuning methods combined with inference-time techniques, (2) inference-time algorithms based on search algorithms such as Monte Carlo tree search, and (3) connections between inference-time algorithms in language models and diffusion models.
arXiv Detail & Related papers (2025-01-16T17:37:35Z) - Enhancing Foundation Models for Time Series Forecasting via Wavelet-based Tokenization [74.3339999119713]
We develop a wavelet-based tokenizer that allows models to learn complex representations directly in the space of time-localized frequencies.
Our method first scales and decomposes the input time series, then thresholds and quantizes the wavelet coefficients, and finally pre-trains an autoregressive model to forecast coefficients for the forecast horizon.
arXiv Detail & Related papers (2024-12-06T18:22:59Z) - Utilizing Neural Transducers for Two-Stage Text-to-Speech via Semantic
Token Prediction [15.72317249204736]
We propose a novel text-to-speech (TTS) framework centered around a neural transducer.
Our approach divides the whole TTS pipeline into semantic-level sequence-to-sequence (seq2seq) modeling and fine-grained acoustic modeling stages.
Our experimental results on zero-shot adaptive TTS demonstrate that our model surpasses the baseline in terms of speech quality and speaker similarity.
arXiv Detail & Related papers (2024-01-03T02:03:36Z) - Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized
Language Model Finetuning Using Shared Randomness [86.61582747039053]
Language model training in distributed settings is limited by the communication cost of gradient exchanges.
We extend recent work using shared randomness to perform distributed fine-tuning with low bandwidth.
arXiv Detail & Related papers (2023-06-16T17:59:51Z) - Scalable Learning of Latent Language Structure With Logical Offline
Cycle Consistency [71.42261918225773]
Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text.
As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model.
arXiv Detail & Related papers (2023-05-31T16:47:20Z) - Alignment Entropy Regularization [13.904347165738491]
We use entropy to measure a model's uncertainty.
We evaluate the effect of entropy regularization in encouraging the model to distribute the probability mass only on a smaller subset of allowed alignments.
arXiv Detail & Related papers (2022-12-22T18:51:02Z) - Period VITS: Variational Inference with Explicit Pitch Modeling for
End-to-end Emotional Speech Synthesis [19.422230767803246]
We propose Period VITS, a novel end-to-end text-to-speech model that incorporates an explicit periodicity generator.
In the proposed method, we introduce a frame pitch predictor that predicts prosodic features, such as pitch and voicing flags, from the input text.
From these features, the proposed periodicity generator produces a sample-level sinusoidal source that enables the waveform decoder to accurately reproduce the pitch.
arXiv Detail & Related papers (2022-10-28T07:52:30Z) - Robust and Provably Monotonic Networks [0.0]
We present a new method to constrain the Lipschitz constant of dense deep learning models.
We show how the algorithm was used to train a powerful, robust, and interpretable discriminator for heavy-flavor decays in the LHCb realtime data-processing system.
arXiv Detail & Related papers (2021-11-30T19:01:32Z) - Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech [4.348588963853261]
We introduce Grad-TTS, a novel text-to-speech model with score-based decoder producing mel-spectrograms.
The framework of stochastic differential equations helps us to generalize conventional diffusion probabilistic models.
Subjective human evaluation shows that Grad-TTS is competitive with state-of-the-art text-to-speech approaches in terms of Mean Opinion Score.
arXiv Detail & Related papers (2021-05-13T14:47:44Z) - Pretraining Techniques for Sequence-to-Sequence Voice Conversion [57.65753150356411]
Sequence-to-sequence (seq2seq) voice conversion (VC) models are attractive owing to their ability to convert prosody.
We propose to transfer knowledge from other speech processing tasks where large-scale corpora are easily available, typically text-to-speech (TTS) and automatic speech recognition (ASR).
We argue that VC models with such pretrained ASR or TTS model parameters can generate effective hidden representations for high-fidelity, highly intelligible converted speech.
arXiv Detail & Related papers (2020-08-07T11:02:07Z) - Real-Time Regression with Dividing Local Gaussian Processes [62.01822866877782]
Local Gaussian processes are a novel, computationally efficient modeling approach based on Gaussian process regression.
Due to an iterative, data-driven division of the input space, they achieve a sublinear computational complexity in the total number of training points in practice.
A numerical evaluation on real-world data sets shows their advantages over other state-of-the-art methods in terms of accuracy as well as prediction and update speed.
arXiv Detail & Related papers (2020-06-16T18:43:31Z) - Improve Variational Autoencoder for Text Generation with Discrete Latent
Bottleneck [52.08901549360262]
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
When paired with a strong auto-regressive decoder, VAEs tend to ignore the latent variables.
We propose a principled approach to enforce an implicit latent feature matching in a more compact latent space.
arXiv Detail & Related papers (2020-04-22T14:41:37Z)