Efficient MDI Adaptation for n-gram Language Models
- URL: http://arxiv.org/abs/2008.02385v1
- Date: Wed, 5 Aug 2020 22:21:03 GMT
- Title: Efficient MDI Adaptation for n-gram Language Models
- Authors: Ruizhe Huang, Ke Li, Ashish Arora, Dan Povey and Sanjeev Khudanpur
- Abstract summary: This paper presents an efficient algorithm for n-gram language model adaptation under the minimum discrimination information principle.
By taking advantage of the backoff structure of the n-gram model and the idea of hierarchical training, we show that each iteration of MDI adaptation can be computed in time linear in the size of the inputs.
- Score: 25.67864542036985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents an efficient algorithm for n-gram language model
adaptation under the minimum discrimination information (MDI) principle, where
an out-of-domain language model is adapted to satisfy the constraints of
marginal probabilities of the in-domain data. The challenge for MDI language
model adaptation is its computational complexity. By taking advantage of the
backoff structure of the n-gram model and the idea of hierarchical training,
originally proposed for maximum entropy (ME) language models, we show that each
iteration of MDI adaptation can be computed in time linear in the size of the
inputs. The complexity is the same as for ME models, even though MDI is more
general than ME. This makes MDI adaptation practical for large corpora and
vocabularies. Experimental results confirm the scalability of our algorithm on
very large datasets; MDI adaptation yields slightly worse perplexity but better
word error rates than simple linear interpolation.
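To make the setup concrete, below is a minimal, self-contained sketch of MDI adaptation with unigram marginal constraints on a toy conditional model. Under such constraints the adapted model takes the form P_A(w|h) proportional to P_B(w|h) * alpha(w), and the scaling factors alpha(w) can be found by plain iterative scaling. This is only an illustration, not the paper's hierarchical, backoff-aware linear-time algorithm; the function name `mdi_adapt`, the toy probability tables, and the iteration settings are assumptions made for the example.

```python
# Illustrative toy sketch of MDI adaptation with unigram marginal constraints.
# NOT the paper's hierarchical, backoff-aware algorithm.
import numpy as np

def mdi_adapt(p_bg, p_hist, q_target, n_iter=100, tol=1e-8):
    """Rescale an out-of-domain conditional model P_B(w|h) so that the
    adapted model's word marginals match in-domain unigram targets q(w).

    p_bg     : (H, V) array, background conditionals P_B(w | h)
    p_hist   : (H,)   array, history marginals P(h)
    q_target : (V,)   array, in-domain unigram marginals

    The adapted model has the form P_A(w|h) ~ P_B(w|h) * alpha[w], with the
    per-word multipliers alpha found by fixed-point (GIS-style) iteration.
    """
    alpha = np.ones(p_bg.shape[1])              # one multiplier per word
    for _ in range(n_iter):
        scaled = p_bg * alpha                   # unnormalized adapted model
        p_adapt = scaled / scaled.sum(axis=1, keepdims=True)
        marginal = p_hist @ p_adapt             # current unigram marginal
        ratio = q_target / np.maximum(marginal, 1e-12)
        alpha *= ratio                          # push marginals toward targets
        if np.max(np.abs(ratio - 1.0)) < tol:
            break
    return p_adapt

# Toy example: 2 histories, 3 words in the vocabulary.
p_bg = np.array([[0.6, 0.3, 0.1],
                 [0.2, 0.5, 0.3]])
p_hist = np.array([0.5, 0.5])
q_target = np.array([0.3, 0.3, 0.4])
p_adapted = mdi_adapt(p_bg, p_hist, q_target)
print(p_hist @ p_adapted)   # converges toward q_target = [0.3, 0.3, 0.4]
```

With constraints only on the unigram marginals, the multiplicative update alpha(w) <- alpha(w) * q(w)/m(w) pulls the adapted model's marginals toward the in-domain targets while staying close (in KL divergence) to the background model; the paper's contribution is making this kind of iteration scale linearly with the inputs by exploiting the n-gram backoff structure.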
Related papers
- Model Utility Law: Evaluating LLMs beyond Performance through Mechanism Interpretable Metric [99.56567010306807]
Large Language Models (LLMs) have become indispensable across academia, industry, and daily applications. One core challenge of evaluation in the LLM era is the generalization issue. We propose the Model Utilization Index (MUI), a mechanism-interpretability-enhanced metric that complements traditional performance scores.
arXiv Detail & Related papers (2025-04-10T04:09:47Z) - Small Models, Big Impact: Efficient Corpus and Graph-Based Adaptation of Small Multilingual Language Models for Low-Resource Languages [10.418542753869433]
Low-resource languages (LRLs) face significant challenges in natural language processing (NLP) due to limited data.
Current state-of-the-art large language models (LLMs) still struggle with LRLs.
Small multilingual models (mLMs) such as mBERT and XLM-R offer greater promise because their capacity is better matched to small training datasets.
arXiv Detail & Related papers (2025-02-14T13:10:39Z) - A Low-Complexity Plug-and-Play Deep Learning Model for Massive MIMO Precoding Across Sites [5.896656636095934]
Massive MIMO (mMIMO) technology has transformed wireless communication by enhancing spectral efficiency and network capacity.
This paper proposes a novel deep learning-based mMIMO precoder to tackle the complexity challenges of existing approaches.
arXiv Detail & Related papers (2025-02-12T20:02:36Z) - Predictor-Corrector Enhanced Transformers with Exponential Moving Average Coefficient Learning [73.73967342609603]
We introduce a predictor-corrector learning framework to minimize truncation errors.
We also propose an exponential moving average-based coefficient learning method to strengthen our higher-order predictor.
Our model surpasses a robust 3.8B DeepNet by an average of 2.9 SacreBLEU, using only 1/3 of the parameters.
arXiv Detail & Related papers (2024-11-05T12:26:25Z) - Duo-LLM: A Framework for Studying Adaptive Computation in Large Language Models [16.16372459671255]
Large Language Models (LLMs) typically generate outputs token by token using a fixed compute budget.
We propose a novel framework that integrates smaller auxiliary modules within each Feed-Forward Network layer of the LLM.
We show that trained routers operate differently from oracles and often yield suboptimal solutions.
arXiv Detail & Related papers (2024-10-01T16:10:21Z) - MetaGPT: Merging Large Language Models Using Model Exclusive Task Arithmetic [6.46176287368784]
We propose Model Exclusive Task Arithmetic for merging GPT-scale models.
Our proposed MetaGPT is data-agnostic and bypasses the heavy search process, making it cost-effective and easy to implement for LLMs.
arXiv Detail & Related papers (2024-06-17T10:12:45Z) - Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual Markov decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
arXiv Detail & Related papers (2024-02-05T03:25:04Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - Predictable MDP Abstraction for Unsupervised Model-Based RL [93.91375268580806]
We propose predictable MDP abstraction (PMA).
Instead of training a predictive model on the original MDP, we train a model on a transformed MDP with a learned action space.
We theoretically analyze PMA and empirically demonstrate that PMA leads to significant improvements over prior unsupervised model-based RL approaches.
arXiv Detail & Related papers (2023-02-08T07:37:51Z) - Efficient Large Scale Language Modeling with Mixtures of Experts [61.45159383372181]
Mixture of Experts layers (MoEs) enable efficient scaling of language models through conditional computation.
This paper presents a detailed empirical study of how autoregressive MoE language models scale in comparison with dense models in a wide range of settings.
arXiv Detail & Related papers (2021-12-20T17:05:11Z) - Autoregressive Diffusion Models [34.125045462636386]
We introduce Autoregressive Diffusion Models (ARDMs), a model class encompassing and generalizing order-agnostic autoregressive models.
ARDMs are simple to implement and easy to train, and can be trained using an efficient objective similar to modern probabilistic diffusion models.
We show that ARDMs obtain compelling results not only on complete datasets, but also on compressing single data points.
arXiv Detail & Related papers (2021-10-05T13:36:55Z) - SDA: Improving Text Generation with Self Data Augmentation [88.24594090105899]
We propose to improve the standard maximum likelihood estimation (MLE) paradigm by incorporating a self-imitation-learning phase for automatic data augmentation.
Unlike most existing sentence-level augmentation strategies, our method is more general and could be easily adapted to any MLE-based training procedure.
arXiv Detail & Related papers (2021-01-02T01:15:57Z) - Bayesian Inference of Minimally Complex Models with Interactions of Arbitrary Order [0.0]
Minimally Complex Models (MCMs) are simple models that have interactions of arbitrarily high order grouped into independent components of minimal complexity.
We show that Bayesian model selection restricted to MCMs is computationally feasible and has many advantages.
Because MCMs have interactions of any order, they can reveal the presence of important high-order dependencies in the data.
arXiv Detail & Related papers (2020-08-02T16:57:02Z) - Expected Information Maximization: Using the I-Projection for Mixture Density Estimation [22.096148237257644]
Modelling highly multi-modal data is a challenging problem in machine learning.
We present a new algorithm called Expected Information Maximization (EIM) for computing the I-projection.
We show that our algorithm is much more effective in computing the I-projection than recent GAN approaches.
arXiv Detail & Related papers (2020-01-23T17:24:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.