Low-Rank Constraints for Fast Inference in Structured Models
- URL: http://arxiv.org/abs/2201.02715v1
- Date: Sat, 8 Jan 2022 00:47:50 GMT
- Title: Low-Rank Constraints for Fast Inference in Structured Models
- Authors: Justin T. Chiu, Yuntian Deng, Alexander M. Rush
- Abstract summary: This work demonstrates a simple approach to reduce the computational and memory complexity of a large class of structured models.
Experiments with neural parameterized structured models for language modeling, polyphonic music modeling, unsupervised grammar induction, and video modeling show that our approach matches the accuracy of standard models at large state spaces.
- Score: 110.38427965904266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structured distributions, i.e. distributions over combinatorial spaces, are
commonly used to learn latent probabilistic representations from observed data.
However, scaling these models is bottlenecked by the high computational and
memory complexity with respect to the size of the latent representations.
Common models such as Hidden Markov Models (HMMs) and Probabilistic
Context-Free Grammars (PCFGs) require time and space quadratic and cubic in the
number of hidden states respectively. This work demonstrates a simple approach
to reduce the computational and memory complexity of a large class of
structured models. We show that by viewing the central inference step as a
matrix-vector product and using a low-rank constraint, we can trade off model
expressivity and speed via the rank. Experiments with neural parameterized
structured models for language modeling, polyphonic music modeling,
unsupervised grammar induction, and video modeling show that our approach
matches the accuracy of standard models at large state spaces while providing
practical speedups.
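To make the trade-off concrete, consider an HMM: the forward recursion's central step multiplies the forward message by an n x n transition matrix, costing O(n^2) time and memory per step, whereas a rank-r factorization of that matrix lets the same product be computed with two n x r factors in O(nr). The snippet below is a minimal sketch of this idea, not the authors' implementation; the names (n, r, U, V) and the random nonnegative factors are illustrative assumptions, and a real model would additionally normalize the factored transition matrix so that its rows sum to one.
```python
# Minimal sketch (not the paper's code) of the low-rank matrix-vector trick.
# The HMM forward step is alpha' = (A^T @ alpha) * emit, where A is the n x n
# transition matrix. If A = U @ V^T with U, V of shape (n, r), the product
# becomes two skinny matvecs and costs O(n*r) instead of O(n^2).
import numpy as np

rng = np.random.default_rng(0)
n, r = 1024, 16                        # number of hidden states, chosen rank (r << n)

# Nonnegative factors; a real model would also row-normalize A = U @ V^T.
U = rng.random((n, r))
V = rng.random((n, r))
A = U @ V.T                            # dense transition scores, built only to verify

alpha = rng.random(n)                  # forward message over states at time t
emit = rng.random(n)                   # per-state emission scores for the next symbol

alpha_dense = (A.T @ alpha) * emit            # standard step: O(n^2)
alpha_lowrank = (V @ (U.T @ alpha)) * emit    # low-rank step: O(n*r), A never formed

assert np.allclose(alpha_dense, alpha_lowrank)
```
The same factorization idea carries over to the other structured models covered by the abstract; this sketch only illustrates the HMM case.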
Related papers
- Model Stealing for Any Low-Rank Language Model [25.16701867917684]
We build a theoretical understanding of stealing language models by studying a simple and mathematically tractable setting.
Our main result is an efficient algorithm in the conditional query model, for learning any low-rank distribution.
This is an interesting example where, at least theoretically, allowing a machine learning model to solve more complex problems at inference time can lead to drastic improvements in its performance.
arXiv Detail & Related papers (2024-11-12T04:25:31Z)
- Neural Network-Based Piecewise Survival Models [0.3999851878220878]
A family of neural network-based survival models is presented.
The models can be seen as an extension of the commonly used discrete-time and piecewise exponential models.
arXiv Detail & Related papers (2024-03-27T15:08:00Z)
- Compressing Sentence Representation with Maximum Coding Rate Reduction [0.0]
In most natural language inference problems, sentence representations are needed for semantic retrieval tasks.
Due to hardware limitations on space and time, there is a need to attain comparable results with a smaller model.
We demonstrate that the new language model with reduced complexity and sentence embedding size can achieve comparable results on semantic retrieval benchmarks.
arXiv Detail & Related papers (2023-04-25T09:23:43Z)
- Language Model Cascades [72.18809575261498]
Repeated interactions at test-time with a single model, or the composition of multiple models together, further expands capabilities.
Cases with control flow and dynamic structure require techniques from probabilistic programming.
We formalize several existing techniques from this perspective, including scratchpads / chain of thought, verifiers, STaR, selection-inference, and tool use.
arXiv Detail & Related papers (2022-07-21T07:35:18Z)
- Neural Basis Models for Interpretability [33.51591891812176]
Generalized Additive Models (GAMs) are an inherently interpretable class of models.
We propose an entirely new subfamily of GAMs that utilize basis decomposition of shape functions.
A small number of basis functions are shared among all features, and are learned jointly for a given task.
arXiv Detail & Related papers (2022-05-27T17:31:19Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Equivalence of Segmental and Neural Transducer Modeling: A Proof of Concept [56.46135010588918]
We prove that the widely used class of RNN-Transducer models and segmental models (direct HMM) are equivalent.
It is shown that blank probabilities translate into segment length probabilities and vice versa.
arXiv Detail & Related papers (2021-04-13T11:20:48Z)
- Scaling Hidden Markov Language Models [118.55908381553056]
This work revisits the challenge of scaling HMMs to language modeling datasets.
We propose methods for scaling HMMs to massive state spaces while maintaining efficient exact inference, a compact parameterization, and effective regularization.
arXiv Detail & Related papers (2020-11-09T18:51:55Z)
- S2RMs: Spatially Structured Recurrent Modules [105.0377129434636]
We take a step towards models with dynamic structure that are capable of simultaneously exploiting both modular and temporal structures.
We find our models to be robust to the number of available views and better capable of generalization to novel tasks without additional training.
arXiv Detail & Related papers (2020-07-13T17:44:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.