Revisiting Recurrent Reinforcement Learning with Memory Monoids
- URL: http://arxiv.org/abs/2402.09900v2
- Date: Sun, 17 Mar 2024 15:16:28 GMT
- Title: Revisiting Recurrent Reinforcement Learning with Memory Monoids
- Authors: Steven Morad, Chris Lu, Ryan Kortvelesy, Stephan Liwicki, Jakob Foerster, Amanda Prorok
- Abstract summary: We study memory models such as Recurrent Neural Networks (RNNs) and Transformers, which address Partially Observable Markov Decision Processes (POMDPs) by mapping trajectories to latent Markov states.
Neither model scales particularly well to long sequences, especially compared to an emerging class of memory models sometimes called linear recurrent models.
We leverage the properties of memory monoids to propose a batching method that improves sample efficiency, increases the return, and simplifies the implementation of recurrent loss functions in RL.
- Score: 11.302674177386383
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Memory models such as Recurrent Neural Networks (RNNs) and Transformers address Partially Observable Markov Decision Processes (POMDPs) by mapping trajectories to latent Markov states. Neither model scales particularly well to long sequences, especially compared to an emerging class of memory models sometimes called linear recurrent models. We discover that we can model the recurrent update of these models using a monoid, leading us to reformulate existing models using a novel memory monoid framework. We revisit the traditional approach to batching in recurrent RL, highlighting both theoretical and empirical deficiencies. We leverage the properties of memory monoids to propose a batching method that improves sample efficiency, increases the return, and simplifies the implementation of recurrent loss functions in RL.
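To make the monoid view concrete, the following is a minimal sketch, not the authors' implementation, of a memory monoid for a diagonal linear recurrence h_t = a_t * h_{t-1} + b_t, written with JAX. The element layout (pairs (a, b)), the shapes, and the use of jax.lax.associative_scan are illustrative assumptions; the point is only that an associative recurrent update lets the latent states of a whole trajectory be computed with a parallel scan rather than a sequential RNN loop.

# Minimal sketch (assumed, not the paper's code) of a memory monoid in JAX.
# Elements are pairs (a, b) representing the update h <- a * h + b.
# The identity element is (1, 0), and the operator below is associative,
# which is what permits a parallel (associative) scan over the trajectory.
import jax
import jax.numpy as jnp


def monoid_op(left, right):
    """Compose the update (a1, b1) followed by (a2, b2)."""
    a1, b1 = left
    a2, b2 = right
    # a2 * (a1 * h + b1) + b2 == (a1 * a2) * h + (a2 * b1 + b2)
    return a1 * a2, a2 * b1 + b2


def scan_memory(a, b):
    """Return the latent state h_t for every timestep."""
    # Associativity of monoid_op lets associative_scan compute all prefix
    # compositions in logarithmic depth instead of a sequential loop.
    _, h = jax.lax.associative_scan(monoid_op, (a, b), axis=0)
    return h  # h[t] = a_t * h_{t-1} + b_t, with an implicit h_0 = 0


if __name__ == "__main__":
    T, D = 8, 4  # toy trajectory length and latent size
    a = jax.random.uniform(jax.random.PRNGKey(0), (T, D))  # per-step decay
    b = jax.random.normal(jax.random.PRNGKey(1), (T, D))   # per-step input
    h = scan_memory(a, b)
    print(h.shape)  # (8, 4): one latent Markov state per timestep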
Related papers
- Mamba-PTQ: Outlier Channels in Recurrent Large Language Models [49.1574468325115]
We show that Mamba models exhibit the same pattern of outlier channels observed in attention-based LLMs.
We show that the reason for the difficulty of quantizing SSMs is caused by activation outliers, similar to those observed in transformer-based LLMs.
arXiv Detail & Related papers (2024-07-17T08:21:06Z) - Rethinking Pruning Large Language Models: Benefits and Pitfalls of Reconstruction Error Minimization [18.24882084542254]
We present an array of reconstruction techniques that can significantly reduce this error by more than 90%.
We find out that a strategy of self-generating calibration data can mitigate this trade-off between reconstruction and generalization.
arXiv Detail & Related papers (2024-06-21T05:13:34Z) - Rethinking Model Re-Basin and Linear Mode Connectivity [1.1510009152620668]
We decompose re-normalization into rescaling and reshift, uncovering that rescaling plays a crucial role in re-normalization.
We identify that the merged model suffers from the issue of activation collapse and magnitude collapse.
We propose a new perspective to unify the re-basin and pruning, under which a lightweight yet effective post-pruning technique is derived.
arXiv Detail & Related papers (2024-02-05T17:06:26Z) - ResMem: Learn what you can and memorize the rest [79.19649788662511]
We propose the residual-memorization (ResMem) algorithm to augment an existing prediction model.
By construction, ResMem can explicitly memorize the training labels.
We show that ResMem consistently improves the test set generalization of the original prediction model.
arXiv Detail & Related papers (2023-02-03T07:12:55Z) - When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z) - Bayesian Active Learning for Discrete Latent Variable Models [19.852463786440122]
Active learning seeks to reduce the amount of data required to fit the parameters of a model.
Latent variable models play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines.
arXiv Detail & Related papers (2022-02-27T19:07:12Z) - Scaling Hidden Markov Language Models [118.55908381553056]
This work revisits the challenge of scaling HMMs to language modeling datasets.
We propose methods for scaling HMMs to massive state spaces while maintaining efficient exact inference, a compact parameterization, and effective regularization.
arXiv Detail & Related papers (2020-11-09T18:51:55Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [55.28436972267793]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Dynamic Model Pruning with Feedback [64.019079257231]
We propose a novel model compression method that generates a sparse trained model without additional overhead.
We evaluate our method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models.
arXiv Detail & Related papers (2020-06-12T15:07:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences.