EVaDE : Event-Based Variational Thompson Sampling for Model-Based Reinforcement Learning
- URL: http://arxiv.org/abs/2501.09611v1
- Date: Thu, 16 Jan 2025 15:35:48 GMT
- Title: EVaDE : Event-Based Variational Thompson Sampling for Model-Based Reinforcement Learning
- Authors: Siddharth Aravindan, Dixant Mittal, Wee Sun Lee
- Abstract summary: Posterior Sampling for Reinforcement Learning (PSRL) is an algorithm that augments model-based reinforcement learning algorithms with Thompson sampling. Recent works show that dropout, used in conjunction with neural networks, induces variational distributions that can approximate these posteriors. We propose Event-based Variational Distributions for Exploration (EVaDE), which are variational distributions that are useful for MBRL.
- Score: 13.322155764694275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Posterior Sampling for Reinforcement Learning (PSRL) is a well-known algorithm that augments model-based reinforcement learning (MBRL) algorithms with Thompson sampling. PSRL maintains posterior distributions of the environment transition dynamics and the reward function, which are intractable for tasks with high-dimensional state and action spaces. Recent works show that dropout, used in conjunction with neural networks, induces variational distributions that can approximate these posteriors. In this paper, we propose Event-based Variational Distributions for Exploration (EVaDE), which are variational distributions that are useful for MBRL, especially when the underlying domain is object-based. We leverage the general domain knowledge of object-based domains to design three types of event-based convolutional layers to direct exploration. These layers rely on Gaussian dropouts and are inserted between the layers of the deep neural network model to help facilitate variational Thompson sampling. We empirically show the effectiveness of EVaDE-equipped Simulated Policy Learning (EVaDE-SimPLe) on the 100K Atari game suite.
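As an illustration of the mechanism the abstract describes, here is a minimal sketch (not the authors' code) of a convolutional layer with multiplicative Gaussian dropout whose noise can be frozen for an entire rollout, so that one noise draw acts as one Thompson sample of the learned model; the per-channel noise shape and `alpha` value are illustrative assumptions.

```python
# Sketch only: a conv layer with multiplicative Gaussian noise that is frozen
# between calls to resample(). One frozen noise draw ~ one Thompson sample.
import torch
import torch.nn as nn

class GaussianDropoutConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, alpha=0.1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.alpha = alpha  # variance of the multiplicative noise (assumed value)
        self.register_buffer("noise", torch.ones(1, out_ch, 1, 1))

    def resample(self):
        # Draw a fresh model from the variational posterior.
        self.noise = 1.0 + self.alpha ** 0.5 * torch.randn_like(self.noise)

    def forward(self, x):
        # The same noise is reused until resample() is called again.
        return self.conv(x) * self.noise
```

Calling `resample()` on every such layer at the start of each rollout yields the per-episode posterior sampling behaviour that variational Thompson sampling requires.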
Related papers
- Generative Diffusion Models for Resource Allocation in Wireless Networks [77.36145730415045]
We train a policy to imitate an expert and generate new samples from the optimal distribution.
We achieve near-optimal performance through sequential execution of the generated samples.
We present numerical results in a case study of power control in multi-user interference networks.
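A hedged sketch of the "sequential execution of the generated samples" idea: candidate allocations are drawn from the trained generative policy and executed one after another, keeping the best observed performer. `sample_allocation` and `utility` are hypothetical stand-ins for the paper's diffusion sampler and network objective.

```python
# Sketch only: execute generated candidate allocations sequentially, keep the best.
import numpy as np

def sequential_execution(sample_allocation, utility, num_samples=16):
    best_alloc, best_util = None, -np.inf
    for _ in range(num_samples):
        alloc = sample_allocation()   # candidate from the generative model
        u = utility(alloc)            # measured network performance
        if u > best_util:
            best_alloc, best_util = alloc, u
    return best_alloc, best_util
```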
arXiv Detail & Related papers (2025-04-28T21:44:31Z)
- Applying the maximum entropy principle to neural networks enhances multi-species distribution models [5.6578808468308335]
We propose DeepMaxent, which harnesses neural networks to automatically learn shared features among species.
We evaluate DeepMaxent on a benchmark dataset known for its spatial sampling biases.
Our results indicate that DeepMaxent performs better than Maxent across all regions and groups.
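A minimal sketch of a neural MaxEnt-style objective consistent with the summary (an assumption about the general recipe, not DeepMaxent's exact loss): a network scores candidate sites, scores are normalized over sites with a softmax, and training minimizes the negative log-likelihood of observed presence records.

```python
# Sketch only: MaxEnt-style NLL; `net` maps per-site features to a log-intensity.
import torch
import torch.nn.functional as F

def maxent_nll(site_features, presence_idx, net):
    scores = net(site_features).squeeze(-1)  # (num_sites,) unnormalized scores
    log_p = F.log_softmax(scores, dim=0)     # maximum-entropy distribution over sites
    return -log_p[presence_idx].mean()       # NLL of observed presence records
```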
arXiv Detail & Related papers (2024-12-26T13:47:04Z)
- Generalized Bayesian deep reinforcement learning [2.469908534801392]
We propose to model the dynamics of the unknown environment through deep generative models assuming Markov dependence. In the absence of likelihood functions for these models, we train them by learning a generalized predictive-sequential (or prequential) scoring rule (SR) posterior. For policy learning, we propose expected Thompson sampling (ETS) to learn the optimal policy by maximizing the expected value function with respect to the posterior distribution.
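A minimal sketch of expected Thompson sampling as summarized above: rather than acting on a single posterior draw, value estimates are averaged over several draws and the action maximizing that expectation is taken. `posterior.sample()` and `q_value` are hypothetical interfaces.

```python
# Sketch only: expected Thompson sampling over posterior model draws.
import torch

def ets_action(posterior, q_value, state, actions, num_draws=10):
    q_mean = torch.zeros(len(actions))
    for _ in range(num_draws):
        theta = posterior.sample()  # one environment model from the posterior
        q_mean += torch.tensor([q_value(theta, state, a) for a in actions])
    return actions[int(torch.argmax(q_mean / num_draws))]  # greedy w.r.t. expectation
```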
arXiv Detail & Related papers (2024-12-16T13:02:17Z)
- Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [63.31328039424469]
This tutorial provides a comprehensive survey of methods for fine-tuning diffusion models to optimize downstream reward functions.
We explain the application of various RL algorithms, including PPO, differentiable optimization, reward-weighted MLE, value-weighted sampling, and path consistency learning.
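As one concrete example from the list of algorithms above, here is a hedged sketch of reward-weighted MLE for diffusion fine-tuning: the per-sample denoising loss is weighted by an exponentiated reward, so high-reward generations are imitated more strongly. The temperature `beta` and the input tensors are illustrative assumptions.

```python
# Sketch only: reward-weighted MLE; both inputs are per-sample tensors, shape (batch,).
import torch

def reward_weighted_mle_loss(denoising_loss, rewards, beta=1.0):
    weights = torch.softmax(rewards / beta, dim=0)  # normalized exp(reward / beta)
    return (weights * denoising_loss).sum()         # high-reward samples dominate
```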
arXiv Detail & Related papers (2024-07-18T17:35:32Z)
- Minimizing Energy Costs in Deep Learning Model Training: The Gaussian Sampling Approach [11.878350833222711]
We propose a method called GradSamp for sampling gradient updates from a Gaussian distribution.
GradSamp not only streamlines gradient computation but also enables skipping entire epochs, thereby enhancing overall efficiency.
We rigorously validate our hypothesis across a diverse set of standard and non-standard CNN and transformer-based models.
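A hedged sketch of the Gaussian gradient-sampling idea as we read the summary (the paper's exact schedule may differ): keep running per-parameter gradient statistics and, on skipped steps or epochs, apply a gradient sampled from the fitted Gaussian instead of running backpropagation.

```python
# Sketch only: SGD step using a sampled gradient, with no backward pass.
import torch

@torch.no_grad()
def sampled_sgd_step(params, grad_means, grad_vars, lr=1e-3):
    for p, mu, var in zip(params, grad_means, grad_vars):
        g = mu + var.sqrt() * torch.randn_like(mu)  # g ~ N(mu, var), elementwise
        p -= lr * g                                 # update without backprop
```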
arXiv Detail & Related papers (2024-06-11T15:01:20Z)
- Out of the Ordinary: Spectrally Adapting Regression for Covariate Shift [12.770658031721435]
We propose a method for adapting the weights of the last layer of a pre-trained neural regression model to perform better on input data originating from a different distribution.
We demonstrate how this lightweight spectral adaptation procedure can improve out-of-distribution performance for synthetic and real-world datasets.
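One plausible instantiation of last-layer spectral adaptation, offered purely as an assumption rather than the paper's procedure: express the regression weights in the singular basis of the source features and rescale each spectral direction by its relative energy in the unlabeled target features.

```python
# Sketch only (assumed mechanism): per-direction spectral rescaling of last-layer weights.
import numpy as np

def spectral_adapt(w, F_src, F_tgt, eps=1e-8):
    # w: (d,) last-layer weights; F_src, F_tgt: (n, d) penultimate-layer features
    _, _, Vt = np.linalg.svd(F_src, full_matrices=False)
    src_energy = np.linalg.norm(F_src @ Vt.T, axis=0)  # per-direction source energy
    tgt_energy = np.linalg.norm(F_tgt @ Vt.T, axis=0)  # per-direction target energy
    coords = Vt @ w                                    # weights in the spectral basis
    return Vt.T @ (tgt_energy / (src_energy + eps) * coords)
```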
arXiv Detail & Related papers (2023-12-29T04:15:58Z)
- Training Deep 3D Convolutional Neural Networks to Extract BSM Physics Parameters Directly from HEP Data: a Proof-of-Concept Study Using Monte Carlo Simulations [0.0]
We propose a simple but novel data representation that transforms the angular and kinematic distributions into "quasi-images".
As a proof-of-concept, we train a 34-layer Residual Neural Network to regress on these images and determine information about the Wilson coefficient $C_9$ in Monte Carlo simulations of $B^0 \rightarrow K^{*0}\mu^+\mu^-$ decays.
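A hedged sketch of the "quasi-image" construction: three event-level kinematic variables are binned into a dense 3D histogram that a 3D CNN can regress on. The choice of variables ($q^2$ and two decay angles) and the binning ranges are illustrative assumptions.

```python
# Sketch only: bin event-level variables into a normalized 3D "quasi-image".
import numpy as np

def quasi_image(q2, cos_theta_k, cos_theta_l, bins=32):
    sample = np.stack([q2, cos_theta_k, cos_theta_l], axis=1)  # (num_events, 3)
    ranges = [(0.0, 19.0), (-1.0, 1.0), (-1.0, 1.0)]           # assumed variable ranges
    hist, _ = np.histogramdd(sample, bins=bins, range=ranges)
    return hist / max(hist.sum(), 1.0)                         # normalized 3D volume
```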
arXiv Detail & Related papers (2023-11-21T23:49:51Z)
- Thompson sampling for improved exploration in GFlowNets [75.89693358516944]
Generative flow networks (GFlowNets) are amortized variational inference algorithms that treat sampling from a distribution over compositional objects as a sequential decision-making problem with a learnable action policy.
We show in two domains that the proposed Thompson sampling scheme for GFlowNets (TS-GFN) yields improved exploration and thus faster convergence to the target distribution than the off-policy exploration strategies used in past work.
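A minimal sketch of Thompson-sampling-style exploration for a GFlowNet sampler, with the ensemble-as-posterior mechanism stated as our assumption: one ensemble member is drawn per trajectory, so exploration is temporally consistent within a rollout.

```python
# Sketch only: ensemble of forward policies as an approximate posterior.
import random
import torch.nn as nn

class TSPolicyEnsemble(nn.Module):
    def __init__(self, make_policy, num_members=4):
        super().__init__()
        self.members = nn.ModuleList([make_policy() for _ in range(num_members)])

    def sample_member(self):
        # One "posterior draw": this member rolls out the whole trajectory.
        return random.choice(self.members)
```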
arXiv Detail & Related papers (2023-06-30T14:19:44Z)
- Learning Joint Latent Space EBM Prior Model for Multi-layer Generator [44.4434704520236]
We study the fundamental problem of learning multi-layer generator models.
We propose an energy-based model (EBM) on the joint latent space over all layers of latent variables.
Our experiments demonstrate that the learned model can be expressive in generating high-quality images.
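A hedged sketch of drawing from an energy-based prior over the joint latent space via short-run Langevin dynamics; the quadratic term treats the EBM as an exponential tilt of a standard Gaussian base, and the step size and step count are illustrative.

```python
# Sketch only: short-run Langevin sampling from an EBM-tilted Gaussian prior.
import torch

def langevin_prior_sample(energy_net, z_init, steps=20, step_size=0.1):
    z = z_init.clone().requires_grad_(True)
    for _ in range(steps):
        energy = energy_net(z).sum() + 0.5 * (z ** 2).sum()  # tilt of a N(0, I) base
        grad = torch.autograd.grad(energy, z)[0]
        z = (z - 0.5 * step_size ** 2 * grad
             + step_size * torch.randn_like(z)).detach().requires_grad_(True)
    return z.detach()  # concatenated per-layer latents for the generator
```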
arXiv Detail & Related papers (2023-06-10T00:27:37Z)
- WLD-Reg: A Data-dependent Within-layer Diversity Regularizer [98.78384185493624]
Neural networks are composed of multiple layers arranged in a hierarchical structure and trained jointly with gradient-based optimization.
We propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback to encourage the diversity of the activations within the same layer.
We present an extensive empirical study confirming that the proposed approach enhances the performance of several state-of-the-art neural network models in multiple tasks.
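A generic within-layer diversity penalty in the spirit of the summary (the paper's data-dependent regularizer differs in detail): off-diagonal entries of the activation covariance are penalized so units in the same layer respond differently.

```python
# Sketch only: decorrelation-style within-layer diversity penalty.
import torch

def within_layer_diversity_penalty(h):
    # h: (batch, units) activations of one layer
    h = h - h.mean(dim=0, keepdim=True)
    cov = h.T @ h / max(h.shape[0] - 1, 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()  # small when units respond differently
```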
arXiv Detail & Related papers (2023-01-03T20:57:22Z)
- Predictive Coding beyond Gaussian Distributions [38.51699576854394]
Predictive coding (PC) is a neuroscience-inspired method that performs inference on hierarchical Gaussian generative models.
These Gaussian formulations fail to keep up with modern neural networks, however, as they are unable to replicate the dynamics of complex layers and activation functions.
We show that our method allows us to train transformer networks and achieve a performance comparable with BP on conditional language models.
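For reference, a minimal sketch of the standard Gaussian predictive-coding inference step that the paper generalizes: each latent layer is nudged to reduce both the error it fails to explain from above and the error it could explain at the layer below. Shapes and the `tanh` nonlinearity are illustrative.

```python
# Sketch only: one Gaussian predictive-coding inference step. xs[0] is the top
# latent, xs[-1] the clamped observation; weights[i] has shape (d_i, d_{i+1}).
import torch

def pc_inference_step(xs, weights, lr=0.05):
    errs = [xs[i + 1] - torch.tanh(xs[i]) @ weights[i] for i in range(len(weights))]
    for i in range(1, len(xs) - 1):
        # own prediction error minus the error this layer accounts for below
        grad = errs[i - 1] - (errs[i] @ weights[i].T) * (1 - torch.tanh(xs[i]) ** 2)
        xs[i] = xs[i] - lr * grad
    return xs, errs
```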
arXiv Detail & Related papers (2022-11-07T12:02:05Z)
- A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
The proposed Deep Convolutional Gaussian Mixture Models (DCGMMs) can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
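A hedged sketch of the core computation in an SGD-trainable Gaussian-mixture layer: the layer's loss is the negative log-likelihood of its flattened input patches under a diagonal-covariance mixture, written with `logsumexp` so plain SGD applies. The parameterization details are assumptions.

```python
# Sketch only: NLL of patches under a diagonal-covariance Gaussian mixture.
import torch

def gmm_layer_nll(x, means, log_sigmas, logit_pi):
    # x: (batch, d); means, log_sigmas: (K, d); logit_pi: (K,)
    log_pi = torch.log_softmax(logit_pi, dim=0)
    diff = (x.unsqueeze(1) - means) / log_sigmas.exp()          # (batch, K, d)
    log_comp = -0.5 * (diff ** 2).sum(-1) - log_sigmas.sum(-1)  # up to a constant
    return -torch.logsumexp(log_pi + log_comp, dim=1).mean()
```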
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- Multitask Adaptation by Retrospective Exploration with Learned World Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the expected agent's performance by selecting promising trajectories solving prior tasks from the storage.
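A minimal sketch of retrospective retrieval with a learned addressing model; `address_net` and the `traj.summary` attribute are hypothetical interfaces for scoring stored trajectories and returning the most promising ones as additional training data.

```python
# Sketch only: score stored trajectories against the current task, return top-k.
import torch

def retrieve_trajectories(address_net, task_embedding, storage, k=32):
    scores = torch.stack([address_net(task_embedding, traj.summary).squeeze()
                          for traj in storage])
    top = torch.topk(scores, k=min(k, len(storage))).indices
    return [storage[i] for i in top]
```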
arXiv Detail & Related papers (2021-10-25T20:02:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.