GAN-based Intrinsic Exploration For Sample Efficient Reinforcement
Learning
- URL: http://arxiv.org/abs/2206.14256v1
- Date: Tue, 28 Jun 2022 19:16:52 GMT
- Title: GAN-based Intrinsic Exploration For Sample Efficient Reinforcement
Learning
- Authors: Doğay Kamar (1), Nazım Kemal Üre (1 and 2), Gözde Ünal (1
and 2) ((1) Faculty of Computer and Informatics, Istanbul Technical
University (2) Artificial Intelligence and Data Science Research Center,
Istanbul Technical University)
- Abstract summary: We propose a Generative Adversarial Network-based Intrinsic Reward Module that learns the distribution of the observed states and emits an intrinsic reward that is high for out-of-distribution states.
We evaluate our approach in Super Mario Bros in a no-reward setting and in Montezuma's Revenge in a sparse-reward setting, and show that our approach is indeed capable of exploring efficiently.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, we address the problem of efficient exploration in
reinforcement learning. Most common exploration approaches depend on random
action selection; however, these approaches do not work well in environments
with sparse or no rewards. We propose a Generative Adversarial Network-based
Intrinsic Reward Module that learns the distribution of the observed states and
emits an intrinsic reward that is high for out-of-distribution states, in order
to lead the agent to unexplored states. We evaluate our approach in Super Mario
Bros in a no-reward setting and in Montezuma's Revenge in a sparse-reward
setting, and show that our approach is indeed capable of exploring efficiently.
We discuss a few weaknesses and conclude by discussing future work.
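The core idea — learn the distribution of visited states and pay a bonus for states that fall outside it — can be sketched in a few lines. The sketch below is NOT the paper's architecture: instead of a full GAN, it trains only a discriminator-style classifier (logistic regression on simple quadratic features, a hypothetical simplification) to separate observed states from noise, and uses `1 - D(s)` as the intrinsic reward, so unfamiliar states score high. All class and method names are illustrative assumptions.

```python
import numpy as np

class GanStyleIntrinsicReward:
    """Toy discriminator-based intrinsic reward (illustrative sketch only).

    The paper trains a full GAN on observed states; here a single
    logistic-regression discriminator separates observed states (label 1)
    from uniform noise (label 0). States it fails to recognise as 'seen'
    receive a high intrinsic reward.
    """

    def __init__(self, state_dim, lr=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.w = np.zeros(2 * state_dim)  # weights over [s, s*s] features
        self.b = 0.0
        self.lr = lr
        self.buffer = []

    def observe(self, state):
        # Store every visited state; the discriminator treats these as "real".
        self.buffer.append(np.asarray(state, dtype=float))

    @staticmethod
    def _phi(s):
        # Quadratic feature map so a linear model can enclose a cluster.
        s = np.asarray(s, dtype=float)
        return np.concatenate([s, s * s], axis=-1)

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

    def train(self, steps=1000, batch=32):
        data = np.stack(self.buffer)
        lo, hi = data.min(0) - 1.0, data.max(0) + 1.0
        for _ in range(steps):
            idx = self.rng.integers(0, len(data), batch)
            real = data[idx]                              # visited states
            fake = self.rng.uniform(lo, hi, real.shape)   # noise samples
            x = self._phi(np.concatenate([real, fake]))
            y = np.concatenate([np.ones(batch), np.zeros(batch)])
            p = self._sigmoid(x @ self.w + self.b)
            self.w -= self.lr * x.T @ (p - y) / len(y)    # logistic-loss step
            self.b -= self.lr * (p - y).mean()

    def reward(self, state):
        # High bonus where the discriminator does NOT recognise the state.
        return 1.0 - self._sigmoid(self._phi(state) @ self.w + self.b)
```

In a training loop, `observe` would be called on every transition, `train` refreshed periodically, and `reward(next_state)` added to the (possibly zero) extrinsic reward, which is the shape of intrinsic-motivation setups in general.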