Bounding-Box Inference for Error-Aware Model-Based Reinforcement Learning
- URL: http://arxiv.org/abs/2406.16006v1
- Date: Sun, 23 Jun 2024 04:23:15 GMT
- Title: Bounding-Box Inference for Error-Aware Model-Based Reinforcement Learning
- Authors: Erin J. Talvitie, Zilei Shao, Huiying Li, Jinghan Hu, Jacob Boerma, Rory Zhao, Xintong Wang
- Abstract summary: In model-based reinforcement learning, simulated experiences are often treated as equivalent to experience from the real environment. We show that the best results require distribution-insensitive inference to estimate the uncertainty over model-based updates, and we find that bounding-box inference can reliably support effective selective planning.
- Score: 4.185571779339683
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In model-based reinforcement learning, simulated experiences from the learned model are often treated as equivalent to experience from the real environment. However, when the model is inaccurate, it can catastrophically interfere with policy learning. Alternatively, the agent might learn about the model's accuracy and selectively use it only when it can provide reliable predictions. We empirically explore model uncertainty measures for selective planning and show that the best results require distribution-insensitive inference to estimate the uncertainty over model-based updates. To that end, we propose and evaluate bounding-box inference, which operates on bounding boxes around sets of possible states and other quantities. We find that bounding-box inference can reliably support effective selective planning.
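The abstract's core idea can be illustrated with a minimal sketch: propagate a bounding box over possible states through a learned model, derive an interval over the resulting TD target, and use the interval's width as an uncertainty signal for selective planning. This is an illustrative toy, not the paper's implementation: the `model` function, the corner-evaluation shortcut (sound only for monotone models), and the `threshold` gate are all assumptions introduced here for demonstration.

```python
import numpy as np

def bbox_step(model, s_lo, s_hi, action):
    """Propagate a 1-D state bounding box [s_lo, s_hi] through a learned
    model by evaluating it at the box corners.

    NOTE: evaluating only corners is a simplifying assumption that holds
    for monotone models; the general case needs a sound enclosure.
    """
    preds = [model(s, action) for s in (s_lo, s_hi)]
    next_states = np.array([p[0] for p in preds])
    rewards = np.array([p[1] for p in preds])
    # Bounding box over predicted next states and rewards.
    return (next_states.min(), next_states.max(),
            rewards.min(), rewards.max())

def td_target_interval(r_lo, r_hi, q_lo, q_hi, gamma=0.99):
    # Interval arithmetic on the one-step TD target r + gamma * Q(s', a'):
    # lower bound pairs the lowest reward with the lowest value, and
    # symmetrically for the upper bound (gamma >= 0).
    return r_lo + gamma * q_lo, r_hi + gamma * q_hi

def use_model_update(t_lo, t_hi, threshold=0.1):
    # Selective planning: accept the model-based update only when the
    # target interval is tight, i.e. estimated uncertainty is low.
    # The threshold is a hypothetical tuning parameter.
    return (t_hi - t_lo) <= threshold
```

For example, with a toy model `lambda s, a: (0.5 * s + a, -abs(s))` and the state box [0, 2], `bbox_step` returns the next-state box [1, 2] and reward box [-2, 0]; the resulting wide TD-target interval would cause `use_model_update` to reject the model-based update.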