Offline Reinforcement Learning with Causal Structured World Models
- URL: http://arxiv.org/abs/2206.01474v1
- Date: Fri, 3 Jun 2022 09:53:57 GMT
- Title: Offline Reinforcement Learning with Causal Structured World Models
- Authors: Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu
- Abstract summary: We show that causal world-models can outperform plain world-models for offline RL.
We propose a practical algorithm, oFfline mOdel-based reinforcement learning with CaUsal Structure (FOCUS)
- Score: 9.376353239574243
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Model-based methods have recently shown promise for offline reinforcement
learning (RL), which aims to learn good policies from historical data without
interacting with the environment. Previous model-based offline RL methods learn
fully connected networks as world-models that map states and actions to the
next-step states. However, a world-model should arguably respect the
underlying causal relationships so that it supports learning an effective
policy that generalizes well to unseen states. In this paper, we first provide
theoretical results that causal world-models can outperform plain world-models
for offline RL by incorporating the causal structure into the generalization
error bound. We then propose a practical algorithm, oFfline mOdel-based
reinforcement learning with CaUsal Structure (FOCUS), to illustrate the
feasibility of learning and leveraging causal structure in offline RL.
Experimental results on two benchmarks show that FOCUS reconstructs the
underlying causal structure accurately and robustly. Consequently, it outperforms
plain model-based offline RL algorithms and other causal model-based RL
algorithms.
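
To make the contrast between a plain world-model and a causally structured one concrete, here is a minimal sketch in PyTorch. It is not the authors' FOCUS implementation: the network sizes, the per-dimension prediction heads, and the hand-specified binary mask (where mask[i, j] = 1 means input dimension j is treated as a causal parent of next-state dimension i) are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): dense vs. causally masked dynamics model.
import torch
import torch.nn as nn


class DenseWorldModel(nn.Module):
    """Plain world-model: every next-state dimension depends on all state/action inputs."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))


class CausalWorldModel(nn.Module):
    """Causally structured world-model: next-state dimension i only sees its masked parents."""
    def __init__(self, state_dim, action_dim, mask, hidden=64):
        super().__init__()
        # mask has shape (state_dim, state_dim + action_dim); 1 marks a causal parent.
        self.register_buffer("mask", mask)
        # One small predictor per next-state dimension, applied to the masked input.
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(state_dim)
        ])

    def forward(self, s, a):
        x = torch.cat([s, a], dim=-1)
        # Zero out non-parent inputs for each head before predicting that dimension.
        outs = [head(x * self.mask[i]) for i, head in enumerate(self.heads)]
        return torch.cat(outs, dim=-1)


# Usage with a hypothetical 3-dim state, 1-dim action, and a hand-specified mask.
state_dim, action_dim = 3, 1
mask = torch.tensor([[1., 1., 0., 1.],
                     [0., 1., 0., 0.],
                     [0., 0., 1., 1.]])
model = CausalWorldModel(state_dim, action_dim, mask)
s = torch.randn(8, state_dim)
a = torch.randn(8, action_dim)
next_s_pred = model(s, a)  # shape (8, 3)
```

In this sketch the mask is fixed by hand; in FOCUS the causal structure is itself learned from the offline data and then used to constrain the dynamics model.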