Leveraging Factored Action Spaces for Efficient Offline Reinforcement
Learning in Healthcare
- URL: http://arxiv.org/abs/2305.01738v1
- Date: Tue, 2 May 2023 19:13:10 GMT
- Title: Leveraging Factored Action Spaces for Efficient Offline Reinforcement
Learning in Healthcare
- Authors: Shengpu Tang, Maggie Makar, Michael W. Sjoding, Finale Doshi-Velez,
Jenna Wiens
- Abstract summary: We propose a form of linear Q-function decomposition induced by factored action spaces.
Our approach can help an agent make more accurate inferences within underexplored regions of the state-action space.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many reinforcement learning (RL) applications have combinatorial action
spaces, where each action is a composition of sub-actions. A standard RL
approach ignores this inherent factorization structure, resulting in a
potential failure to make meaningful inferences about rarely observed
sub-action combinations; this is particularly problematic for offline settings,
where data may be limited. In this work, we propose a form of linear Q-function
decomposition induced by factored action spaces. We study the theoretical
properties of our approach, identifying scenarios where it is guaranteed to
lead to zero bias when used to approximate the Q-function. Outside the regimes
with theoretical guarantees, we show that our approach can still be useful
because it leads to better sample efficiency without necessarily sacrificing
policy optimality, allowing us to achieve a better bias-variance trade-off.
Across several offline RL problems using simulators and real-world datasets
motivated by healthcare, we demonstrate that incorporating factored action
spaces into value-based RL can result in better-performing policies. Our
approach can help an agent make more accurate inferences within underexplored
regions of the state-action space when applying RL to observational datasets.
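To make the proposed decomposition concrete, below is a minimal tabular sketch of a Q-function factored over sub-action dimensions, where the joint value is approximated as a sum of per-sub-action components, Q(s, a) ≈ Σ_d Q_d(s, a_d). The class and variable names are illustrative, not the authors' released code, and splitting the TD error equally across components is one simple choice among several.

```python
import numpy as np

class FactoredQ:
    """Tabular Q-function factored over sub-action dimensions:
    Q(s, a) is approximated by the sum of per-dimension components Q_d(s, a_d)."""

    def __init__(self, n_states, sub_action_sizes):
        # One table per sub-action dimension, of shape (n_states, |A_d|).
        self.q = [np.zeros((n_states, k)) for k in sub_action_sizes]

    def value(self, s, a):
        # a is a tuple of sub-actions (a_1, ..., a_D).
        return sum(q_d[s, a_d] for q_d, a_d in zip(self.q, a))

    def greedy_action(self, s):
        # Additivity lets each sub-action be maximized independently,
        # avoiding enumeration of the exponentially large joint action space.
        return tuple(int(np.argmax(q_d[s])) for q_d in self.q)

    def td_update(self, s, a, r, s_next, gamma=0.99, lr=0.1):
        # Q-learning target computed on the composed value; the TD error
        # is shared equally across the additive components.
        target = r + gamma * self.value(s_next, self.greedy_action(s_next))
        td_error = target - self.value(s, a)
        for q_d, a_d in zip(self.q, a):
            q_d[s, a_d] += lr * td_error / len(self.q)
```

In an offline setting this update would sweep over transitions in a fixed dataset (as in fitted Q-iteration); the point of the sketch is that each sub-action component is updated whenever it appears in any observed combination, which is what allows inference about rarely observed joint actions.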
Related papers
- Sparsity-based Safety Conservatism for Constrained Offline Reinforcement Learning [4.0847743592744905]
Reinforcement Learning (RL) has achieved notable success in decision-making fields like autonomous driving and robotic manipulation.
RL's standard training approach, centered on "on-policy" sampling, does not fully capitalize on previously collected data.
Offline RL has emerged as a compelling alternative, particularly when conducting additional experiments is impractical.
arXiv Detail & Related papers (2024-07-17T20:57:05Z) - Revisiting the Linear-Programming Framework for Offline RL with General Function Approximation [24.577243536475233]
Offline reinforcement learning (RL) concerns pursuing an optimal policy for sequential decision-making from a pre-collected dataset.
Recent theoretical progress has focused on developing sample-efficient offline RL algorithms with various relaxed assumptions on data coverage and function approximators.
We revisit the linear-programming framework for offline RL, and advance the existing results in several aspects.
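For reference, the classical linear program alluded to here optimizes over state-action occupancy measures; a standard discounted, tabular form (generic notation, not necessarily the exact formulation advanced in that paper) is:

```latex
% Dual LP over occupancy measures d(s,a); \mu is the initial-state distribution,
% P the transition kernel, r the reward, and \gamma the discount factor.
\begin{aligned}
\max_{d \ge 0} \quad & \sum_{s,a} d(s,a)\, r(s,a) \\
\text{s.t.} \quad & \sum_{a} d(s,a) \;=\; (1-\gamma)\,\mu(s) \;+\; \gamma \sum_{s',a'} P(s \mid s',a')\, d(s',a') \qquad \forall s.
\end{aligned}
```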
arXiv Detail & Related papers (2022-12-28T15:28:12Z) - Towards Deployment-Efficient Reinforcement Learning: Lower Bound and Optimality [141.89413461337324]
Deployment efficiency is an important criterion for many real-world applications of reinforcement learning (RL).
We propose a theoretical formulation for deployment-efficient RL (DE-RL) from an "optimization with constraints" perspective.
arXiv Detail & Related papers (2022-02-14T01:31:46Z) - Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation [74.3002974673248]
We consider the offline reinforcement learning problem, where the aim is to learn a decision making policy from logged data.
Offline RL is becoming increasingly relevant in practice, because online data collection is poorly suited to safety-critical domains.
Our results show that sample-efficient offline reinforcement learning requires either restrictive coverage conditions or representation conditions that go beyond supervised learning.
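The coverage conditions referred to here are usually phrased via a concentrability coefficient; a common single-policy version (standard notation, not necessarily the exact condition studied in that paper) requires the target policy's occupancy measure to be dominated by the data distribution:

```latex
% d^{\pi}: discounted occupancy measure of the target policy \pi;
% \mu: state-action distribution of the logged data.
C^{\pi} \;=\; \sup_{s,a} \frac{d^{\pi}(s,a)}{\mu(s,a)} \;<\; \infty
```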
arXiv Detail & Related papers (2021-11-21T23:22:37Z) - False Correlation Reduction for Offline Reinforcement Learning [115.11954432080749]
We propose falSe COrrelation REduction (SCORE) for offline RL, a practically effective and theoretically provable algorithm.
We empirically show that SCORE achieves SoTA performance with 3.1x acceleration on various tasks in a standard benchmark (D4RL).
arXiv Detail & Related papers (2021-10-24T15:34:03Z) - Instabilities of Offline RL with Pre-Trained Neural Representation [127.89397629569808]
In offline reinforcement learning (RL), we seek to utilize offline data to evaluate (or learn) policies in scenarios where the data are collected from a distribution that substantially differs from that of the target policy to be evaluated.
Recent theoretical advances have shown that such sample-efficient offline RL is indeed possible provided certain strong representational conditions hold.
This work studies these issues from an empirical perspective to gauge how stable offline RL methods are.
arXiv Detail & Related papers (2021-03-08T18:06:44Z) - Discrete Action On-Policy Learning with Action-Value Critic [72.20609919995086]
Reinforcement learning (RL) in discrete action space is ubiquitous in real-world applications, but its complexity grows exponentially with the action-space dimension.
We construct a critic to estimate action-value functions, apply it to correlated actions, and combine these critic-estimated action values to control the variance of gradient estimation.
These efforts result in a new discrete action on-policy RL algorithm that empirically outperforms related on-policy algorithms relying on variance control techniques.
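As an illustration of the general idea, critic-estimated values for all discrete actions can replace a single sampled return in the policy gradient, removing the variance that comes from sampling the action. This sketch is a generic all-action estimator for a softmax policy, not the paper's specific algorithm, and the function and variable names are hypothetical.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def all_action_policy_gradient(theta, state_feats, q_hat):
    """Gradient of E_{a~pi}[q_hat(a)] w.r.t. the logit weights of a softmax policy.

    theta:       (n_features, n_actions) weight matrix producing the logits
    state_feats: (n_features,) feature vector for the current state
    q_hat:       (n_actions,) critic-estimated action values for this state
    """
    logits = state_feats @ theta            # (n_actions,)
    pi = softmax(logits)                    # policy probabilities
    baseline = pi @ q_hat                   # expected critic value under pi
    # d/d logits of sum_a pi(a) q_hat(a) = pi * (q_hat - baseline);
    # averaging over all actions avoids the variance of a single sampled action.
    grad_logits = pi * (q_hat - baseline)
    return np.outer(state_feats, grad_logits)  # (n_features, n_actions)
```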
arXiv Detail & Related papers (2020-02-10T04:23:09Z)