Towards Theoretical Understanding of Data-Driven Policy Refinement
- URL: http://arxiv.org/abs/2305.06796v2
- Date: Mon, 15 May 2023 13:31:03 GMT
- Title: Towards Theoretical Understanding of Data-Driven Policy Refinement
- Authors: Ali Baheri
- Abstract summary: This paper presents an approach for data-driven policy refinement in reinforcement learning, specifically designed for safety-critical applications.
Our principal contribution lies in the mathematical formulation of this data-driven policy refinement concept.
We present a series of theorems elucidating key theoretical properties of our approach, including convergence, robustness bounds, generalization error, and resilience to model mismatch.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents an approach for data-driven policy refinement in
reinforcement learning, specifically designed for safety-critical applications.
Our methodology leverages the strengths of data-driven optimization and
reinforcement learning to enhance policy safety and optimality through
iterative refinement. Our principal contribution lies in the mathematical
formulation of this data-driven policy refinement concept. This framework
systematically improves reinforcement learning policies by learning from
counterexamples identified during data-driven verification. Furthermore, we
present a series of theorems elucidating key theoretical properties of our
approach, including convergence, robustness bounds, generalization error, and
resilience to model mismatch. These results not only validate the effectiveness
of our methodology but also contribute to a deeper understanding of its
behavior in different environments and scenarios.
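The paper is theoretical and includes no code; as a reading aid, the following is a minimal sketch of the verify-and-refine loop the abstract describes, assuming a generic RL training routine. All names (train_policy, verify, refine) are hypothetical placeholders, not the paper's API.

```python
# Minimal sketch of the counterexample-guided refinement loop described in
# the abstract. All names below are hypothetical placeholders; the paper
# formulates this loop mathematically rather than as code.

def refine_policy(train_policy, verify, refine, max_iters=10):
    """Iteratively train, verify against counterexamples, and refine a policy.

    train_policy: () -> policy                        base RL training
    verify: policy -> list of counterexamples         data-driven verification
    refine: (policy, counterexamples) -> policy       learn from failures
    """
    policy = train_policy()
    for _ in range(max_iters):
        # e.g., trajectories on which a safety property is violated
        counterexamples = verify(policy)
        if not counterexamples:
            return policy  # verification found no violations: stop refining
        policy = refine(policy, counterexamples)
    return policy
```

The loop stops either when verification finds no counterexamples or when the iteration budget is exhausted; the paper's theorems concern the convergence and robustness properties of exactly this kind of iteration.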
Related papers
- SAMBO-RL: Shifts-aware Model-based Offline Reinforcement Learning [9.88109749688605]
Model-based offline reinforcement learning trains policies from offline datasets together with a learned dynamics model.
This paper disentangles the problem into two key components: model bias and policy shift.
We introduce Shifts-aware Model-based Offline Reinforcement Learning (SAMBO-RL).
arXiv Detail & Related papers (2024-08-23T04:25:09Z)
- Positivity-free Policy Learning with Observational Data [8.293758599118618]
This study introduces a novel positivity-free (stochastic) policy learning framework.
We propose incremental propensity score policies to adjust propensity score values instead of assigning fixed values to treatments.
This paper explores the theoretical guarantees of the proposed policy learning approach and validates its finite-sample performance (a formula sketch follows this entry).
arXiv Detail & Related papers (2023-10-10T19:47:27Z)
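The "incremental propensity score policies" above appear to match the standard incremental intervention from the positivity-free literature (Kennedy, 2019); the formula below is that standard form, offered as a hedged illustration rather than the paper's exact estimand. Here $\pi(x)$ is the observed propensity score and $\delta$ a user-chosen odds multiplier; the notation is ours.

```latex
% Standard incremental propensity score intervention (a hedged illustration,
% not necessarily the paper's exact estimand). Here \pi(x) = P(A = 1 | X = x)
% is the observed propensity score and \delta > 0 multiplies the odds of
% treatment.
\[
  q_\delta(x) \;=\; \frac{\delta\,\pi(x)}{\delta\,\pi(x) + 1 - \pi(x)}
\]
% q_\delta(x) = 0 when \pi(x) = 0 and q_\delta(x) = 1 when \pi(x) = 1, so the
% shifted policy never assigns treatments that were impossible under the
% observed regime; this is why no positivity assumption is needed.
```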
- Online Estimation and Inference for Robust Policy Evaluation in Reinforcement Learning [7.875680651592574]
We develop an online robust policy evaluation procedure and establish the limiting distribution of our estimator based on its Bahadur representation.
This paper bridges the gap between robust statistics and statistical inference in reinforcement learning, offering a more versatile and reliable approach to policy evaluation.
arXiv Detail & Related papers (2023-10-04T04:57:35Z)
- Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning [53.97273491846883]
We propose DPE: an RL algorithm that blends offline sequence modeling and offline reinforcement learning with Double Policy Estimation.
We validate our method on multiple OpenAI Gym tasks from the D4RL benchmark.
arXiv Detail & Related papers (2023-08-28T20:46:07Z)
- Representation-Driven Reinforcement Learning [57.44609759155611]
We present a representation-driven framework for reinforcement learning.
By representing policies as estimates of their expected values, we leverage techniques from contextual bandits to guide exploration and exploitation.
We demonstrate the effectiveness of this framework through its application to evolutionary and policy gradient-based approaches (a bandit-style selection sketch follows this entry).
arXiv Detail & Related papers (2023-05-31T14:59:12Z)
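As a hedged illustration of the contextual-bandit connection above, the sketch below treats a set of candidate policies as bandit arms and uses a UCB rule over their running value estimates to decide which policy to roll out next. The UCB choice and all names are ours; the paper's representation-driven framework is more general.

```python
# Hedged illustration: candidate policies as bandit arms, selected by an
# upper-confidence-bound (UCB) rule over running estimates of their values.
import math

def ucb_select(values, counts, t, c=1.0):
    """Return the index with the highest upper confidence bound."""
    best, best_score = 0, float("-inf")
    for i, (v, n) in enumerate(zip(values, counts)):
        score = float("inf") if n == 0 else v + c * math.sqrt(math.log(t) / n)
        if score > best_score:
            best, best_score = i, score
    return best

def select_policy(policies, rollout_return, rounds=100):
    """rollout_return(policy) -> one rollout's return (user-supplied)."""
    values = [0.0] * len(policies)
    counts = [0] * len(policies)
    for t in range(1, rounds + 1):
        i = ucb_select(values, counts, t)
        r = rollout_return(policies[i])
        counts[i] += 1
        values[i] += (r - values[i]) / counts[i]  # running mean of returns
    return max(range(len(policies)), key=lambda i: values[i])
```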
- A Regularized Implicit Policy for Offline Reinforcement Learning [54.7427227775581]
Offline reinforcement learning enables learning from a fixed dataset, without further interaction with the environment.
We propose a framework that supports learning a flexible yet well-regularized fully-implicit policy.
Experiments and an ablation study on the D4RL benchmark validate our framework and the effectiveness of our algorithmic designs.
arXiv Detail & Related papers (2022-02-19T20:22:04Z)
- Verified Probabilistic Policies for Deep Reinforcement Learning [6.85316573653194]
We tackle the problem of verifying probabilistic policies for deep reinforcement learning.
We propose an abstraction approach, based on interval Markov decision processes, that yields guarantees on a policy's execution.
We present techniques to build and solve these models using abstract interpretation, mixed-integer linear programming, entropy-based refinement, and probabilistic model checking (a sketch of the pessimistic backup step follows this entry).
arXiv Detail & Related papers (2022-01-10T23:55:04Z)
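To make the interval abstraction concrete, here is a hedged sketch of the pessimistic backup such approaches rely on: each transition probability is only known to lie in an interval, and a lower bound on the value is backed up by letting an adversary pick the worst distribution consistent with those intervals. All names are ours, and this is one step of interval value iteration, not the paper's full pipeline.

```python
# One pessimistic backup over transition-probability intervals [lo, hi], one
# interval per successor state: minimize the expected successor value over all
# distributions consistent with the intervals (assumes sum(lo) <= 1 <= sum(hi)).

def worst_case_expectation(lo, hi, values):
    """Min of sum_i p[i] * values[i] s.t. lo[i] <= p[i] <= hi[i], sum(p) = 1."""
    p = list(lo)                   # start every successor at its lower bound
    budget = 1.0 - sum(lo)         # probability mass still to be assigned
    # Greedily give the remaining mass to the lowest-valued successors first.
    for i in sorted(range(len(values)), key=lambda i: values[i]):
        extra = min(hi[i] - lo[i], budget)
        p[i] += extra
        budget -= extra
    return sum(pi * v for pi, v in zip(p, values))
```

For example, worst_case_expectation([0.1, 0.2], [0.6, 0.9], [1.0, 0.0]) pushes as much mass as allowed onto the zero-valued successor and returns 0.1; assigning the mass in decreasing value order instead yields the optimistic upper bound.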
- Reliable Off-policy Evaluation for Reinforcement Learning [53.486680020852724]
In a sequential decision-making problem, off-policy evaluation estimates the expected cumulative reward of a target policy.
We propose a novel framework that provides robust and optimistic cumulative reward estimates using one or more logged datasets (the standard baseline estimator is sketched after this entry).
arXiv Detail & Related papers (2020-11-08T23:16:19Z)
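The summary does not define the paper's estimators; for context, below is a hedged sketch of the standard per-trajectory importance-sampling baseline that robust and optimistic off-policy evaluation methods build on. The signatures (pi(a, s) returning a probability) are our assumption.

```python
# Standard per-trajectory importance-sampling (IS) estimator for off-policy
# evaluation; a baseline sketch, not the paper's method. A trajectory is a
# list of (state, action, reward) tuples logged under the behavior policy.

def is_estimate(trajectories, pi_target, pi_behavior, gamma=0.99):
    """pi_target(a, s) and pi_behavior(a, s) return action probabilities."""
    total = 0.0
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= pi_target(a, s) / pi_behavior(a, s)  # likelihood ratio
            ret += (gamma ** t) * r                        # discounted return
        total += weight * ret
    return total / len(trajectories)
```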
- Stable Policy Optimization via Off-Policy Divergence Regularization [50.98542111236381]
Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO) are among the most successful policy gradient approaches in deep reinforcement learning (RL).
We propose a new algorithm which stabilizes the policy improvement through a proximity term that constrains the discounted state-action visitation distribution induced by consecutive policies to be close to one another.
Our proposed method stabilizes learning and can improve final performance on benchmark high-dimensional control tasks (the regularized objective is sketched after this entry).
arXiv Detail & Related papers (2020-03-09T13:05:47Z)
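As a hedged schematic of the proximity-regularized update described above, the policy-improvement step can be written as a surrogate objective penalized by a divergence between successive discounted state-action visitation distributions; the notation ($d_\pi$, $A^{\pi_k}$, $D$, $\alpha$) is ours, not the paper's.

```latex
% Schematic proximity-regularized policy-improvement step (our notation).
% d_\pi is the discounted state-action visitation distribution, A^{\pi_k}
% the advantage under the current policy, D a divergence, and \alpha > 0 a
% trade-off coefficient.
\[
  \pi_{k+1} \;=\; \arg\max_{\pi}\;
    \mathbb{E}_{(s,a)\sim d_{\pi_k}}\!\left[
      \frac{\pi(a\mid s)}{\pi_k(a\mid s)}\, A^{\pi_k}(s,a)
    \right]
    \;-\; \alpha\, D\!\left(d_{\pi}\,\big\|\,d_{\pi_k}\right)
\]
```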
- Cautious Reinforcement Learning with Logical Constraints [78.96597639789279]
An adaptive safe padding forces Reinforcement Learning (RL) to synthesise optimal control policies while ensuring safety during the learning process.
Theoretical guarantees are available on the optimality of the synthesised policies and on the convergence of the learning algorithm.
arXiv Detail & Related papers (2020-02-26T00:01:08Z)