Optimizing Pessimism in Dynamic Treatment Regimes: A Bayesian Learning Approach
- URL: http://arxiv.org/abs/2210.14420v1
- Date: Wed, 26 Oct 2022 02:14:10 GMT
- Title: Optimizing Pessimism in Dynamic Treatment Regimes: A Bayesian Learning Approach
- Authors: Yunzhe Zhou, Zhengling Qi, Chengchun Shi and Lexin Li
- Abstract summary: We propose a novel pessimism-based Bayesian learning method for optimal dynamic treatment regimes in the offline setting.
We integrate the pessimism principle with Thompson sampling and Bayesian machine learning to optimize the degree of pessimism.
We develop a computational algorithm based on variational inference that is highly efficient and scalable.
- Score: 6.7826352751791985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this article, we propose a novel pessimism-based Bayesian learning method for optimal dynamic treatment regimes in the offline setting. When the coverage condition does not hold, which is common for offline data, existing solutions can produce sub-optimal policies. The pessimism principle addresses this issue by discouraging the recommendation of actions that are less explored conditional on the state. However, nearly all pessimism-based methods rely on a key hyper-parameter that quantifies the degree of pessimism, and their performance can be highly sensitive to the choice of this parameter. We propose to integrate the pessimism principle with Thompson sampling and Bayesian machine learning to optimize the degree of pessimism. We derive a credible set whose boundary uniformly lower bounds the optimal Q-function, so no additional tuning of the degree of pessimism is required. We develop a general Bayesian learning method that works with a range of models, from Bayesian linear basis models to Bayesian neural networks. We also develop a computational algorithm based on variational inference that is highly efficient and scalable. We establish theoretical guarantees for the proposed method and show empirically, through both simulations and a real data example, that it outperforms existing state-of-the-art solutions.
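To make the pessimism principle concrete, here is a minimal illustrative sketch, not the authors' algorithm: fit a Bayesian linear basis model for Q on offline data, draw posterior weight samples in a Thompson-sampling style, and recommend the action whose credible lower bound on Q is largest. All function and variable names below are hypothetical.

```python
# Illustrative sketch only: pessimism via a credible lower bound on Q computed
# from posterior samples of a Bayesian linear (basis-function) Q-model.
import numpy as np

def fit_bayesian_linear_q(Phi, y, noise_var=1.0, prior_var=10.0):
    """Gaussian posterior over weights w for Q(s, a) ~= phi(s, a) @ w."""
    d = Phi.shape[1]
    precision = Phi.T @ Phi / noise_var + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ Phi.T @ y / noise_var
    return mean, cov

def pessimistic_action(action_features, mean, cov, n_samples=500, alpha=0.05, rng=None):
    """Pick the action whose alpha-level credible lower bound on Q is largest."""
    rng = np.random.default_rng() if rng is None else rng
    w_samples = rng.multivariate_normal(mean, cov, size=n_samples)  # posterior draws
    lower_bounds = []
    for phi in action_features:                 # one feature vector per candidate action
        q_samples = w_samples @ phi             # Thompson-style samples of Q(s, a)
        lower_bounds.append(np.quantile(q_samples, alpha))
    return int(np.argmax(lower_bounds))
```

Less-explored actions have wider posteriors and therefore smaller lower bounds, so they are discouraged automatically, without hand-tuning a pessimism coefficient.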
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
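As a rough illustration of the gradient-free idea in the paper above (not its specific algorithm), a two-point zeroth-order step estimates the gradient from two function evaluations along a random direction; the step size and smoothing constants below are illustrative.

```python
# Generic two-point zeroth-order SGD step (illustrative, not the paper's method).
import numpy as np

def zo_sgd_step(f, x, lr=0.1, mu=1e-3, rng=None):
    """One gradient-free descent step using a random-direction finite difference."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                           # random unit direction
    g = (f(x + mu * u) - f(x - mu * u)) / (2 * mu)   # directional derivative estimate
    return x - lr * x.size * g * u                   # scale by dimension and descend
```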
- Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z)
- Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity [51.476337785345436]
We study a pessimistic variant of Q-learning in the context of finite-horizon Markov decision processes.
A variance-reduced pessimistic Q-learning algorithm is proposed to achieve near-optimal sample complexity.
arXiv Detail & Related papers (2022-02-28T15:39:36Z)
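A hedged sketch of the generic pessimistic Q-learning idea summarized above: subtract a count-based penalty from the Bellman target so rarely observed state-action pairs are valued conservatively. For brevity this uses a simple discounted update, whereas the cited paper studies finite-horizon MDPs with a variance-reduced estimator; all names are illustrative.

```python
# Illustrative pessimistic (LCB-style) Q-learning update with a count-based penalty.
import math
from collections import defaultdict

def pessimistic_q_update(Q, counts, s, a, r, s_next, actions, gamma=0.99, c=1.0):
    """Q: defaultdict(float) over (state, action) pairs; counts: defaultdict(int)."""
    counts[(s, a)] += 1
    n = counts[(s, a)]
    lr = 1.0 / n                                  # decaying step size
    bonus = c * math.sqrt(math.log(n + 1) / n)    # pessimism penalty shrinks with data
    v_next = max(Q[(s_next, b)] for b in actions)
    target = r + gamma * v_next - bonus
    Q[(s, a)] += lr * (target - Q[(s, a)])
```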
- A Surrogate Objective Framework for Prediction+Optimization with Soft Constraints [29.962390392493507]
Decision-focused prediction approaches, such as SPO+ and direct optimization, have been proposed to integrate prediction with downstream optimization.
This paper proposes a novel analytically differentiable surrogate objective framework for real-world linear and semi-definite negative quadratic programming problems.
arXiv Detail & Related papers (2021-11-22T17:09:57Z)
- Bayesian Optimisation for Constrained Problems [0.0]
We propose a novel variant of the well-known Knowledge Gradient acquisition function that allows it to handle constraints.
We empirically compare the new algorithm with four other state-of-the-art constrained Bayesian optimisation algorithms and demonstrate its superior performance.
arXiv Detail & Related papers (2021-05-27T15:43:09Z)
- On the Optimality of Batch Policy Optimization Algorithms [106.89498352537682]
Batch policy optimization considers leveraging existing data for policy construction before interacting with an environment.
We show that any confidence-adjusted index algorithm is minimax optimal, whether it be optimistic, pessimistic or neutral.
We introduce a new weighted-minimax criterion that considers the inherent difficulty of optimal value prediction.
arXiv Detail & Related papers (2021-04-06T05:23:20Z)
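A small sketch of what a confidence-adjusted index rule can look like on batch data (illustrative, not the paper's formulation): one index formula covers optimistic, pessimistic, and neutral choices through the sign of alpha.

```python
# Confidence-adjusted index for batch (offline) bandit data: mean + alpha * width.
# alpha > 0 is optimistic, alpha < 0 pessimistic, alpha = 0 neutral.
import numpy as np

def confidence_adjusted_choice(rewards_per_arm, alpha):
    indices = []
    for rewards in rewards_per_arm:              # list of observed rewards per arm
        rewards = np.asarray(rewards, dtype=float)
        width = 1.0 / np.sqrt(len(rewards))      # generic 1/sqrt(n) confidence width
        indices.append(rewards.mean() + alpha * width)
    return int(np.argmax(indices))
```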
- An Online Prediction Approach Based on Incremental Support Vector Machine for Dynamic Multiobjective Optimization [19.336520152294213]
We propose a novel prediction algorithm based on an incremental support vector machine (ISVM).
We treat the solving of dynamic multiobjective optimization problems (DMOPs) as an online learning process.
The proposed algorithm can effectively tackle dynamic multiobjective optimization problems.
arXiv Detail & Related papers (2021-02-24T08:51:23Z)
- COMBO: Conservative Offline Model-Based Policy Optimization [120.55713363569845]
Uncertainty estimation with complex models, such as deep neural networks, can be difficult and unreliable.
We develop a new model-based offline RL algorithm, COMBO, that regularizes the value function on out-of-support state-action pairs.
We find that COMBO consistently performs as well as or better than prior offline model-free and model-based methods.
arXiv Detail & Related papers (2021-02-16T18:50:32Z)
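A hedged sketch of the conservative regularization idea behind COMBO: push Q-values down on state-action pairs drawn from model rollouts (potentially out-of-support) and up on pairs from the offline dataset. The exact objective in the paper differs, and the network and batch names here are assumptions.

```python
# Conservative value regularizer added to the usual Bellman loss (illustrative).
import torch

def conservative_penalty(q_net, model_batch, data_batch, beta=1.0):
    """Returns beta * (E_model[Q] - E_data[Q]); minimizing it keeps Q low off-support."""
    q_model = q_net(model_batch["obs"], model_batch["act"]).mean()
    q_data = q_net(data_batch["obs"], data_batch["act"]).mean()
    return beta * (q_model - q_data)
```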
- Is Pessimism Provably Efficient for Offline RL? [104.00628430454479]
We study offline reinforcement learning (RL), which aims to learn an optimal policy based on a dataset collected a priori.
We propose a pessimistic variant of the value iteration algorithm (PEVI), which incorporates an uncertainty quantifier as the penalty function.
arXiv Detail & Related papers (2020-12-30T09:06:57Z)
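A simplified sketch of pessimistic value iteration in the spirit of PEVI: subtract an uncertainty quantifier Gamma(s, a) from the estimated Bellman backup at each step of finite-horizon backward induction. The estimated rewards, transitions, and Gamma are assumed to be given; shapes and names are illustrative.

```python
# PEVI-style backward induction with an uncertainty penalty (illustrative sketch).
import numpy as np

def pessimistic_value_iteration(r_hat, P_hat, Gamma):
    """r_hat, Gamma: (H, S, A) arrays; P_hat: (H, S, A, S). Returns pessimistic Q and V."""
    H, S, A = r_hat.shape
    V = np.zeros((H + 1, S))
    Q = np.zeros((H, S, A))
    for h in reversed(range(H)):
        backup = r_hat[h] + P_hat[h] @ V[h + 1]        # (S, A) empirical Bellman backup
        Q[h] = np.clip(backup - Gamma[h], 0.0, None)   # subtract penalty, keep non-negative
        V[h] = Q[h].max(axis=1)
    return Q, V
```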