Combining Online Learning and Offline Learning for Contextual Bandits
with Deficient Support
- URL: http://arxiv.org/abs/2107.11533v1
- Date: Sat, 24 Jul 2021 05:07:43 GMT
- Title: Combining Online Learning and Offline Learning for Contextual Bandits
with Deficient Support
- Authors: Hung Tran-The, Sunil Gupta, Thanh Nguyen-Tang, Santu Rana, Svetha
Venkatesh
- Abstract summary: Current offline-policy learning algorithms are mostly based on inverse propensity score (IPS) weighting.
We propose a novel approach that uses a hybrid of offline learning with online exploration.
Our approach determines an optimal policy with theoretical guarantees using the minimal number of online explorations.
- Score: 53.11601029040302
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address policy learning with logged data in contextual bandits. Current
offline-policy learning algorithms are mostly based on inverse propensity score
(IPS) weighting, which requires the logging policy to have \emph{full support}, i.e., a
non-zero probability for any context/action pair of the evaluation policy. However,
many real-world systems do not guarantee such logging policies, especially when
the action space is large and many actions have poor or missing rewards. Under
such \emph{support deficiency}, offline learning fails to find optimal
policies. We propose a novel approach that uses a hybrid of offline learning
and online exploration. Online exploration is used to explore actions that are
unsupported in the logged data, whilst offline learning exploits the supported
actions from the logged data, avoiding unnecessary exploration. Our approach
determines an optimal policy with theoretical guarantees using the minimal
number of online explorations. We demonstrate our algorithms' effectiveness
empirically on a diverse collection of datasets.
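A minimal sketch of the setting described in the abstract, in Python: it shows how a standard IPS value estimate uses logging propensities, why actions with zero propensity (deficient support) never contribute to the estimate, and how the action space could be split into supported actions (exploited offline) and unsupported actions (left to online exploration). The names ips_value_estimate, split_actions_by_support, logged_data, and target_policy are illustrative assumptions, not the authors' implementation, and support is treated as context-independent for simplicity.

```python
# Illustrative sketch only; function and variable names are assumptions,
# not the authors' code. Logged data: (context, action, reward, propensity).
import numpy as np

def ips_value_estimate(logged_data, target_policy):
    """Standard IPS estimate of the target policy's value from logged data.

    target_policy(context, action) returns pi(action | context); mu is the
    logging propensity mu(action | context). Under deficient support, actions
    with mu(a | x) = 0 never appear in the log, so their rewards are never
    observed and the estimate is biased for policies that take them.
    """
    terms = [target_policy(x, a) / mu * r for x, a, r, mu in logged_data]
    return float(np.mean(terms))

def split_actions_by_support(logged_data, action_space):
    """Partition the action space into supported / unsupported actions.

    Simplification: support is treated as context-independent here. The hybrid
    idea in the abstract is to exploit supported actions offline and spend
    online exploration only on the unsupported ones.
    """
    supported = {a for _, a, _, mu in logged_data if mu > 0}
    unsupported = set(action_space) - supported
    return supported, unsupported
```

Under such a split, the logged data suffices for value estimation on the supported actions, while the unsupported set marks exactly where online exploration is still required.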
Related papers
- Rethinking Optimal Transport in Offline Reinforcement Learning [64.56896902186126]
In offline reinforcement learning, the data is provided by various experts and some of them can be sub-optimal.
To extract an efficient policy, it is necessary to stitch together the best behaviors from the dataset.
We present an algorithm that aims to find a policy that maps states to a partial distribution of the best expert actions for each given state.
arXiv Detail & Related papers (2024-10-17T22:36:43Z)
- Understanding the performance gap between online and offline alignment algorithms [63.137832242488926]
We show that offline algorithms train the policy to become good at pairwise classification, while online algorithms are good at generation.
This hints at a unique interplay between discriminative and generative capabilities, which is greatly impacted by the sampling process.
Our study sheds light on the pivotal role of on-policy sampling in AI alignment, and hints at certain fundamental challenges of offline alignment algorithms.
arXiv Detail & Related papers (2024-05-14T09:12:30Z)
- Agnostic Interactive Imitation Learning: New Theory and Practical Algorithms [22.703438243976876]
We study interactive imitation learning, where a learner interactively queries a demonstrating expert for action annotations.
We propose a new oracle-efficient algorithm MFTPL-P with provable finite-sample guarantees.
arXiv Detail & Related papers (2023-12-28T07:05:30Z)
- Efficient Online Reinforcement Learning with Offline Data [78.92501185886569]
We show that we can simply apply existing off-policy methods to leverage offline data when learning online.
We extensively ablate these design choices, demonstrating the key factors that most affect performance.
We see that correct application of these simple recommendations can provide a $\mathbf{2.5\times}$ improvement over existing approaches; an illustrative sketch of one possible such recipe appears after this list.
arXiv Detail & Related papers (2023-02-06T17:30:22Z)
- Benchmarks and Algorithms for Offline Preference-Based Reward Learning [41.676208473752425]
We propose an approach that uses an offline dataset to craft preference queries via pool-based active learning.
Our proposed approach does not require actual physical rollouts or an accurate simulator for either the reward learning or policy optimization steps.
arXiv Detail & Related papers (2023-01-03T23:52:16Z)
- Reinforcement Learning with Sparse Rewards using Guidance from Offline Demonstration [9.017416068706579]
A major challenge in real-world reinforcement learning (RL) is the sparsity of reward feedback.
We develop an algorithm that exploits the offline demonstration data generated by a sub-optimal behavior policy.
We demonstrate the superior performance of our algorithm over state-of-the-art approaches.
arXiv Detail & Related papers (2022-02-09T18:45:40Z)
- Curriculum Offline Imitation Learning [72.1015201041391]
Offline reinforcement learning tasks require the agent to learn from a pre-collected dataset with no further interactions with the environment.
We propose Curriculum Offline Imitation Learning (COIL), which utilizes an experience-picking strategy for imitating from adaptive neighboring policies with higher returns.
On continuous control benchmarks, we compare COIL against both imitation-based and RL-based methods, showing that it not only avoids merely learning a mediocre behavior on mixed datasets but is even competitive with state-of-the-art offline RL methods.
arXiv Detail & Related papers (2021-11-03T08:02:48Z)
- MUSBO: Model-based Uncertainty Regularized and Sample Efficient Batch Optimization for Deployment Constrained Reinforcement Learning [108.79676336281211]
Continuous deployment of new policies for data collection and online learning is either cost-ineffective or impractical.
We propose a new algorithmic learning framework called Model-based Uncertainty Regularized and Sample Efficient Batch Optimization (MUSBO).
Our framework discovers novel and high quality samples for each deployment to enable efficient data collection.
arXiv Detail & Related papers (2021-02-23T01:30:55Z)
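As a hedged illustration of the "leverage offline data when learning online" idea mentioned in the entry on Efficient Online Reinforcement Learning with Offline Data above: one common design is to fill each off-policy training batch partly from the offline dataset and partly from the online replay buffer. The function name, buffer representation, and the 50/50 split below are assumptions for illustration, not that paper's prescription.

```python
# Hedged sketch: mixing offline and online transitions for an off-policy
# update. The buffer contents and the 50/50 ratio are assumptions.
import random

def sample_mixed_batch(offline_buffer, online_buffer, batch_size=256):
    """Draw half the batch from offline data and half from online replay."""
    half = batch_size // 2
    offline_part = random.choices(offline_buffer, k=half)            # sampled with replacement
    online_part = random.choices(online_buffer, k=batch_size - half)
    return offline_part + online_part
```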
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.