Principled Exploration via Optimistic Bootstrapping and Backward
Induction
- URL: http://arxiv.org/abs/2105.06022v2
- Date: Mon, 17 May 2021 00:22:00 GMT
- Title: Principled Exploration via Optimistic Bootstrapping and Backward
Induction
- Authors: Chenjia Bai, Lingxiao Wang, Lei Han, Jianye Hao, Animesh Garg, Peng
Liu, Zhaoran Wang
- Abstract summary: We propose a principled exploration method for Deep Reinforcement Learning (DRL) through Optimistic Bootstrapping and Backward Induction (OB2I).
OB2I constructs a general-purpose UCB-bonus through non-parametric bootstrap in DRL.
We build theoretical connections between the proposed UCB-bonus and the LSVI-UCB in a linear setting.
- Score: 84.78836146128238
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One principled approach for provably efficient exploration is incorporating
the upper confidence bound (UCB) into the value function as a bonus. However,
UCB is tailored to linear and tabular settings and is incompatible
with Deep Reinforcement Learning (DRL). In this paper, we propose a principled
exploration method for DRL through Optimistic Bootstrapping and Backward
Induction (OB2I). OB2I constructs a general-purpose UCB-bonus through
non-parametric bootstrap in DRL. The UCB-bonus estimates the epistemic
uncertainty of state-action pairs for optimistic exploration. We build
theoretical connections between the proposed UCB-bonus and the LSVI-UCB in a
linear setting. We propagate future uncertainty in a time-consistent manner
through an episodic backward update, which exploits this theoretical advantage and
empirically improves sample efficiency. Our experiments in the MNIST maze
and Atari suite suggest that OB2I outperforms several state-of-the-art
exploration approaches.
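A minimal sketch of how such an OB2I-style update could look in code, assuming the UCB-bonus is read as the standard deviation across bootstrapped Q-heads and the episodic backward update replays each stored episode in reverse, so optimism at later steps propagates into earlier targets. The head count, bonus scale `BETA`, and network sizes below are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of an OB2I-style update (illustrative, not the paper's code).
# Assumptions: the UCB-bonus is the standard deviation across K bootstrapped
# Q-heads, and each episode is replayed in reverse so uncertainty at later
# steps propagates backward into earlier targets.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, N_HEADS = 4, 3, 5
GAMMA, BETA = 0.99, 1.0            # discount and assumed bonus scale

class BootstrappedQ(nn.Module):
    """K Q-heads; in practice each head would be trained on a bootstrapped
    resample of the data (per-head masks omitted here for brevity)."""
    def __init__(self):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                          nn.Linear(64, N_ACTIONS))
            for _ in range(N_HEADS)])

    def forward(self, s):                               # s: [B, STATE_DIM]
        return torch.stack([h(s) for h in self.heads])  # [K, B, A]

def ucb_values(q_net, s):
    """Optimistic value estimate: ensemble mean plus a bootstrap UCB-bonus."""
    q = q_net(s)                                        # [K, B, A]
    return q.mean(0) + BETA * q.std(0)                  # [B, A]

def backward_episode_update(q_net, optimizer, episode):
    """Replay one episode in reverse (backward induction over time steps)."""
    for s, a, r, s_next, done in reversed(episode):
        with torch.no_grad():
            bootstrap = ucb_values(q_net, s_next).max(dim=1).values
            target = r + GAMMA * (1.0 - done) * bootstrap           # [B]
        q_sa = q_net(s)[:, torch.arange(s.shape[0]), a]             # [K, B]
        loss = ((q_sa - target.unsqueeze(0)) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Toy usage on a random 8-step episode (batch size 1).
q_net = BootstrappedQ()
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
episode = [(torch.randn(1, STATE_DIM), torch.tensor([0]), torch.tensor([0.1]),
            torch.randn(1, STATE_DIM), torch.tensor([0.0])) for _ in range(8)]
backward_episode_update(q_net, opt, episode)
```

In the linear setting, the bonus the paper connects this to is the LSVI-UCB elliptical bonus beta * sqrt(phi(s,a)^T Lambda^{-1} phi(s,a)), where Lambda is the regularized Gram matrix of the features.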
Related papers
- Offline Behavior Distillation [57.6900189406964]
Massive reinforcement learning (RL) data are typically collected to train policies offline without the need for interactions.
We formulate offline behavior distillation (OBD), which synthesizes limited expert behavioral data from sub-optimal RL data.
We propose two naive OBD objectives, DBC and PBC, which measure distillation performance via the decision difference between policies trained on distilled data and either offline data or a near-expert policy.
arXiv Detail & Related papers (2024-10-30T06:28:09Z)
- Preference-Guided Reinforcement Learning for Efficient Exploration [7.83845308102632]
We introduce LOPE: Learning Online with trajectory Preference guidancE, an end-to-end preference-guided RL framework.
Our intuition is that LOPE directly adjusts the focus of online exploration by considering human feedback as guidance.
LOPE outperforms several state-of-the-art methods regarding convergence rate and overall performance.
arXiv Detail & Related papers (2024-07-09T02:11:12Z)
- Accelerating material discovery with a threshold-driven hybrid acquisition policy-based Bayesian optimization [4.021352247826289]
This paper introduces a novel Threshold-Driven UCB-EI Bayesian Optimization (TDUE-BO) method.
It dynamically integrates the strengths of Upper Confidence Bound (UCB) and Expected Improvement (EI) acquisition functions to optimize the material discovery process.
It shows significantly better approximation and optimization performance than EI- and UCB-based BO methods in terms of RMSE and convergence efficiency (a hybrid-acquisition sketch follows this list).
arXiv Detail & Related papers (2023-11-16T06:02:48Z)
- Model-based Causal Bayesian Optimization [74.78486244786083]
We introduce the first algorithm for Causal Bayesian Optimization with Multiplicative Weights (CBO-MW).
We derive regret bounds for CBO-MW that naturally depend on graph-related quantities.
Our experiments include a realistic demonstration of how CBO-MW can be used to learn users' demand patterns in a shared mobility system.
arXiv Detail & Related papers (2023-07-31T13:02:36Z)
- BOF-UCB: A Bayesian-Optimistic Frequentist Algorithm for Non-Stationary Contextual Bandits [16.59103967569845]
We propose a novel Bayesian-Optimistic Frequentist Upper Confidence Bound (BOF-UCB) algorithm for contextual linear bandits in non-stationary environments.
This unique combination of Bayesian and frequentist principles enhances adaptability and performance in dynamic settings.
arXiv Detail & Related papers (2023-07-07T13:29:07Z)
- Dynamic Exploration-Exploitation Trade-Off in Active Learning Regression with Bayesian Hierarchical Modeling [4.132882666134921]
Methods that consider exploration and exploitation simultaneously employ fixed or ad-hoc measures to control the trade-off, which may not be optimal.
We develop a Bayesian hierarchical approach, referred to as BHEEM, to dynamically balance the exploration-exploitation trade-off.
arXiv Detail & Related papers (2023-04-16T01:40:48Z)
- Rewarding Episodic Visitation Discrepancy for Exploration in Reinforcement Learning [64.8463574294237]
We propose Rewarding Episodic Visitation Discrepancy (REVD) as an efficient and quantified exploration method.
REVD provides intrinsic rewards by evaluating the Rényi divergence-based visitation discrepancy between episodes (an illustrative bonus sketch follows this list).
It is tested on PyBullet Robotics Environments and Atari games.
arXiv Detail & Related papers (2022-09-19T08:42:46Z)
- BADDr: Bayes-Adaptive Deep Dropout RL for POMDPs [22.78390558602203]
We present a representation-agnostic formulation of BRL under partial observability, unifying the previous models under one theoretical umbrella.
We also propose a novel derivation, Bayes-Adaptive Deep Dropout RL (BADDr), based on dropout networks.
arXiv Detail & Related papers (2022-02-17T19:48:35Z)
- High-Dimensional Bayesian Optimisation with Variational Autoencoders and Deep Metric Learning [119.91679702854499]
We introduce a method based on deep metric learning to perform Bayesian optimisation over high-dimensional, structured input spaces.
We achieve such an inductive bias using just 1% of the available labelled data.
As an empirical contribution, we present state-of-the-art results on real-world high-dimensional black-box optimisation problems.
arXiv Detail & Related papers (2021-06-07T13:35:47Z)
- Provably Efficient Reward-Agnostic Navigation with Linear Value Iteration [143.43658264904863]
We show how, under a more standard notion of low inherent Bellman error typically employed in least-squares value iteration-style algorithms, one can obtain strong PAC guarantees on learning a near-optimal value function.
We present a computationally tractable algorithm for the reward-free setting and show how it can be used to learn a near optimal policy for any (linear) reward function.
arXiv Detail & Related papers (2020-08-18T04:34:21Z)
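For the threshold-driven UCB-EI entry above, here is a hypothetical sketch of one way such a hybrid acquisition can be wired together: score candidates with UCB while the surrogate's average predictive uncertainty is high, and switch to Expected Improvement once it drops below a threshold. The switching rule, `kappa`, `xi`, kernel, and threshold value are assumptions for illustration, not the paper's TDUE-BO schedule.

```python
# Hypothetical threshold-driven hybrid acquisition: UCB while the surrogate is
# uncertain on average, Expected Improvement once uncertainty drops below a
# threshold.  All hyperparameters here are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def ucb(mu, sigma, kappa=2.0):
    return mu + kappa * sigma

def expected_improvement(mu, sigma, best_y, xi=0.01):
    sigma = np.maximum(sigma, 1e-9)
    imp = mu - best_y - xi
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

def propose(gp, candidates, best_y, sigma_threshold=0.1):
    mu, sigma = gp.predict(candidates, return_std=True)
    if sigma.mean() > sigma_threshold:          # assumed switching rule
        scores = ucb(mu, sigma)                 # explore while uncertain
    else:
        scores = expected_improvement(mu, sigma, best_y)
    return candidates[np.argmax(scores)]

# Toy maximisation of a 1-D black-box function.
rng = np.random.default_rng(0)
f = lambda x: -np.sin(3 * x) - x ** 2 + 0.7 * x
X = rng.uniform(-1, 2, size=(4, 1))
y = f(X).ravel()
cands = np.linspace(-1, 2, 200).reshape(-1, 1)
for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                  alpha=1e-6, normalize_y=True).fit(X, y)
    x_next = propose(gp, cands, y.max()).reshape(1, -1)
    X, y = np.vstack([X, x_next]), np.append(y, f(x_next).ravel())
print("best x, best y:", X[np.argmax(y)], y.max())
```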
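For the REVD entry above, a heavily simplified, hypothetical sketch of an episodic visitation bonus in the same spirit: each state in the current episode is rewarded according to how far it lies from the previous episode relative to its own k-nearest neighbors. The ratio form, `alpha`, and `k` are assumptions; the paper's actual Rényi-divergence estimator and scaling are not reproduced here.

```python
# Simplified, assumption-laden sketch of an episodic visitation bonus: states
# that lie far from the previous episode but close to their own neighbors get
# a larger intrinsic reward.  Not the paper's exact estimator.
import numpy as np

def knn_distance(queries, refs, k=3):
    """Distance from each query point to its k-th nearest neighbor in refs."""
    d = np.linalg.norm(queries[:, None, :] - refs[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k - 1]

def episodic_visitation_bonus(cur_states, prev_states, k=3, alpha=0.5):
    """Per-state intrinsic reward for the current episode (illustrative)."""
    rho = knn_distance(cur_states, cur_states, k=k + 1)  # skip self-distance
    nu = knn_distance(cur_states, prev_states, k=k)
    eps = 1e-8
    # Ratio of cross-episode to within-episode kNN distances, tempered by alpha.
    return ((nu + eps) / (rho + eps)) ** (1.0 - alpha)

# Toy usage: two random "episodes" of 64 states in a 4-D state space.
rng = np.random.default_rng(0)
prev_ep = rng.normal(size=(64, 4))
cur_ep = rng.normal(loc=0.5, size=(64, 4))
bonus = episodic_visitation_bonus(cur_ep, prev_ep)
print(bonus.shape, float(bonus.mean()))
```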