Tuning Mixed Input Hyperparameters on the Fly for Efficient Population
Based AutoRL
- URL: http://arxiv.org/abs/2106.15883v1
- Date: Wed, 30 Jun 2021 08:15:59 GMT
- Title: Tuning Mixed Input Hyperparameters on the Fly for Efficient Population
Based AutoRL
- Authors: Jack Parker-Holder and Vu Nguyen and Shaan Desai and Stephen Roberts
- Abstract summary: We introduce a new efficient hierarchical approach for optimizing both continuous and categorical variables.
We show that explicitly modelling dependence between data augmentation and other hyperparameters improves generalization.
- Score: 12.135280422000635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite a series of recent successes in reinforcement learning (RL), many RL
algorithms remain sensitive to hyperparameters. As such, there has recently
been interest in the field of AutoRL, which seeks to automate design decisions
to create more general algorithms. Recent work suggests that population based
approaches may be effective AutoRL algorithms, by learning hyperparameter
schedules on the fly. In particular, the PB2 algorithm is able to achieve
strong performance in RL tasks by formulating online hyperparameter
optimization as a time-varying GP-bandit problem, while also providing
theoretical guarantees. However, PB2 is only designed to work for continuous
hyperparameters, which severely limits its utility in practice. In this paper
we introduce a new (provably) efficient hierarchical approach for optimizing
both continuous and categorical variables, using a new time-varying bandit
algorithm specifically designed for the population based training regime. We
evaluate our approach on the challenging Procgen benchmark, where we show that
explicitly modelling dependence between data augmentation and other
hyperparameters improves generalization.
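To make the hierarchical idea concrete, here is a minimal sketch, not the paper's exact algorithm: at each population-based-training interval a time-varying bandit (EXP3-style here) picks the categorical choice, e.g. the data augmentation, and a GP-UCB acquisition then picks the continuous hyperparameters conditioned on that choice, as in PB2. The augmentation set, hyperparameter ranges, bandit constants, and the use of scikit-learn's GP are all illustrative assumptions.

```python
# Hedged sketch of hierarchical categorical + continuous hyperparameter selection
# for population based training. Not the paper's algorithm; all settings illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
AUGMENTATIONS = ["crop", "cutout", "color_jitter"]        # categorical arm set (illustrative)
weights = np.ones(len(AUGMENTATIONS))                      # EXP3 weights, one per category
gamma, eta = 0.1, 0.5                                      # bandit exploration / step constants
history = {a: {"X": [], "y": []} for a in AUGMENTATIONS}   # per-category GP data

def select_categorical():
    """EXP3-style draw over the categorical hyperparameter."""
    probs = (1 - gamma) * weights / weights.sum() + gamma / len(weights)
    arm = rng.choice(len(AUGMENTATIONS), p=probs)
    return arm, probs[arm]

def select_continuous(aug, n_candidates=256, beta=2.0):
    """GP-UCB over (learning rate, clip range), conditioned on the chosen category."""
    cands = rng.uniform([1e-5, 0.1], [1e-2, 0.4], size=(n_candidates, 2))
    X, y = history[aug]["X"], history[aug]["y"]
    if len(X) < 3:                                         # too little data: explore randomly
        return cands[0]
    gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(X), np.array(y))
    mu, sigma = gp.predict(cands, return_std=True)
    return cands[np.argmax(mu + beta * sigma)]             # UCB acquisition

def report(arm, prob, x, improvement):
    """Feed back the observed improvement after one training interval."""
    aug = AUGMENTATIONS[arm]
    history[aug]["X"].append(list(x))
    history[aug]["y"].append(improvement)
    reward = float(np.clip(improvement, 0.0, 1.0))         # EXP3 assumes rewards in [0, 1]
    weights[arm] *= np.exp(eta * reward / (prob * len(weights)))

# One explore step for a single population member (toy reward stands in for
# the true change in episodic return after t_ready environment steps):
arm, prob = select_categorical()
lr, clip = select_continuous(AUGMENTATIONS[arm])
report(arm, prob, [lr, clip], improvement=float(rng.uniform(0.0, 1.0)))
```

Because a separate GP is kept per categorical choice, the continuous hyperparameters are modelled jointly with (i.e. conditioned on) the augmentation, which is the dependence the abstract refers to.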
Related papers
- ARLBench: Flexible and Efficient Benchmarking for Hyperparameter Optimization in Reinforcement Learning [42.33815055388433]
ARLBench is a benchmark for hyperparameter optimization (HPO) in reinforcement learning (RL).
It allows comparisons of diverse HPO approaches while being highly efficient in evaluation.
ARLBench is an efficient, flexible, and future-oriented foundation for research on AutoRL.
arXiv Detail & Related papers (2024-09-27T15:22:28Z) - Adaptive $Q$-Network: On-the-fly Target Selection for Deep Reinforcement Learning [18.579378919155864]
We propose the Adaptive $Q$-Network (AdaQN) to take into account the non-stationarity of the optimization procedure without requiring additional samples.
AdaQN is theoretically sound, and we empirically validate it in MuJoCo control problems and Atari 2600 games.
arXiv Detail & Related papers (2024-05-25T11:57:43Z) - Maximize to Explore: One Objective Function Fusing Estimation, Planning,
and Exploration [87.53543137162488]
We propose an easy-to-implement online reinforcement learning (online RL) framework called MEX.
MEX integrates estimation and planning components while balancing exploration and exploitation automatically.
It can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards.
arXiv Detail & Related papers (2023-05-29T17:25:26Z) - AutoRL Hyperparameter Landscapes [69.15927869840918]
Reinforcement Learning (RL) has been shown to be capable of producing impressive results, but its use is limited by the impact of its hyperparameters on performance.
We propose an approach to build and analyze these hyperparameter landscapes not just for one point in time but at multiple points in time throughout training.
This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential for gaining further insight into AutoRL problems through landscape analyses.
arXiv Detail & Related papers (2023-04-05T12:14:41Z) - Path Planning using Reinforcement Learning: A Policy Iteration Approach [0.0]
This research aims to shed light on the design space exploration associated with reinforcement learning parameters.
We propose an auto-tuner-based ordinal regression approach to accelerate the process of exploring these parameters.
Our approach provides 1.82x peak speedup with an average of 1.48x speedup over the previous state-of-the-art.
arXiv Detail & Related papers (2023-03-13T23:44:40Z) - Offline Policy Optimization in RL with Variance Regularization [142.87345258222942]
We propose variance regularization for offline RL algorithms, using stationary distribution corrections.
We show that by using Fenchel duality, we can avoid double sampling issues for computing the gradient of the variance regularizer.
The proposed algorithm for offline variance regularization (OVAR) can be used to augment any existing offline policy optimization algorithms.
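For readers unfamiliar with the duality trick mentioned above, a one-line sketch in illustrative notation (not the paper's exact derivation) shows how Fenchel duality removes the double-sampling problem in the variance term:

```latex
% Illustrative notation, not the paper's exact derivation.
% The variance contains the square of an expectation, whose gradient naively
% requires two independent samples of X (the double-sampling problem):
\mathrm{Var}_{d}[X] \;=\; \mathbb{E}_{d}[X^{2}] \;-\; \bigl(\mathbb{E}_{d}[X]\bigr)^{2}.
% Fenchel duality for the square function, y^{2} = \max_{\nu}\,(2\nu y - \nu^{2}),
% rewrites that term as a maximisation that is linear in the expectation:
\bigl(\mathbb{E}_{d}[X]\bigr)^{2} \;=\; \max_{\nu}\; \bigl(2\nu\,\mathbb{E}_{d}[X] - \nu^{2}\bigr),
% so an unbiased stochastic gradient needs only single samples of X, with the
% auxiliary variable \nu optimised jointly.
```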
arXiv Detail & Related papers (2022-12-29T18:25:01Z) - Automatic tuning of hyper-parameters of reinforcement learning
algorithms using Bayesian optimization with behavioral cloning [0.0]
In reinforcement learning (RL), the information content of data gathered by the learning agent is dependent on the setting of many hyperparameters.
In this work, a novel approach for autonomous hyperparameter setting using Bayesian optimization is proposed.
Experiments reveal promising results compared to manual tweaking and other optimization-based approaches.
arXiv Detail & Related papers (2021-12-15T13:10:44Z) - Online hyperparameter optimization by real-time recurrent learning [57.01871583756586]
Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs).
It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously.
This procedure yields systematically better generalization performance compared to standard methods, at a fraction of wallclock time.
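As a point of comparison, here is a hedged sketch of a much simpler relative of this idea, hypergradient descent on the learning rate: it also adapts a hyperparameter online inside the same training loop, but it is not the RTRL-based method of the paper above, and the toy objective and step sizes are purely illustrative.

```python
# Hypergradient descent on the learning rate (a simpler online HPO scheme,
# NOT the RTRL-based method of the paper above). Toy quadratic objective.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)           # model parameters for the toy objective ||w||^2
lr, meta_lr = 1e-2, 1e-4         # learning rate and its own (meta) step size
prev_grad = np.zeros_like(w)

def grad(w):
    return 2.0 * w               # gradient of ||w||^2

for step in range(1000):
    g = grad(w)
    # d(loss)/d(lr) ~ -g_t . g_{t-1}: raise the lr when consecutive gradients agree.
    lr += meta_lr * float(g @ prev_grad)
    lr = max(lr, 1e-6)           # keep the learning rate positive
    w -= lr * g                  # ordinary parameter update with the tuned lr
    prev_grad = g

print(f"final loss {float(w @ w):.2e}, final lr {lr:.2e}")
```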
arXiv Detail & Related papers (2021-02-15T19:36:18Z) - Cost-Efficient Online Hyperparameter Optimization [94.60924644778558]
We propose an online HPO algorithm that reaches human expert-level performance within a single run of the experiment, while incurring only modest computational overhead compared to regular training.
arXiv Detail & Related papers (2021-01-17T04:55:30Z) - Sample-Efficient Automated Deep Reinforcement Learning [33.53903358611521]
We propose a population-based automated RL framework to meta-optimize arbitrary off-policy RL algorithms.
By sharing the collected experience across the population, we substantially increase the sample efficiency of the meta-optimization.
We demonstrate the capabilities of our sample-efficient AutoRL approach in a case study with the popular TD3 algorithm in the MuJoCo benchmark suite.
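A minimal, self-contained sketch of the experience-sharing idea follows; the ToyEnv/ToyAgent classes, population size, and hyperparameter ranges are placeholders rather than the paper's TD3 setup. Every population member samples training batches from one shared replay buffer, so a transition collected once is reused under every hyperparameter configuration.

```python
# Hedged sketch of population-based off-policy training with a SHARED replay
# buffer. ToyEnv/ToyAgent are stand-ins, not the paper's TD3 implementation.
import random
from collections import deque

class ToyEnv:
    def reset(self):        return 0.0
    def step(self, action): return random.random(), random.random(), random.random() < 0.05, {}

class ToyAgent:
    def __init__(self, lr, tau): self.lr, self.tau, self.score = lr, tau, 0.0
    def act(self, obs):          return random.random()
    def update(self, batch):     self.score += sum(r for (_, _, r, _, _) in batch) / len(batch)

shared_buffer = deque(maxlen=100_000)                 # one buffer for the whole population
population = [ToyAgent(lr=10 ** random.uniform(-4, -3),
                       tau=random.uniform(0.001, 0.01)) for _ in range(4)]
env = ToyEnv()

for generation in range(3):
    for agent in population:
        obs = env.reset()
        for _ in range(500):                          # collect with this member's policy
            action = agent.act(obs)
            next_obs, reward, done, _ = env.step(action)
            shared_buffer.append((obs, action, reward, next_obs, done))
            obs = env.reset() if done else next_obs
        for _ in range(50):                           # train on everyone's experience
            batch = random.sample(list(shared_buffer), k=min(64, len(shared_buffer)))
            agent.update(batch)
    # PBT-style exploit/explore: the worst member copies the best and perturbs its hyperparameters.
    best = max(population, key=lambda a: a.score)
    worst = min(population, key=lambda a: a.score)
    worst.lr, worst.tau = best.lr * random.choice([0.8, 1.2]), best.tau * random.choice([0.8, 1.2])
```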
arXiv Detail & Related papers (2020-09-03T10:04:06Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)