Preventing Value Function Collapse in Ensemble Q-Learning by
Maximizing Representation Diversity
- URL: http://arxiv.org/abs/2006.13823v3
- Date: Fri, 21 Jan 2022 06:14:31 GMT
- Title: Preventing Value Function Collapse in Ensemble Q-Learning by
Maximizing Representation Diversity
- Authors: Hassam Ullah Sheikh, Ladislau Bölöni
- Abstract summary: Maxmin and Ensemble Q-learning algorithms have used different estimates provided by the ensembles of learners to reduce the overestimation bias.
Unfortunately, these learners can converge to the same point in the parametric or representation space, falling back to the classic single neural network DQN.
We propose and compare five regularization functions inspired by economics theory and consensus optimization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The classic DQN algorithm is limited by the overestimation bias of the
learned Q-function. Subsequent algorithms have proposed techniques to reduce
this problem, without fully eliminating it. Recently, the Maxmin and Ensemble
Q-learning algorithms have used different estimates provided by the ensembles
of learners to reduce the overestimation bias. Unfortunately, these learners
can converge to the same point in the parametric or representation space,
falling back to the classic single neural network DQN. In this paper, we
describe a regularization technique to maximize ensemble diversity in these
algorithms. We propose and compare five regularization functions inspired by
economics theory and consensus optimization. We show that the regularized
approach significantly outperforms the Maxmin and Ensemble Q-learning
algorithms as well as non-ensemble baselines.
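As a rough illustration of the idea in the abstract, the following is a minimal PyTorch sketch under assumptions, not the paper's implementation: an ensemble of Q-networks is trained against a shared Maxmin-style target, and a generic pairwise-distance penalty on the learners' feature representations is subtracted from the TD loss so the networks are pushed apart in representation space. The five economics- and consensus-optimization-inspired regularizers from the paper are not reproduced here; QNet, ensemble_loss, beta, and gamma are illustrative names.
```python
# Minimal sketch (assumptions, not the paper's exact method): ensemble
# Q-learning with a generic representation-diversity regularizer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QNet(nn.Module):
    """Small Q-network that also exposes its penultimate representation."""

    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs):
        phi = self.features(obs)      # representation phi_i(s)
        return self.head(phi), phi    # Q_i(s, .), phi_i(s)


def ensemble_loss(nets, target_nets, batch, gamma=0.99, beta=0.1):
    obs, act, rew, next_obs, done = batch
    # Maxmin-style target: greedy action value of the element-wise minimum
    # over the ensemble's target networks (reduces overestimation bias).
    with torch.no_grad():
        next_q = torch.stack([tn(next_obs)[0] for tn in target_nets])  # (N, B, A)
        q_min = next_q.min(dim=0).values                               # (B, A)
        target = rew + gamma * (1.0 - done) * q_min.max(dim=-1).values

    td_loss, phis = 0.0, []
    for net in nets:
        q, phi = net(obs)
        q_sa = q.gather(1, act.unsqueeze(1)).squeeze(1)
        td_loss = td_loss + F.mse_loss(q_sa, target)
        phis.append(phi)

    # Illustrative diversity term: mean pairwise squared distance between the
    # learners' representations. Subtracting it from the loss *maximizes*
    # diversity, discouraging the learners from collapsing to a single point.
    diversity = 0.0
    for i in range(len(phis)):
        for j in range(i + 1, len(phis)):
            diversity = diversity + (phis[i] - phis[j]).pow(2).mean()

    return td_loss - beta * diversity
```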
Related papers
- Double Successive Over-Relaxation Q-Learning with an Extension to Deep Reinforcement Learning [0.0]
Successive Over-Relaxation (SOR) Q-learning, which introduces a relaxation factor to speed up convergence, has two major limitations.
We propose a sample-based, model-free double SOR Q-learning algorithm.
The proposed algorithm is extended to large-scale problems using deep RL.
arXiv Detail & Related papers (2024-09-10T09:23:03Z) - Two-Step Q-Learning [0.0]
The paper proposes a novel off-policy two-step Q-learning algorithm that does not require importance sampling.
Numerical experiments demonstrate the superior performance of both the two-step Q-learning and its smooth variants.
arXiv Detail & Related papers (2024-07-02T15:39:00Z) - Multi-Timescale Ensemble Q-learning for Markov Decision Process Policy
Optimization [21.30645601474163]
Original Q-learning suffers from performance and complexity challenges across very large networks.
A new model-free ensemble reinforcement learning algorithm that adapts classical Q-learning is proposed to handle these challenges.
Numerical results show that the proposed algorithm can achieve up to 55% less average policy error with up to 50% less runtime complexity.
arXiv Detail & Related papers (2024-02-08T08:08:23Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - On the Convergence of Distributed Stochastic Bilevel Optimization
Algorithms over a Network [55.56019538079826]
Bilevel optimization has been applied to a wide variety of machine learning models.
Most existing algorithms are restricted to the single-machine setting and thus cannot handle distributed data.
We develop novel decentralized bilevel optimization algorithms based on a gradient tracking communication mechanism and two different gradient estimators.
arXiv Detail & Related papers (2022-06-30T05:29:52Z) - Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z) - Efficient Methods for Structured Nonconvex-Nonconcave Min-Max
Optimization [98.0595480384208]
We propose a generalization of the extragradient algorithm which converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $\ell_p$-normed finite-dimensional real vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z) - Hamilton-Jacobi Deep Q-Learning for Deterministic Continuous-Time
Systems with Lipschitz Continuous Controls [2.922007656878633]
We propose Q-learning algorithms for continuous-time deterministic optimal control problems with Lipschitz continuous controls.
A novel semi-discrete version of the HJB equation is proposed to design a Q-learning algorithm that uses data collected in discrete time without discretizing or approximating the system dynamics.
arXiv Detail & Related papers (2020-10-27T06:11:04Z) - Logistic Q-Learning [87.00813469969167]
We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs.
The main feature of our algorithm is a convex loss function for policy evaluation that serves as a theoretically sound alternative to the widely used squared Bellman error.
arXiv Detail & Related papers (2020-10-21T17:14:31Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
In theory, our method requires a much smaller number of communication rounds.
Our experiments on several benchmark datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Maxmin Q-learning: Controlling the Estimation Bias of Q-learning [31.742397178618624]
Overestimation bias affects Q-learning because it approximates the maximum action value using the maximum estimated action value.
We propose a generalization of Q-learning, called Maxmin Q-learning, which provides a parameter to flexibly control bias.
We empirically verify that our algorithm better controls estimation bias in toy environments, and that it achieves superior performance on several benchmark problems.
arXiv Detail & Related papers (2020-02-16T02:02:23Z)
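To make the bias mechanism in the Maxmin Q-learning entry above concrete, here is a small self-contained numerical demo (an illustration under assumptions, not code from any of these papers): with zero-mean estimation noise, the max over one learner's noisy action values is biased upward, while taking the element-wise minimum over an ensemble before the max shrinks that bias; the ensemble size acts as the bias-control parameter.
```python
# Toy illustration (not from the papers): overestimation from max over noisy
# Q-estimates, and how a Maxmin-style min-over-ensemble target reduces it.
import torch

torch.manual_seed(0)
n_actions, n_learners, trials, sigma = 10, 4, 100_000, 1.0

# True action values are all zero; estimates are corrupted by Gaussian noise.
noisy_q = sigma * torch.randn(trials, n_learners, n_actions)

# Standard Q-learning target uses one estimate: E[max_a Q_hat(s, a)] > 0.
single_max = noisy_q[:, 0, :].max(dim=-1).values

# Maxmin target: min over learners first, then max over actions.
maxmin = noisy_q.min(dim=1).values.max(dim=-1).values

print(f"max over a single estimate (bias): {single_max.mean().item():+.3f}")
print(f"max over ensemble minimum  (bias): {maxmin.mean().item():+.3f}")
```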