Adaptive Regularization of Representation Rank as an Implicit Constraint of Bellman Equation
- URL: http://arxiv.org/abs/2404.12754v1
- Date: Fri, 19 Apr 2024 10:00:34 GMT
- Title: Adaptive Regularization of Representation Rank as an Implicit Constraint of Bellman Equation
- Authors: Qiang He, Tianyi Zhou, Meng Fang, Setareh Maghsudi
- Abstract summary: We find a guiding principle for adaptive control of the representation rank.
We propose a novel regularizer, namely the BEllman Equation-based automatic rank Regularizer (BEER).
BEER adaptively regularizes the representation rank, thus improving the DRL agent's performance.
- Score: 40.88431931273978
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representation rank, which measures the expressive capacity of value networks, is an important concept for understanding the role of Neural Networks (NNs) in Deep Reinforcement Learning (DRL). Existing studies focus on maximizing this rank without bound; however, that approach can introduce overly complex models during learning and thus undermine performance. Hence, fine-tuning representation rank presents a challenging and crucial optimization problem. To address this issue, we find a guiding principle for adaptive control of the representation rank. We employ the Bellman equation as a theoretical foundation and derive an upper bound on the cosine similarity of the value network's representations of consecutive state-action pairs. We then leverage this upper bound to propose a novel regularizer, namely the BEllman Equation-based automatic rank Regularizer (BEER). This regularizer adaptively regularizes the representation rank, thus improving the DRL agent's performance. We first validate the effectiveness of automatic rank control in illustrative experiments. Then, we scale BEER up to complex continuous control tasks by combining it with the deterministic policy gradient method. On 12 challenging DeepMind Control tasks, BEER outperforms the baselines by a large margin. In addition, BEER demonstrates significant advantages in Q-value approximation. Our code is available at https://github.com/sweetice/BEER-ICLR2024.
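A minimal sketch of how a regularizer of this kind could be attached to a critic's loss is shown below. It is not the authors' implementation (see the linked repository for that); the representation tensors phi_sa and phi_next, the Bellman-derived bound, and the coefficient coef are illustrative assumptions, and the exact form of the bound derived in the paper is omitted.

```python
# Illustrative BEER-style penalty (a sketch, not the authors' code).
# Assumes phi_sa and phi_next are the value network's representations of
# consecutive state-action pairs (s, a) and (s', a'), and that `bound` is the
# Bellman-equation-derived upper bound on their cosine similarity.
import torch
import torch.nn.functional as F

def beer_style_penalty(phi_sa: torch.Tensor,
                       phi_next: torch.Tensor,
                       bound: torch.Tensor,
                       coef: float = 1e-3) -> torch.Tensor:
    """Penalize cosine similarity of consecutive representations above `bound`."""
    cos_sim = F.cosine_similarity(phi_sa, phi_next, dim=-1)  # shape: (batch,)
    violation = torch.clamp(cos_sim - bound, min=0.0)        # penalize only violations
    return coef * (violation ** 2).mean()

# Usage: add the penalty to the critic's ordinary TD loss, e.g.
#   total_loss = td_loss + beer_style_penalty(phi_sa, phi_next, bound)
```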
Related papers
- On The Global Convergence Of Online RLHF With Neural Parametrization [36.239015146313136]
Reinforcement Learning from Human Feedback (RLHF) aims to align large language models with human values.
RLHF is a three-stage process that includes supervised fine-tuning, reward learning, and policy learning.
We propose a bi-level formulation for AI alignment in parameterized settings and introduce a first-order approach to solve this problem.
arXiv Detail & Related papers (2024-10-21T03:13:35Z) - REBEL: Reward Regularization-Based Approach for Robotic Reinforcement Learning from Human Feedback [61.54791065013767]
A misalignment between the reward function and human preferences can lead to catastrophic outcomes in the real world.
Recent methods aim to mitigate misalignment by learning reward functions from human preferences.
We propose a novel concept of reward regularization within the robotic RLHF framework.
arXiv Detail & Related papers (2023-12-22T04:56:37Z) - Robust Low-Rank Matrix Completion via a New Sparsity-Inducing Regularizer [30.920908325825668]
This paper presents a novel loss function, referred to as hybrid ordinary-Welsch (HOW), and a new sparsity-inducing regularizer for the robust low-rank matrix completion problem.
arXiv Detail & Related papers (2023-10-07T09:47:55Z) - Understanding, Predicting and Better Resolving Q-Value Divergence in Offline-RL [86.0987896274354]
We first identify a fundamental pattern, self-excitation, as the primary cause of Q-value estimation divergence in offline RL.
We then propose a novel Self-Excite Eigenvalue Measure (SEEM) metric to measure the evolving property of Q-network at training.
For the first time, our theory can reliably decide whether the training will diverge at an early stage.
arXiv Detail & Related papers (2023-10-06T17:57:44Z) - Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large amounts of interaction between the agent and the environment.
We propose a new method that uses unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resiliency to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z) - Forward and inverse reinforcement learning sharing network weights and hyperparameters [3.705785916791345]
ERIL combines forward and inverse reinforcement learning (RL) under the framework of an entropy-regularized Markov decision process.
A forward RL step minimizes the reverse KL estimated by the inverse RL step.
We show that minimizing the reverse KL divergence is equivalent to finding an optimal policy (a minimal illustration of the reverse KL objective appears after this list).
arXiv Detail & Related papers (2020-08-17T13:12:44Z) - Stochastic Batch Augmentation with An Effective Distilled Dynamic Soft Label Regularizer [11.153892464618545]
We propose a framework called Stochastic Batch Augmentation (SBA) to address these problems.
SBA decides whether to augment at iterations controlled by the batch scheduler, and introduces a "distilled" dynamic soft label regularization.
Our experiments on CIFAR-10, CIFAR-100, and ImageNet show that SBA can improve the generalization of the neural networks and speed up the convergence of network training.
arXiv Detail & Related papers (2020-06-27T04:46:39Z) - An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction [84.49035467829819]
We show that it is possible to better manage this trade-off by optimizing a bound on the Information Bottleneck (IB) objective.
Our fully unsupervised approach jointly learns an explainer that predicts sparse binary masks over sentences, and an end-task predictor that considers only the extracted rationale.
arXiv Detail & Related papers (2020-05-01T23:26:41Z) - Pre-training Is (Almost) All You Need: An Application to Commonsense
Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z)
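As a small aside on the ERIL entry above, the snippet below illustrates the reverse KL divergence KL(pi || pi_E) between a learned policy and an expert policy over a discrete action set. It is a generic illustration of the quantity being minimized, not ERIL's estimator, which interleaves forward and inverse RL steps.

```python
# Generic reverse KL divergence KL(pi || pi_expert) over discrete actions
# (for illustration only; not ERIL's forward/inverse RL estimator).
import torch

def reverse_kl(pi: torch.Tensor, pi_expert: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Expectation under the learned policy pi of log(pi / pi_expert)."""
    pi = pi.clamp_min(eps)
    pi_expert = pi_expert.clamp_min(eps)
    return (pi * (pi.log() - pi_expert.log())).sum(dim=-1)

# Example with two distributions over three actions:
pi = torch.tensor([0.7, 0.2, 0.1])
pi_e = torch.tensor([0.4, 0.4, 0.2])
print(reverse_kl(pi, pi_e))  # approx. 0.1838
```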