Extreme Q-Learning: MaxEnt RL without Entropy
- URL: http://arxiv.org/abs/2301.02328v1
- Date: Thu, 5 Jan 2023 23:14:38 GMT
- Title: Extreme Q-Learning: MaxEnt RL without Entropy
- Authors: Divyansh Garg, Joey Hejna, Matthieu Geist, Stefano Ermon
- Abstract summary: Modern Deep Reinforcement Learning (RL) algorithms require estimates of the maximal Q-value, which are difficult to compute in continuous domains.
We introduce a new update rule for online and offline RL that directly models the maximal value using Extreme Value Theory (EVT).
Using EVT, we derive our Extreme Q-Learning framework and consequently online and, for the first time, offline MaxEnt Q-learning algorithms.
- Score: 88.97516083146371
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern Deep Reinforcement Learning (RL) algorithms require estimates of the
maximal Q-value, which are difficult to compute in continuous domains with an
infinite number of possible actions. In this work, we introduce a new update
rule for online and offline RL which directly models the maximal value using
Extreme Value Theory (EVT), drawing inspiration from Economics. By doing so, we
avoid computing Q-values using out-of-distribution actions, which is often a
substantial source of error. Our key insight is to introduce an objective that
directly estimates the optimal soft-value functions (LogSumExp) in the maximum
entropy RL setting without needing to sample from a policy. Using EVT, we
derive our Extreme Q-Learning framework and consequently online and, for the
first time, offline MaxEnt Q-learning algorithms that do not explicitly
require access to a policy or its entropy. Our method obtains consistently
strong performance on the D4RL benchmark, outperforming prior works by 10+
points on some tasks while offering moderate improvements over SAC and TD3 on
online DM Control tasks.
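The objective the abstract alludes to can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch rendering of a Gumbel-regression ("linex") loss whose minimizer is the LogSumExp soft value; `beta`, the clamp threshold, and the tensor shapes are illustrative assumptions rather than the authors' released implementation.

```python
import torch

def gumbel_regression_loss(q_values: torch.Tensor,
                           v_values: torch.Tensor,
                           beta: float = 1.0) -> torch.Tensor:
    """Convex loss exp(z) - z - 1 with z = (Q(s, a) - V(s)) / beta.
    Setting its gradient in V to zero gives E[exp((Q - V) / beta)] = 1,
    i.e. V = beta * log E[exp(Q / beta)]: the soft (LogSumExp) value,
    estimated from dataset actions without sampling from a policy."""
    z = (q_values - v_values) / beta
    z = torch.clamp(z, max=5.0)  # guard against exploding exponentials
    return (torch.exp(z) - z - 1.0).mean()

# Toy usage: fit a scalar soft value to a batch of sampled Q-values.
q = torch.randn(256)
v = torch.zeros(1, requires_grad=True)
opt = torch.optim.SGD([v], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    gumbel_regression_loss(q, v).backward()
    opt.step()
# v now approximates beta * log mean(exp(q / beta)).
```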
Related papers
- UDQL: Bridging The Gap between MSE Loss and The Optimal Value Function in Offline Reinforcement Learning [10.593924216046977]
We first theoretically analyze the overestimation phenomenon caused by the MSE loss and provide a theoretical upper bound on the overestimation error.
Finally, we propose an offline RL algorithm based on the underestimated operator and a diffusion policy model.
arXiv Detail & Related papers (2024-06-05T14:37:42Z)
- Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning [68.16998247593209]
The offline reinforcement learning (RL) paradigm provides a recipe to convert static behavior datasets into policies that can outperform the policy that collected the data.
In this paper, we propose an adaptive scheme for action quantization.
We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme.
arXiv Detail & Related papers (2023-10-18T06:07:10Z)
- Mildly Conservative Q-Learning for Offline Reinforcement Learning [63.2183622958666]
Offline reinforcement learning (RL) defines the task of learning from a static logged dataset without further interaction with the environment.
Existing approaches, which penalize unseen actions or regularize toward the behavior policy, are often too pessimistic.
We propose Mildly Conservative Q-learning (MCQ), where OOD actions are actively trained by assigning them proper pseudo Q-values.
arXiv Detail & Related papers (2022-06-09T19:44:35Z)
- Offline Reinforcement Learning with Implicit Q-Learning [85.62618088890787]
Current offline reinforcement learning methods need to query the value of unseen actions during training to improve the policy.
We propose an offline RL method that never needs to evaluate actions outside of the dataset.
This method enables the learned policy to improve substantially over the best behavior in the data through generalization; a sketch of the expectile value loss it uses appears after this list.
arXiv Detail & Related papers (2021-10-12T17:05:05Z)
- EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL [48.552287941528]
Off-policy reinforcement learning holds the promise of sample-efficient learning of decision-making policies.
In the offline RL setting, standard off-policy RL methods can significantly underperform.
We introduce the Expected-Max Q-Learning (EMaQ) operator, whose analysis is more closely tied to the practical algorithm it yields.
arXiv Detail & Related papers (2020-07-21T21:13:02Z)
- Conservative Q-Learning for Offline Reinforcement Learning [106.05582605650932]
We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees; a sketch of this conservative penalty also appears after this list.
We show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return.
arXiv Detail & Related papers (2020-06-08T17:53:42Z)
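As noted in the Implicit Q-Learning entry above, IQL avoids evaluating out-of-dataset actions by fitting the value function with expectile regression. A minimal sketch, assuming a PyTorch setup; the expectile `tau` and the batched tensor shapes are illustrative assumptions.

```python
import torch

def expectile_value_loss(q_values: torch.Tensor,
                         v_values: torch.Tensor,
                         tau: float = 0.7) -> torch.Tensor:
    """Asymmetric L2 loss |tau - 1(u < 0)| * u^2 with u = Q(s, a) - V(s).
    For tau > 0.5, V(s) regresses toward an upper expectile of Q over
    dataset actions, approximating a max without ever querying actions
    outside the dataset."""
    u = q_values - v_values
    weight = (tau - (u < 0).float()).abs()
    return (weight * u.pow(2)).mean()

# Toy usage: V is pulled toward the upper tail of the Q samples.
loss = expectile_value_loss(torch.randn(256), torch.zeros(256))
```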
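For the Conservative Q-Learning entry, the lower-bound guarantee comes from a penalty that pushes Q-values down on broadly sampled actions and up on dataset actions. The following is a hedged sketch of a CQL(H)-style penalty for continuous actions; the `q_net` interface, uniform action sampling, and `alpha` are illustrative assumptions, and the paper's importance-sampling corrections are omitted.

```python
import torch

def conservative_penalty(q_net, states, dataset_actions,
                         num_samples: int = 10, alpha: float = 5.0,
                         action_low: float = -1.0,
                         action_high: float = 1.0) -> torch.Tensor:
    """alpha * (logsumexp of Q over sampled actions - mean Q on dataset
    actions); added to the Bellman error, this drives learned Q-values
    toward a lower bound on the policy's true value."""
    batch, act_dim = dataset_actions.shape
    # Uniformly sample actions to approximate logsumexp_a Q(s, a).
    rand_actions = torch.empty(batch, num_samples, act_dim).uniform_(action_low, action_high)
    s_rep = states.unsqueeze(1).expand(-1, num_samples, -1)
    q_rand = q_net(s_rep.reshape(batch * num_samples, -1),
                   rand_actions.reshape(batch * num_samples, act_dim)).reshape(batch, num_samples)
    q_data = q_net(states, dataset_actions)
    return alpha * (torch.logsumexp(q_rand, dim=1).mean() - q_data.mean())

# Toy usage with a bilinear stand-in critic.
q_net = lambda s, a: (s[:, :1] * a[:, :1]).squeeze(1)
penalty = conservative_penalty(q_net, torch.randn(32, 4), torch.rand(32, 2) * 2 - 1)
```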
This list was automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences of its use.