Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning
- URL: http://arxiv.org/abs/2304.01203v7
- Date: Sun, 26 Nov 2023 19:44:54 GMT
- Title: Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning
- Authors: Tongzhou Wang, Antonio Torralba, Phillip Isola, Amy Zhang
- Abstract summary: Quasimetric Reinforcement Learning (QRL) is a new RL method that utilizes quasimetric models to learn optimal value functions.
On offline and online goal-reaching benchmarks, QRL also demonstrates improved sample efficiency and performance.
- Score: 73.80728148866906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In goal-reaching reinforcement learning (RL), the optimal value function has
a particular geometry, called quasimetric structure. This paper introduces
Quasimetric Reinforcement Learning (QRL), a new RL method that utilizes
quasimetric models to learn optimal value functions. Distinct from prior
approaches, the QRL objective is specifically designed for quasimetrics, and
provides strong theoretical recovery guarantees. Empirically, we conduct
thorough analyses on a discretized MountainCar environment, identifying
properties of QRL and its advantages over alternatives. On offline and online
goal-reaching benchmarks, QRL also demonstrates improved sample efficiency and
performance, across both state-based and image-based observations.
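To make the abstract's central claim concrete: in a deterministic, unit-cost MDP, the optimal cost-to-go from a state s to a goal g equals the directed shortest-path distance from s to g, which satisfies the triangle inequality but not symmetry, i.e. it is a quasimetric rather than a metric. The sketch below is purely illustrative (a hypothetical four-state graph, not code from the paper) and checks both properties:

```python
# Illustrative sketch only: in a deterministic, unit-cost MDP the optimal
# goal-reaching cost equals the directed shortest-path distance, which is a
# quasimetric (triangle inequality holds, symmetry need not).
import itertools
import math

# A tiny directed graph standing in for an MDP's transition structure (hypothetical).
edges = {
    "a": ["b"],
    "b": ["c"],
    "c": ["a", "d"],
    "d": [],
}
states = list(edges)

def shortest_path_cost(start, goal):
    """Breadth-first search: number of steps from `start` to `goal` (inf if unreachable)."""
    frontier, dist = [start], {start: 0}
    while frontier:
        nxt = []
        for s in frontier:
            for t in edges[s]:
                if t not in dist:
                    dist[t] = dist[s] + 1
                    nxt.append(t)
        frontier = nxt
    return dist.get(goal, math.inf)

d = {(s, g): shortest_path_cost(s, g) for s in states for g in states}

# Triangle inequality holds for every triple of states ...
assert all(d[s, g] <= d[s, w] + d[w, g]
           for s, w, g in itertools.product(states, repeat=3))
# ... but symmetry does not: a -> d is reachable, d -> a is not.
assert d["a", "d"] != d["d", "a"]
print({k: v for k, v in d.items() if k[0] != k[1]})
```

A symmetric distance model cannot represent such asymmetric cost-to-go values exactly, which is the geometric observation that motivates QRL's use of quasimetric value models.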
Related papers
- EdgeRL: Reinforcement Learning-driven Deep Learning Model Inference Optimization at Edge [2.8946323553477704]
We propose the EdgeRL framework, which seeks to strike a balance using an Advantage Actor-Critic (A2C) Reinforcement Learning (RL) approach.
We evaluate the benefits of EdgeRL framework in terms of end device energy savings, inference accuracy improvement, and end-to-end inference latency reduction.
arXiv Detail & Related papers (2024-10-16T04:31:39Z)
- Q-value Regularized Transformer for Offline Reinforcement Learning [70.13643741130899]
We propose a Q-value regularized Transformer (QT) to enhance the state-of-the-art in offline reinforcement learning (RL).
QT learns an action-value function and integrates a term maximizing action-values into the training loss of Conditional Sequence Modeling (CSM).
Empirical evaluations on D4RL benchmark datasets demonstrate the superiority of QT over traditional DP and CSM methods.
arXiv Detail & Related papers (2024-05-27T12:12:39Z)
- Learning Goal-Conditioned Policies from Sub-Optimal Offline Data via Metric Learning [22.174803826742963]
We address the problem of learning optimal behavior from sub-optimal datasets for goal-conditioned offline reinforcement learning.
We propose the use of metric learning to approximate the optimal value function for goal-conditioned offline RL problems.
We show that our method estimates optimal behaviors from severely sub-optimal offline datasets without suffering from out-of-distribution estimation errors.
arXiv Detail & Related papers (2024-02-16T16:46:53Z)
- Provable Reward-Agnostic Preference-Based Reinforcement Learning [61.39541986848391]
Preference-based Reinforcement Learning (PbRL) is a paradigm in which an RL agent learns to optimize a task using pair-wise preference-based feedback over trajectories.
We propose a theoretical reward-agnostic PbRL framework that acquires exploratory trajectories enabling accurate learning of the hidden reward function.
arXiv Detail & Related papers (2023-05-29T15:00:09Z)
- Deep Black-Box Reinforcement Learning with Movement Primitives [15.184283143878488]
We present a new algorithm for deep reinforcement learning (RL).
It is based on differentiable trust region layers, a successful on-policy deep RL algorithm.
We compare our ERL algorithm to state-of-the-art step-based algorithms in many complex simulated robotic control tasks.
arXiv Detail & Related papers (2022-10-18T06:34:52Z)
- Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective [142.36200080384145]
We propose a single objective that jointly optimizes a latent-space model and policy to achieve high returns while remaining self-consistent.
We demonstrate that the resulting algorithm matches or improves the sample-efficiency of the best prior model-based and model-free RL methods.
arXiv Detail & Related papers (2022-09-18T03:51:58Z)
- Metric Residual Networks for Sample Efficient Goal-conditioned Reinforcement Learning [52.59242013527014]
Goal-conditioned reinforcement learning (GCRL) has a wide range of potential real-world applications.
Sample efficiency is of utmost importance for GCRL since, by default, the agent is only rewarded when it reaches its goal.
We introduce a novel neural architecture for GCRL that achieves significantly better sample efficiency than the commonly-used monolithic network architecture.
arXiv Detail & Related papers (2022-08-17T08:04:41Z)
- Learning to Prune Deep Neural Networks via Reinforcement Learning [64.85939668308966]
PuRL is a deep reinforcement learning based algorithm for pruning neural networks.
It achieves sparsity and accuracy comparable to current state-of-the-art methods.
arXiv Detail & Related papers (2020-07-09T13:06:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.