Understanding Reinforcement Learning Algorithms: The Progress from Basic
Q-learning to Proximal Policy Optimization
- URL: http://arxiv.org/abs/2304.00026v1
- Date: Fri, 31 Mar 2023 17:24:51 GMT
- Title: Understanding Reinforcement Learning Algorithms: The Progress from Basic
Q-learning to Proximal Policy Optimization
- Authors: Mohamed-Amine Chadi and Hajar Mousannif
- Abstract summary: Reinforcement learning (RL) has a unique setting, jargon, and mathematics that can be intimidating for those new to the field or to artificial intelligence more broadly.
This paper provides a clear and concise overview of the fundamental principles of RL and covers the different types of RL algorithms.
The presentation of the paper is aligned with the historical progress of the field, from the early 1980s Q-learning algorithm to the current state-of-the-art algorithms such as TD3, PPO, and offline RL.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a review of the field of reinforcement learning (RL),
with a focus on providing a comprehensive overview of the key concepts,
techniques, and algorithms for beginners. RL has a unique setting, jargon, and
mathematics that can be intimidating for those new to the field or artificial
intelligence more broadly. While many papers review RL in the context of
specific applications, such as games, healthcare, finance, or robotics, these
papers can be difficult for beginners to follow due to the inclusion of
non-RL-related work and the use of algorithms customized to those specific
applications. To address these challenges, this paper provides a clear and
concise overview of the fundamental principles of RL and covers the different
types of RL algorithms. For each algorithm/method, we outline the main
motivation behind its development, its inner workings, and its limitations. The
presentation of the paper is aligned with the historical progress of the field,
from the early 1980s Q-learning algorithm to the current state-of-the-art
algorithms such as TD3, PPO, and offline RL. Overall, this paper aims to serve
as a valuable resource for beginners looking to construct a solid understanding
of the fundamentals of RL and be aware of the historical progress of the field.
It is intended to be a go-to reference for those interested in learning about
RL without being distracted by the details of specific applications.
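The survey's starting point, tabular Q-learning, fits in a few lines of code. The sketch below is illustrative only; it is not taken from the paper, and the Gym-style environment interface and hyperparameters are assumptions:

```python
import numpy as np

def q_learning(env, n_episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).

    Assumes a classic Gym-style discrete environment exposing
    `observation_space.n` and `action_space.n`; all names here are
    illustrative, not the paper's code.
    """
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for _ in range(n_episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if np.random.rand() < epsilon:
                action = env.action_space.sample()
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done, _ = env.step(action)
            # temporal-difference update toward the bootstrapped target
            td_target = reward + gamma * np.max(Q[next_state]) * (not done)
            Q[state, action] += alpha * (td_target - Q[state, action])
            state = next_state
    return Q
```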
Related papers
- An Introduction to Reinforcement Learning: Fundamental Concepts and Practical Applications (arXiv, 2024-08-13)
Reinforcement Learning (RL) is a branch of Artificial Intelligence (AI) which focuses on training agents to make decisions by interacting with their environment to maximize cumulative rewards.
This paper provides an overview of RL, discussing its core concepts, methodologies, recent trends, and resources for learning.
- A Survey of Meta-Reinforcement Learning (arXiv, 2023-01-19)
We cast the development of better RL algorithms as a machine learning problem itself in a process called meta-RL.
We discuss how, at a high level, meta-RL research can be clustered based on the presence of a task distribution and the learning budget available for each individual task.
We conclude by presenting the open problems on the path to making meta-RL part of the standard toolbox for a deep RL practitioner.
- Contrastive Learning as Goal-Conditioned Reinforcement Learning (arXiv, 2022-06-15)
In reinforcement learning (RL), it is easier to solve a task if given a good representation.
While deep RL should automatically acquire such good representations, prior work often finds that learning representations in an end-to-end fashion is unstable.
We show (contrastive) representation learning methods can be cast as RL algorithms in their own right.
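A minimal sketch of the contrastive idea: embeddings of state-action pairs are scored against embeddings of goals actually reached, with an InfoNCE-style loss, and in the paper this critic doubles as a goal-conditioned value function. The shapes and names below are assumptions:

```python
import numpy as np

def info_nce_loss(sa_repr, goal_repr):
    """Contrastive (InfoNCE-style) objective: each state-action embedding
    should score highest against the goal reached from it (the diagonal),
    relative to goals drawn from other trajectories in the batch.

    sa_repr:   (B, d) embeddings phi(s, a)
    goal_repr: (B, d) embeddings psi(s_future), row i paired with row i.
    """
    logits = sa_repr @ goal_repr.T                 # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # cross-entropy on the diagonal
```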
- Jump-Start Reinforcement Learning (arXiv, 2022-04-05)
We present a meta algorithm that can use offline data, demonstrations, or a pre-existing policy to initialize an RL policy.
In particular, we propose Jump-Start Reinforcement Learning (JSRL), an algorithm that employs two policies to solve tasks.
We show via experiments that JSRL is able to significantly outperform existing imitation and reinforcement learning algorithms.
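A hedged sketch of the two-policy rollout scheme JSRL describes: a guide policy handles the first part of each episode and the learning policy finishes it, with the guide's share shrunk over training. The interface names here are assumptions:

```python
def jsrl_rollout(env, guide_policy, explore_policy, guide_steps):
    """One JSRL-style episode: the guide policy 'jump-starts' the first
    `guide_steps` steps, then the learning policy takes over. Training
    gradually shrinks `guide_steps` so the learner handles ever-longer
    suffixes of the task. Illustrative sketch, not the paper's code.
    """
    state, done, t, transitions = env.reset(), False, 0, []
    while not done:
        policy = guide_policy if t < guide_steps else explore_policy
        action = policy(state)
        next_state, reward, done, _ = env.step(action)
        transitions.append((state, action, reward, next_state, done))
        state, t = next_state, t + 1
    return transitions
```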
- Text Generation with Efficient (Soft) Q-Learning (arXiv, 2021-06-14)
Reinforcement learning (RL) offers a flexible alternative by allowing users to plug in arbitrary task metrics as rewards.
We introduce a new RL formulation for text generation from the soft Q-learning perspective.
We apply the approach to a wide range of tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation.
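The soft Q-learning perspective replaces the hard max in the Bellman target with a temperature-smoothed log-sum-exp, which for text generation keeps probability mass on many plausible next tokens. A minimal sketch of that target (the names and decoding setup are assumptions, not the paper's code):

```python
import numpy as np

def soft_q_target(reward, next_q, gamma=0.99, tau=1.0, done=False):
    """Soft Q-learning target: r + gamma * tau * log sum_a' exp(Q(s',a')/tau).

    next_q: (V,) Q-values over the vocabulary at the next decoding step.
    """
    if done:
        return reward
    # log-sum-exp "soft max", shifted by the max for numerical stability
    m = next_q.max()
    soft_value = m + tau * np.log(np.sum(np.exp((next_q - m) / tau)))
    return reward + gamma * soft_value
```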
- Heuristic-Guided Reinforcement Learning (arXiv, 2021-06-05)
Tabula rasa RL algorithms require environment interactions or computation that scales with the horizon of the decision-making task.
Our framework can be viewed as a horizon-based regularization for controlling bias and variance in RL under a finite interaction budget.
In particular, we introduce the novel concept of an "improvable" heuristic -- one that allows an RL agent to extrapolate beyond its prior knowledge.
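The paper's exact construction differs, but classic potential-based reward shaping (Ng et al., 1999) conveys the flavor of folding a heuristic value estimate h into the reward; a minimal sketch:

```python
def shaped_reward(reward, h_s, h_next, gamma=0.99):
    """Potential-based reward shaping (Ng et al., 1999): adding
    gamma * h(s') - h(s) to the reward leaves optimal policies unchanged
    while letting a heuristic value estimate h guide learning.
    Illustrative of, not identical to, the paper's horizon-based
    regularization.
    """
    return reward + gamma * h_next - h_s
```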
- RL-DARTS: Differentiable Architecture Search for Reinforcement Learning (arXiv, 2021-06-04)
We introduce RL-DARTS, one of the first applications of Differentiable Architecture Search (DARTS) in reinforcement learning (RL).
By replacing the image encoder with a DARTS supernet, our search method is sample-efficient, requires minimal extra compute resources, and is also compatible with off-policy and on-policy RL algorithms, needing only minor changes in preexisting code.
We show that the supernet gradually learns better cells, leading to alternative architectures that are highly competitive with manually designed policies, and we also verify previous design choices for RL policies.
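At the core of DARTS is the mixed operation: candidate operations blended by softmax-normalized architecture weights, which makes the architecture choice differentiable. A generic sketch under those assumptions (not RL-DARTS's actual encoder code):

```python
import numpy as np

def mixed_op(x, ops, alpha):
    """DARTS-style mixed operation on input x.

    ops:   list of candidate operations (callables x -> array)
    alpha: (len(ops),) learnable architecture logits; the softmax over
           alpha weights the candidates, so gradients flow to the
           architecture choice itself.
    """
    weights = np.exp(alpha - alpha.max())   # stable softmax
    weights /= weights.sum()
    return sum(w * op(x) for w, op in zip(weights, ops))
```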
- How to Make Deep RL Work in Practice (arXiv, 2020-10-25)
Reported results of state-of-the-art algorithms are often difficult to reproduce.
We make suggestions about which techniques to use by default and highlight areas that could benefit from a solution specifically tailored to RL.
- Discovering Reinforcement Learning Algorithms (arXiv, 2020-07-17)
Reinforcement learning algorithms update an agent's parameters according to one of several possible rules.
This paper introduces a new meta-learning approach that discovers an entire update rule.
The discovered rule specifies both 'what to predict' (e.g., value functions) and 'how to learn from it', and is found by interacting with a set of environments.