Towards model-free RL algorithms that scale well with unstructured data
- URL: http://arxiv.org/abs/2311.02215v1
- Date: Fri, 3 Nov 2023 20:03:54 GMT
- Title: Towards model-free RL algorithms that scale well with unstructured data
- Authors: Joseph Modayil and Zaheer Abbas
- Abstract summary: We provide an algorithm that constructs reward-relevant general value function questions to find and exploit predictive structure directly from the experience stream.
The proposed algorithm reliably outperforms a conventional deep RL algorithm on these scaling problems.
- Score: 1.3799571823220778
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional reinforcement learning (RL) algorithms exhibit broad generality
in their theoretical formulation and high performance on several challenging
domains when combined with powerful function approximation. However, developing
RL algorithms that perform well across problems with unstructured observations
at scale remains challenging because most function approximation methods rely
on externally provisioned knowledge about the structure of the input for good
performance (e.g. convolutional networks, graph neural networks, tile-coding).
A common practice in RL is to evaluate algorithms on a single problem, or on
problems with limited variation in the observation scale. RL practitioners lack
a systematic way to study how well a single RL algorithm performs when
instantiated across a range of problem scales, and they lack function
approximation techniques that scale well with unstructured observations.
We address these limitations by providing environments and algorithms to
study scaling for unstructured observation vectors and flat action spaces. We
introduce a family of combinatorial RL problems with an exponentially large
state space and high-dimensional dynamics but where linear computation is
sufficient to learn a (nonlinear) value function estimate for performant
control. We provide an algorithm that constructs reward-relevant general value
function (GVF) questions to find and exploit predictive structure directly from
the experience stream. In an empirical evaluation of the approach on synthetic
problems, we observe a sample complexity that scales linearly with the
observation size. The proposed algorithm reliably outperforms a conventional
deep RL algorithm on these scaling problems, and it exhibits several desirable
auxiliary properties. These results suggest new algorithmic mechanisms by which
algorithms can learn at scale from unstructured data.
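To make the mechanism concrete, here is a minimal sketch of what a reward-relevant GVF question and its linear answer could look like. This is an illustrative reading of the abstract, not the paper's algorithm: the cumulant choice (the reward itself), the fixed discount set, the step size, and the feature augmentation are all assumptions.

```python
# Illustrative sketch only (not the paper's algorithm): a GVF "question" is a
# (cumulant, continuation) pair; its "answer" is a prediction learned with
# linear TD(0), so each update costs O(observation size).
import numpy as np

class LinearGVF:
    """One GVF question: predict the discounted sum of a cumulant signal."""

    def __init__(self, obs_dim: int, gamma: float, alpha: float = 0.01):
        self.w = np.zeros(obs_dim)  # linear answer weights
        self.gamma = gamma          # continuation (discount) for the question
        self.alpha = alpha          # step size (assumed value)

    def predict(self, obs: np.ndarray) -> float:
        return float(self.w @ obs)

    def update(self, obs: np.ndarray, cumulant: float, next_obs: np.ndarray) -> None:
        # TD(0) update: linear computation in the observation size.
        delta = cumulant + self.gamma * self.predict(next_obs) - self.predict(obs)
        self.w += self.alpha * delta * obs

def reward_relevant_questions(obs_dim: int, discounts=(0.0, 0.5, 0.9)):
    # Assumed construction: one question per time scale, each using the reward
    # as its cumulant, so every prediction is reward-relevant by design.
    return [LinearGVF(obs_dim, g) for g in discounts]

def augmented_features(obs: np.ndarray, gvfs) -> np.ndarray:
    # Appending GVF answers to the raw observation is one way a stack of
    # linear learners can support a nonlinear value estimate.
    return np.concatenate([obs, [g.predict(obs) for g in gvfs]])
```

In an agent loop, each question would be updated on every transition with `update(obs, reward, next_obs)`, and the augmented features would feed a linear value learner for control.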
Related papers
- Gauge-optimal approximate learning for small data classification problems [0.0]
Small data learning problems are characterized by a discrepancy between the limited number of response variable observations and the large feature space dimension.
We propose the Gauge-Optimal Approximate Learning (GOAL) algorithm, which provides an analytically tractable joint solution to the dimension reduction, feature segmentation and classification problems.
GOAL has been compared to other state-of-the-art machine learning (ML) tools on both synthetic data and challenging real-world applications from climate science and bioinformatics.
arXiv Detail & Related papers (2023-10-29T16:46:05Z)
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation (the defining factorization is sketched below).
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
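For context on this entry, the defining low-rank structure, in its standard form from this literature rather than quoted from the paper, factors the transition kernel through d-dimensional embeddings:

```latex
% Standard low-rank MDP factorization (context, not quoted from the paper):
% the transition kernel factors through d-dimensional feature maps, and
% sample-efficient exploration exploits this rank-d structure.
\[
  T(s' \mid s, a) = \big\langle \phi(s, a), \mu(s') \big\rangle,
  \qquad
  \phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^{d}, \quad
  \mu : \mathcal{S} \to \mathbb{R}^{d}.
\]
```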
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics (sketched below).
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
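A rough formulation of the multi-step inverse kinematics objective mentioned above; this is our paraphrase with assumed notation (encoder phi, decoder f, horizon K), not the paper's statement:

```latex
% Multi-step inverse kinematics (assumed formulation): train the encoder
% \phi so a decoder f can recover the action a_t from the encoded current
% observation x_t and the encoded observation k steps ahead.
\[
  \max_{\phi, f} \;
  \mathbb{E}\Big[ \log f\big(a_t \mid \phi(x_t), \phi(x_{t+k})\big) \Big],
  \qquad k = 1, \dots, K.
\]
```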
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation (the textbook linear update appears below).
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
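For reference, the basic $Q$-learning update with linear function approximation that the entry above builds on, in its textbook form (the paper's exploration variant modifies this protocol):

```latex
% Textbook Q-learning with linear function approximation (context only):
% Q is linear in features \varphi(s, a) and is updated by the TD error.
\[
  Q_{w}(s, a) = w^{\top} \varphi(s, a),
  \qquad
  w \leftarrow w + \alpha \Big[ r + \gamma \max_{a'} Q_{w}(s', a') - Q_{w}(s, a) \Big] \varphi(s, a).
\]
```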
- Adaptive Discretization in Online Reinforcement Learning [9.560980936110234]
Two major questions in designing discretization-based algorithms are how to create the discretization and when to refine it.
We provide a unified theoretical analysis of tree-based hierarchical partitioning methods for online reinforcement learning.
Our algorithms are easily adapted to operating constraints, and our theory provides explicit bounds across each of the three facets.
arXiv Detail & Related papers (2021-10-29T15:06:15Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the 'complexity' of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Accelerating Recursive Partition-Based Causal Structure Learning [4.357523892518871]
Recursive causal discovery algorithms provide good results by using Conditional Independence (CI) tests in smaller sub-problems (a minimal CI-test sketch follows this entry).
This paper proposes a generic causal structure refinement strategy that can locate the undesired relations with a small number of CI tests.
We then empirically evaluate its performance against the state-of-the-art algorithms in terms of solution quality and completion time in synthetic and real datasets.
arXiv Detail & Related papers (2021-02-23T08:28:55Z)
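As background for this entry, here is a minimal sketch of one common CI-test primitive (a Fisher-z test on partial correlation). The paper's own test choice and refinement strategy are not specified in this summary, so everything below is an assumed, illustrative implementation:

```python
# Minimal sketch of a common CI-test primitive (Fisher-z on partial
# correlation); assumed, illustrative implementation only.
import numpy as np
from scipy import stats

def ci_test(data: np.ndarray, i: int, j: int, cond: list, alpha: float = 0.05) -> bool:
    """Return True if X_i and X_j look independent given X_cond."""
    idx = [i, j] + list(cond)
    corr = np.corrcoef(data[:, idx], rowvar=False)      # columns are variables
    prec = np.linalg.pinv(corr)                         # precision of submatrix
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])  # partial correlation
    n = data.shape[0]
    z = 0.5 * np.log((1.0 + r) / (1.0 - r))             # Fisher z-transform
    stat = np.sqrt(n - len(cond) - 3) * abs(z)          # approx. standard normal
    p_value = 2.0 * (1.0 - stats.norm.cdf(stat))
    return p_value > alpha                              # True: independence retained
```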
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models (formulated below).
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
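For reference, the integer least-squares problem from the entry above has the standard form (notation ours):

```latex
% Integer least-squares (standard form, notation assumed): recover an integer
% vector x from noisy linear measurements y = A x + n.
\[
  \hat{x} = \operatorname*{arg\,min}_{x \in \mathbb{Z}^{n}} \lVert y - A x \rVert_2^{2},
  \qquad y \in \mathbb{R}^{m}, \; A \in \mathbb{R}^{m \times n}.
\]
```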