Maximize to Explore: One Objective Function Fusing Estimation, Planning,
and Exploration
- URL: http://arxiv.org/abs/2305.18258v2
- Date: Wed, 25 Oct 2023 14:59:32 GMT
- Title: Maximize to Explore: One Objective Function Fusing Estimation, Planning,
and Exploration
- Authors: Zhihan Liu, Miao Lu, Wei Xiong, Han Zhong, Hao Hu, Shenao Zhang, Sirui
Zheng, Zhuoran Yang, Zhaoran Wang
- Abstract summary: We propose an easy-to-implement online reinforcement learning (online RL) framework called \texttt{MEX}.
\texttt{MEX} integrates the estimation and planning components while balancing exploration and exploitation automatically.
It can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards.
- Score: 87.53543137162488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In online reinforcement learning (online RL), balancing exploration and
exploitation is crucial for finding an optimal policy in a sample-efficient
way. To achieve this, existing sample-efficient online RL algorithms typically
consist of three components: estimation, planning, and exploration. However, in
order to cope with general function approximators, most of them involve
impractical algorithmic components to incentivize exploration, such as
optimization within data-dependent level-sets or complicated sampling
procedures. To address this challenge, we propose an easy-to-implement RL
framework called \textit{Maximize to Explore} (\texttt{MEX}), which only needs
to optimize \emph{unconstrainedly} a single objective that integrates the
estimation and planning components while balancing exploration and exploitation
automatically. Theoretically, we prove that \texttt{MEX} achieves a sublinear
regret with general function approximations for Markov decision processes (MDP)
and is further extendable to two-player zero-sum Markov games (MG). Meanwhile,
we adapt deep RL baselines to design practical versions of \texttt{MEX}, in
both model-free and model-based manners, which can outperform baselines by a
stable margin in various MuJoCo environments with sparse rewards. Compared with
existing sample-efficient online RL algorithms with general function
approximations, \texttt{MEX} achieves similar sample efficiency while enjoying
a lower computational cost and is more compatible with modern deep RL methods.
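As a concrete reading of the abstract's "single objective," a MEX-style update can be sketched (our notation, not necessarily the paper's exact statement) as
\[
\hat{f} \in \operatorname*{argmax}_{f \in \mathcal{H}} \Big\{ V_{f}(\pi_{f}) - \eta \, L_{\mathcal{D}}(f) \Big\},
\]
where $\mathcal{H}$ is the hypothesis class (models or value functions), $\pi_{f}$ is the optimal policy under hypothesis $f$, $L_{\mathcal{D}}(f)$ is an estimation loss on the collected data $\mathcal{D}$ (e.g., a negative log-likelihood in the model-based case or a squared Bellman-error loss in the model-free case), and $\eta > 0$ balances optimism in planning against fit to the data. The snippet below is a minimal sketch of this principle on a toy Bernoulli bandit with a finite hypothesis class; the hypothesis class, the Bernoulli reward model, and the value of eta are illustrative assumptions, and it is not the paper's MuJoCo implementation.

```python
# Toy sketch of the maximize-to-explore principle on a Bernoulli bandit.
# Not the paper's implementation: the finite hypothesis class, the Bernoulli
# reward model, and eta are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_arms = 5
true_means = rng.uniform(0.1, 0.9, size=n_arms)          # unknown environment

# Finite hypothesis class: each hypothesis is a candidate mean-reward vector.
hypotheses = rng.uniform(0.0, 1.0, size=(200, n_arms))

def neg_log_likelihood(mu, arms, rewards):
    """Bernoulli NLL of the observed (arm, reward) pairs under hypothesis mu."""
    p = np.clip(mu[arms], 1e-6, 1 - 1e-6)
    return -np.sum(rewards * np.log(p) + (1 - rewards) * np.log(1 - p))

eta = 1.0                         # weight on the estimation loss
arms_seen, rewards_seen = [], []

for t in range(500):
    arms_arr = np.array(arms_seen, dtype=int)
    rew_arr = np.array(rewards_seen, dtype=float)
    # Single unconstrained objective: planning value minus weighted estimation loss.
    scores = np.array([
        h.max() - eta * neg_log_likelihood(h, arms_arr, rew_arr)
        for h in hypotheses
    ])
    f_hat = hypotheses[np.argmax(scores)]      # maximize to explore
    a = int(np.argmax(f_hat))                  # greedy policy under the chosen hypothesis
    r = float(rng.random() < true_means[a])    # sample a Bernoulli reward
    arms_seen.append(a)
    rewards_seen.append(r)

print("best arm:", int(np.argmax(true_means)),
      "| most-played arm:", int(np.bincount(arms_seen, minlength=n_arms).argmax()))
```

With no data the loss term vanishes and the most optimistic hypothesis wins, so exploration is automatic; as observations accumulate, hypotheses that disagree with the data are penalized and the objective concentrates on well-fitting, high-value hypotheses, without constrained optimization or complicated sampling procedures.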
Related papers
- Offline RL via Feature-Occupancy Gradient Ascent [9.983014605039658]
We study offline reinforcement learning in large infinite-horizon discounted Markov Decision Processes (MDPs).
We develop a new algorithm that performs a form of gradient ascent in the space of feature occupancies.
We show that the resulting simple algorithm satisfies strong computational and sample complexity guarantees.
arXiv Detail & Related papers (2024-05-22T15:39:05Z)
- Efficient Multi-agent Reinforcement Learning by Planning [33.51282615335009]
Multi-agent reinforcement learning (MARL) algorithms have accomplished remarkable breakthroughs in solving large-scale decision-making tasks.
Most existing MARL algorithms are model-free, limiting sample efficiency and hindering their applicability in more challenging scenarios.
We propose the MAZero algorithm, which combines a centralized model with Monte Carlo Tree Search (MCTS) for policy search.
arXiv Detail & Related papers (2024-05-20T04:36:02Z)
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
- Regularization and Variance-Weighted Regression Achieves Minimax Optimality in Linear MDPs: Theory and Practice [79.48432795639403]
Mirror descent value iteration (MDVI) is an abstraction of Kullback-Leibler (KL) and entropy-regularized reinforcement learning (RL).
We study MDVI with linear function approximation through the sample complexity required to identify an $\varepsilon$-optimal policy.
We present Variance-Weighted Least-Squares MDVI, the first theoretical algorithm that achieves nearly minimax optimal sample complexity for infinite-horizon linear MDPs.
arXiv Detail & Related papers (2023-05-22T16:13:05Z)
- Nearly Minimax Optimal Offline Reinforcement Learning with Linear Function Approximation: Single-Agent MDP and Markov Game [34.69723238900705]
Offline reinforcement learning (RL) aims at learning an optimal strategy using a pre-collected dataset without further interactions with the environment.
We propose two new algorithms for offline single-agent MDPs and two-player zero-sum Markov games (MGs).
To the best of our knowledge, these are the first computationally efficient and nearly minimax optimal algorithms for offline single-agent MDPs and MGs with linear function approximation.
arXiv Detail & Related papers (2022-05-31T02:50:17Z)
- Towards Deployment-Efficient Reinforcement Learning: Lower Bound and Optimality [141.89413461337324]
Deployment efficiency is an important criterion for many real-world applications of reinforcement learning (RL).
We propose a theoretical formulation for deployment-efficient RL (DE-RL) from an "optimization with constraints" perspective.
arXiv Detail & Related papers (2022-02-14T01:31:46Z)
- Reward-Free RL is No Harder Than Reward-Aware RL in Linear Markov Decision Processes [61.11090361892306]
Reward-free reinforcement learning (RL) considers the setting where the agent does not have access to a reward function during exploration.
We show that this separation between reward-free and reward-aware RL does not exist in the setting of linear MDPs.
We develop a computationally efficient algorithm for reward-free RL in a $d$-dimensional linear MDP.
arXiv Detail & Related papers (2022-01-26T22:09:59Z)
- On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game [140.19656665344917]
We study the reward-free RL problem, where an agent aims to thoroughly explore the environment without any pre-specified reward function.
We tackle this problem under the context of function approximation, leveraging powerful function approximators.
We establish the first provably efficient reward-free RL algorithm with kernel and neural function approximators.
arXiv Detail & Related papers (2021-10-19T07:26:33Z)
- MADE: Exploration via Maximizing Deviation from Explored Regions [48.49228309729319]
In online reinforcement learning (RL), efficient exploration remains challenging in high-dimensional environments with sparse rewards.
We propose a new exploration approach via maximizing the deviation of the occupancy of the next policy from the explored regions.
Our approach significantly improves sample efficiency over state-of-the-art methods.
arXiv Detail & Related papers (2021-06-18T17:57:00Z)