Adaptive trajectory-constrained exploration strategy for deep
reinforcement learning
- URL: http://arxiv.org/abs/2312.16456v1
- Date: Wed, 27 Dec 2023 07:57:15 GMT
- Title: Adaptive trajectory-constrained exploration strategy for deep
reinforcement learning
- Authors: Guojian Wang, Faguo Wu, Xiao Zhang, Ning Guo, Zhiming Zheng
- Abstract summary: Deep reinforcement learning (DRL) faces significant challenges in addressing the hard-exploration problems in tasks with sparse or deceptive rewards and large state spaces.
We propose an efficient adaptive trajectory-constrained exploration strategy for DRL.
We conduct experiments on two large 2D grid world mazes and several MuJoCo tasks.
- Score: 6.589742080994319
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep reinforcement learning (DRL) faces significant challenges in addressing
the hard-exploration problems in tasks with sparse or deceptive rewards and
large state spaces. These challenges severely limit the practical application
of DRL. Most previous exploration methods relied on complex architectures to
estimate state novelty or introduced sensitive hyperparameters, resulting in
instability. To mitigate these issues, we propose an efficient adaptive
trajectory-constrained exploration strategy for DRL. The proposed method guides
the policy of the agent away from suboptimal solutions by leveraging incomplete
offline demonstrations as references. This approach gradually expands the
agent's exploration scope and pursues optimality via constrained optimization.
Additionally, we introduce a novel policy-gradient-based
optimization algorithm that utilizes adaptively clipped trajectory-distance
rewards for both single- and multi-agent reinforcement learning. We provide a
theoretical analysis of our method, including a deduction of the worst-case
approximation error bounds, highlighting the validity of our approach for
enhancing exploration. To evaluate the effectiveness of the proposed method, we
conducted experiments on two large 2D grid world mazes and several MuJoCo
tasks. The extensive experimental results demonstrate the significant
advantages of our method in achieving temporally extended exploration and
avoiding myopic and suboptimal behaviors in both single- and multi-agent
settings. Quantitative results further support these findings. The code used
in the study is available at https://github.com/buaawgj/TACE.
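The abstract names the mechanism but not its form, so the following is a minimal sketch of what an adaptively clipped trajectory-distance reward could look like: a bonus that grows as the agent's trajectory moves away from suboptimal offline demonstrations, bounded so the shaping cannot dominate. The distance metric, the fixed bound `c_max` (the paper's clipping is adaptive), and all identifiers are illustrative assumptions, not the authors' implementation; see their repository above for that.

```python
import numpy as np

def trajectory_distance(traj, demo):
    """Mean distance from each state in `traj` to its nearest state in `demo`.
    A stand-in for whatever trajectory metric TACE actually uses."""
    demo = np.asarray(demo, dtype=float)
    traj = np.asarray(traj, dtype=float)
    return float(np.mean([np.linalg.norm(demo - s, axis=1).min() for s in traj]))

def clipped_distance_bonus(traj, demos, scale=0.1, c_max=1.0):
    """Bonus rewarding movement away from the nearest (suboptimal) demo,
    clipped so the shaping term stays bounded. The paper clips adaptively;
    a fixed bound keeps this sketch simple."""
    d_min = min(trajectory_distance(traj, demo) for demo in demos)
    return min(scale * d_min, c_max)

# Added to the sparse task return during policy-gradient updates, such a term
# pushes exploration out of regions the demonstrations already cover.
```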
Related papers
- Action abstractions for amortized sampling [49.384037138511246]
We propose an approach to incorporate the discovery of action abstractions, or high-level actions, into the policy optimization process.
Our approach involves iteratively extracting action subsequences commonly used across many high-reward trajectories and 'chunking' them into a single action that is added to the action space (sketched below).
arXiv Detail & Related papers (2024-10-19T19:22:50Z)
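The chunking step in the entry above can be made concrete with a frequency count over fixed-length action subsequences. A hedged sketch: the subsequence length, the number of macros kept, and all names are chosen for illustration rather than taken from the paper.

```python
from collections import Counter

def discover_macro_actions(high_reward_trajs, length=3, top_k=2):
    """Count action subsequences of a fixed length across high-reward
    trajectories and return the most common ones as candidate macro-actions."""
    counts = Counter()
    for actions in high_reward_trajs:
        for i in range(len(actions) - length + 1):
            counts[tuple(actions[i:i + length])] += 1
    return [seq for seq, _ in counts.most_common(top_k)]

# Example: two trajectories sharing the subsequence (0, 1, 1).
trajs = [[0, 1, 1, 2, 0, 1, 1], [3, 0, 1, 1, 2]]
print(discover_macro_actions(trajs))  # [(0, 1, 1), (1, 1, 2)]
```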
- Provably Efficient Exploration in Inverse Constrained Reinforcement Learning [12.178081346315523]
Inverse Constrained Reinforcement Learning seeks to recover constraints from expert demonstrations in a data-driven manner.
We introduce a strategic exploration framework with guaranteed efficiency.
Motivated by our findings, we propose two exploratory algorithms to achieve efficient constraint inference.
arXiv Detail & Related papers (2024-09-24T10:48:13Z)
- Preference-Guided Reinforcement Learning for Efficient Exploration [7.83845308102632]
We introduce LOPE: Learning Online with trajectory Preference guidancE, an end-to-end preference-guided RL framework.
Our intuition is that LOPE directly adjusts the focus of online exploration by considering human feedback as guidance.
LOPE outperforms several state-of-the-art methods in convergence rate and overall performance (a generic preference-loss sketch follows this entry).
arXiv Detail & Related papers (2024-07-09T02:11:12Z)
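The entry above does not spell out how trajectory preferences enter the objective. Preference-based RL methods commonly fit a reward model with a Bradley-Terry pairwise loss; the sketch below shows that standard construction, not LOPE's actual objective, and `reward_model` plus the trajectory encoding are assumptions.

```python
import torch

def bradley_terry_loss(reward_model, traj_a, traj_b, a_preferred: bool):
    """Standard pairwise preference loss: the probability that traj_a is
    preferred is a sigmoid of the difference in predicted trajectory returns."""
    r_a = reward_model(traj_a).sum()  # summed per-step predicted rewards
    r_b = reward_model(traj_b).sum()
    logit = r_a - r_b
    target = torch.tensor(1.0 if a_preferred else 0.0)
    return torch.nn.functional.binary_cross_entropy_with_logits(logit, target)
```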
- Trajectory-Oriented Policy Optimization with Sparse Rewards [2.9602904918952695]
We introduce an approach leveraging offline demonstration trajectories for faster and more efficient online RL in environments with sparse rewards.
Our pivotal insight involves treating offline demonstration trajectories as guidance, rather than mere imitation.
We then illustrate that this optimization problem can be streamlined into a policy-gradient algorithm, integrating rewards shaped by insights from offline demonstrations.
arXiv Detail & Related papers (2024-01-04T12:21:01Z)
- Hyperparameter Optimization for Multi-Objective Reinforcement Learning [0.27309692684728615]
Reinforcement learning (RL) has emerged as a powerful approach for tackling complex problems.
The recent introduction of multi-objective reinforcement learning (MORL) has further expanded the scope of RL.
In practice, tuning their many hyperparameters often proves challenging, leading to unsuccessful deployments of these techniques.
arXiv Detail & Related papers (2023-10-25T09:17:25Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Reparameterized Policy Learning for Multimodal Trajectory Optimization [61.13228961771765]
We investigate the challenge of parametrizing policies for reinforcement learning in high-dimensional continuous action spaces.
We propose a principled framework that models the continuous RL policy as a generative model of optimal trajectories.
We present a practical model-based RL method, which leverages the multimodal policy parameterization and learned world model.
arXiv Detail & Related papers (2023-07-20T09:05:46Z)
- Online Control of Adaptive Large Neighborhood Search using Deep Reinforcement Learning [4.374837991804085]
We introduce a Deep Reinforcement Learning based approach called DR-ALNS that selects operators, adjusts parameters, and controls the acceptance criterion throughout the search.
We evaluate the proposed method on an orienteering problem with stochastic weights and time windows, as presented in an IJCAI competition.
The results show that our approach outperforms vanilla ALNS, ALNS tuned with Bayesian optimization, and two state-of-the-art DRL approaches (an ALNS skeleton follows this entry).
arXiv Detail & Related papers (2022-11-01T21:33:46Z)
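To place the decisions DR-ALNS learns, here is a generic ALNS skeleton with a pluggable controller choosing the destroy operator, repair operator, and acceptance temperature each iteration. This sketches where those choices sit in the loop, not DR-ALNS itself; every identifier is illustrative.

```python
import math
import random

def alns(initial_solution, cost, controller, destroy_ops, repair_ops, iters=1000):
    """Generic ALNS loop. `controller` stands in for a learned policy that,
    given the search state, picks operators and an acceptance temperature."""
    current = best = initial_solution
    for step in range(iters):
        d_op, r_op, temperature = controller(current, step)  # learned decisions
        candidate = repair_ops[r_op](destroy_ops[d_op](current))
        delta = cost(candidate) - cost(current)
        # Simulated-annealing-style acceptance, with the temperature chosen
        # by the controller instead of a fixed schedule.
        if delta < 0 or random.random() < math.exp(-delta / max(temperature, 1e-8)):
            current = candidate
        if cost(current) < cost(best):
            best = current
    return best
```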
- MADE: Exploration via Maximizing Deviation from Explored Regions [48.49228309729319]
In online reinforcement learning (RL), efficient exploration remains challenging in high-dimensional environments with sparse rewards.
We propose a new exploration approach via maximizing the deviation of the occupancy of the next policy from the explored regions.
Our approach significantly improves sample efficiency over state-of-the-art methods (a count-based sketch of such a bonus follows this entry).
arXiv Detail & Related papers (2021-06-18T17:57:00Z)
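MADE defines its bonus through occupancy-measure estimates; as a crude tabular proxy for "deviation from explored regions", a count-based bonus that decays with visitation captures the flavor. An illustrative stand-in, not MADE's estimator:

```python
from collections import defaultdict

class DeviationBonus:
    """Intrinsic bonus that is large in rarely visited states, approximating
    deviation from explored regions with visit counts (MADE itself works
    with occupancy-measure estimates, not raw counts)."""

    def __init__(self, scale=1.0):
        self.counts = defaultdict(int)  # states must be hashable here
        self.scale = scale

    def __call__(self, state):
        self.counts[state] += 1
        return self.scale / self.counts[state] ** 0.5  # decays as 1/sqrt(N(s))
```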
- Reinforcement Learning for Low-Thrust Trajectory Design of Interplanetary Missions [77.34726150561087]
This paper investigates the use of reinforcement learning for the robust design of interplanetary trajectories in the presence of severe disturbances.
An open-source implementation of the state-of-the-art algorithm Proximal Policy Optimization is adopted.
The resulting Guidance and Control Network provides both a robust nominal trajectory and the associated closed-loop guidance law.
arXiv Detail & Related papers (2020-08-19T15:22:15Z)
- Discrete Action On-Policy Learning with Action-Value Critic [72.20609919995086]
Reinforcement learning (RL) in discrete action space is ubiquitous in real-world applications, but its complexity grows exponentially with the action-space dimension.
We construct a critic to estimate action-value functions, apply it on correlated actions, and combine these critic estimated action values to control the variance of gradient estimation.
These efforts result in a new discrete action on-policy RL algorithm that empirically outperforms related on-policy algorithms relying on variance control techniques (a generic all-action estimator is sketched after this entry).
arXiv Detail & Related papers (2020-02-10T04:23:09Z)
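The last entry's variance-control idea, combining critic-estimated values across correlated discrete actions, resembles an "all-action" gradient estimator that averages over the full action distribution instead of a single sampled action. The sketch below shows that generic estimator under this reading; it is not claimed to be the paper's exact algorithm.

```python
import torch

def all_action_policy_gradient_loss(policy_logits, q_values):
    """Surrogate loss whose gradient equals E_{a~pi}[grad log pi(a|s) * Q(s,a)],
    computed exactly over the discrete action space rather than from a single
    sampled action, removing the sampling variance over actions."""
    probs = torch.softmax(policy_logits, dim=-1)
    # Detach Q so gradients flow only through the policy parameters.
    return -(probs * q_values.detach()).sum(dim=-1).mean()
```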