Dealing with uncertainty: balancing exploration and exploitation in deep
recurrent reinforcement learning
- URL: http://arxiv.org/abs/2310.08331v2
- Date: Tue, 20 Feb 2024 09:11:42 GMT
- Title: Dealing with uncertainty: balancing exploration and exploitation in deep
recurrent reinforcement learning
- Authors: Valentina Zangirolami and Matteo Borrotti
- Abstract summary: Incomplete knowledge of the environment leads an agent to make decisions under uncertainty.
One of the major dilemmas in Reinforcement Learning (RL) is that an autonomous agent has to balance two contrasting needs in making its decisions: exploiting its current knowledge of the environment and exploring to improve it.
We show that adaptive methods better approximate the trade-off between exploration and exploitation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Incomplete knowledge of the environment leads an agent to make decisions
under uncertainty. One of the major dilemmas in Reinforcement Learning (RL) is
that an autonomous agent must balance two contrasting needs when making its
decisions: exploiting the current knowledge of the environment to maximize the
cumulative reward, and exploring actions that improve its knowledge of the
environment, hopefully leading to higher reward values (the
exploration-exploitation trade-off). Concurrently, another relevant issue is the
full observability of the states, which cannot be assumed in all applications,
for instance when 2D images are used as input to an RL approach that selects
actions within a 3D simulation environment. In this work, we address these
issues by deploying and testing several techniques that balance the
exploration-exploitation trade-off on partially observable systems for
predicting steering wheel angles in autonomous driving scenarios. More
precisely, the final aim is to investigate the effects of using
both adaptive and deterministic exploration strategies coupled with a Deep
Recurrent Q-Network. Additionally, we adapted and evaluated the impact of a
modified quadratic loss function to improve the learning phase of the
underlying Convolutional Recurrent Neural Network. We show that adaptive
methods better approximate the trade-off between exploration and exploitation
and, in general, Softmax and Max-Boltzmann strategies outperform epsilon-greedy
techniques.
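To make the compared strategies concrete, below is a minimal sketch of epsilon-greedy, Softmax (Boltzmann), and Max-Boltzmann action selection over the Q-values of a recurrent Q-network; the fixed epsilon and temperature values are placeholders, not the adaptive schedules evaluated in the paper.
```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon take a uniformly random action, otherwise the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def softmax_action(q_values, temperature=1.0):
    """Boltzmann (Softmax) exploration: sample actions with probability proportional to exp(Q / T)."""
    logits = np.asarray(q_values, dtype=float) / temperature
    logits -= logits.max()                          # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return int(rng.choice(len(q_values), p=probs))

def max_boltzmann(q_values, epsilon=0.1, temperature=1.0):
    """Max-Boltzmann: act greedily with probability 1 - epsilon, otherwise sample from the Boltzmann distribution."""
    if rng.random() < epsilon:
        return softmax_action(q_values, temperature)
    return int(np.argmax(q_values))

# Example: Q-values for three steering actions (e.g. left, straight, right)
q = np.array([0.2, 1.5, -0.3])
print(epsilon_greedy(q), softmax_action(q), max_boltzmann(q))
```
Adaptive variants adjust epsilon or the temperature during training rather than keeping them fixed, which is the behaviour the paper finds to better balance exploration and exploitation.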
Related papers
- Towards Cost Sensitive Decision Making [14.279123976398926]
In this work, we consider RL models that may actively acquire features from the environment to improve the decision quality and certainty.
We propose the Active-Acquisition POMDP and identify two types of the acquisition process for different application domains.
In order to assist the agent in the actively-acquired partially-observed environment and alleviate the exploration-exploitation dilemma, we develop a model-based approach.
arXiv Detail & Related papers (2024-10-04T19:48:23Z) - No Regrets: Investigating and Improving Regret Approximations for Curriculum Discovery [53.08822154199948]
Unsupervised Environment Design (UED) methods have gained recent attention as their adaptive curricula promise to enable agents to be robust to in- and out-of-distribution tasks.
This work investigates how existing UED methods select training environments, focusing on task prioritisation metrics.
We develop a method that directly trains on scenarios with high learnability.
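As an illustration of training on high-learnability scenarios, here is a minimal sketch that scores each scenario by p(1 - p) over the agent's recent success rate p and samples proportionally; the exact score and sampling scheme are assumptions for exposition, not necessarily the paper's formulation.
```python
import numpy as np

def learnability(success_rate):
    """Scenarios solved always (p ~ 1) or never (p ~ 0) score low; those solved about half the time score highest."""
    p = np.clip(np.asarray(success_rate, dtype=float), 0.0, 1.0)
    return p * (1.0 - p)

def sample_scenarios(success_rates, k, rng=None):
    """Sample k scenario indices with probability proportional to their learnability score."""
    rng = rng or np.random.default_rng()
    scores = learnability(success_rates)
    probs = scores / scores.sum() if scores.sum() > 0 else np.full(len(scores), 1 / len(scores))
    return rng.choice(len(scores), size=k, replace=False, p=probs)

# Example: four candidate scenarios with different recent success rates
print(sample_scenarios([0.0, 0.5, 0.9, 1.0], k=2))
```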
arXiv Detail & Related papers (2024-08-27T14:31:54Z) - The Exploration-Exploitation Dilemma Revisited: An Entropy Perspective [18.389232051345825]
In policy optimization, excessive reliance on exploration reduces learning efficiency, while over-dependence on exploitation might trap agents in local optima.
This paper revisits the exploration-exploitation dilemma from the perspective of entropy.
We establish an end-to-end adaptive framework called AdaZero, which automatically determines whether to explore or to exploit.
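Purely as an illustration of entropy-driven switching between exploration and exploitation, a minimal sketch in which the agent explores while its policy entropy is above a threshold and exploits otherwise; the gating rule and threshold are assumptions for exposition, not AdaZero's actual mechanism.
```python
import numpy as np

def policy_entropy(action_probs):
    """Shannon entropy of the current action distribution."""
    p = np.clip(np.asarray(action_probs, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def select_action(action_probs, q_values, entropy_threshold=0.5, rng=None):
    """Explore (sample from the policy) while the policy is still uncertain, i.e. its
    entropy is high; exploit (greedy on Q) once the entropy falls below the threshold."""
    rng = rng or np.random.default_rng()
    if policy_entropy(action_probs) > entropy_threshold:
        return int(rng.choice(len(action_probs), p=action_probs))
    return int(np.argmax(q_values))

# Example: an uncertain policy explores, a peaked one exploits
print(select_action([0.3, 0.4, 0.3], q_values=[0.1, 0.9, 0.2]))     # high entropy: samples an action
print(select_action([0.98, 0.01, 0.01], q_values=[0.1, 0.9, 0.2]))  # low entropy: greedy, returns 1
```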
arXiv Detail & Related papers (2024-08-19T13:21:46Z) - Variable-Agnostic Causal Exploration for Reinforcement Learning [56.52768265734155]
We introduce a novel framework, Variable-Agnostic Causal Exploration for Reinforcement Learning (VACERL).
Our approach automatically identifies crucial observation-action steps associated with key variables using attention mechanisms.
It constructs the causal graph connecting these steps, which guides the agent towards observation-action pairs with greater causal influence on task completion.
arXiv Detail & Related papers (2024-07-17T09:45:27Z) - Ontology-Enhanced Decision-Making for Autonomous Agents in Dynamic and Partially Observable Environments [0.0]
This thesis introduces an ontology-enhanced decision-making model (OntoDeM) for autonomous agents.
OntoDeM enriches agents' domain knowledge, allowing them to interpret unforeseen events, generate or adapt goals, and make better decisions.
Compared to traditional and advanced learning algorithms, OntoDeM shows superior performance in improving agents' observations and decision-making in dynamic, partially observable environments.
arXiv Detail & Related papers (2024-05-27T22:52:23Z) - Uncertainty-Aware Decision Transformer for Stochastic Driving Environments [34.78461208843929]
We introduce an UNcertainty-awaRE deciSion Transformer (UNREST) for planning in driving environments.
UNREST estimates uncertainties by conditional mutual information between transitions and returns.
We replace the global returns in decision transformers with truncated returns less affected by environments to learn from actual outcomes.
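A minimal sketch of the truncated-return idea, i.e. computing returns-to-go over a limited horizon rather than to the end of the episode; the undiscounted sum and the horizon length are illustrative assumptions, not UNREST's exact training targets.
```python
import numpy as np

def global_returns_to_go(rewards):
    """Standard decision-transformer target: sum of all future rewards at each step."""
    return np.cumsum(np.asarray(rewards, dtype=float)[::-1])[::-1]

def truncated_returns_to_go(rewards, horizon):
    """Truncated target: sum of rewards over at most `horizon` future steps, so the
    label is less dominated by distant, environment-driven outcomes."""
    r = np.asarray(rewards, dtype=float)
    return np.array([r[t:t + horizon].sum() for t in range(len(r))])

rewards = [1.0, 0.0, -0.5, 2.0, 1.0]
print(global_returns_to_go(rewards))        # [3.5  2.5  2.5  3.   1. ]
print(truncated_returns_to_go(rewards, 2))  # [1.  -0.5  1.5  3.   1. ]
```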
arXiv Detail & Related papers (2023-09-28T12:44:51Z) - CCE: Sample Efficient Sparse Reward Policy Learning for Robotic Navigation via Confidence-Controlled Exploration [72.24964965882783]
Confidence-Controlled Exploration (CCE) is designed to enhance the training sample efficiency of reinforcement learning algorithms for sparse reward settings such as robot navigation.
CCE is based on a novel relationship we provide between gradient estimation and policy entropy.
We demonstrate through simulated and real-world experiments that CCE outperforms conventional methods that employ constant trajectory lengths and entropy regularization.
arXiv Detail & Related papers (2023-06-09T18:45:15Z) - Latent Exploration for Reinforcement Learning [87.42776741119653]
In Reinforcement Learning, agents learn policies by exploring and interacting with the environment.
We propose LATent TIme-Correlated Exploration (Lattice), a method to inject temporally-correlated noise into the latent state of the policy network.
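A minimal sketch of injecting temporally-correlated (here Ornstein-Uhlenbeck-style) noise into a policy's latent features rather than its actions; the encoder, head, and noise parameters are stand-ins for illustration, not Lattice's precise design.
```python
import numpy as np

class CorrelatedLatentNoise:
    """Ornstein-Uhlenbeck-style noise: successive samples are correlated in time,
    unlike the i.i.d. Gaussian noise usually added to actions."""
    def __init__(self, dim, theta=0.15, sigma=0.2, rng=None):
        self.theta, self.sigma = theta, sigma
        self.rng = rng or np.random.default_rng()
        self.state = np.zeros(dim)

    def sample(self):
        self.state += -self.theta * self.state + self.sigma * self.rng.normal(size=self.state.shape)
        return self.state

def act(observation, encoder, head, noise):
    """Perturb the latent features (not the final action) with correlated noise,
    so exploration is temporally consistent across successive steps."""
    latent = encoder(observation)      # e.g. output of the policy's encoder layers
    latent = latent + noise.sample()
    return head(latent)                # map the perturbed latent to an action

# Toy usage with stand-in encoder/head (real ones would be the policy network's layers)
noise = CorrelatedLatentNoise(dim=4)
encoder = lambda obs: np.tanh(obs @ np.ones((obs.shape[0], 4)) * 0.1)
head = lambda z: float(np.tanh(z).mean())
print(act(np.array([0.5, -0.2, 0.1]), encoder, head, noise))
```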
arXiv Detail & Related papers (2023-05-31T17:40:43Z) - Exploration via Planning for Information about the Optimal Trajectory [67.33886176127578]
We develop a method that allows us to plan for exploration while taking the task and the current knowledge into account.
We demonstrate that our method learns strong policies with 2x fewer samples than strong exploration baselines.
arXiv Detail & Related papers (2022-10-06T20:28:55Z) - Entropy Augmented Reinforcement Learning [0.0]
We propose a shifted Markov decision process (MDP) to encourage exploration and reinforce the agent's ability to escape from local optima.
Our experiments test augmented TRPO and PPO on MuJoCo benchmark tasks, indicating that the agent is encouraged towards higher-reward regions.
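As a generic illustration of an entropy-augmented (shifted) reward, a minimal sketch where the immediate reward is shifted by a bonus proportional to the policy's entropy; the coefficient and the exact form of the shift are assumptions, not necessarily the paper's formulation.
```python
import numpy as np

def entropy(action_probs):
    p = np.clip(np.asarray(action_probs, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def shifted_reward(reward, action_probs, alpha=0.01):
    """Augment the environment reward with an entropy bonus so the agent is
    rewarded for keeping its policy stochastic, helping it escape local optima."""
    return reward + alpha * entropy(action_probs)

# Example: the same environment reward is shifted more for a high-entropy policy
print(shifted_reward(1.0, [0.25, 0.25, 0.25, 0.25]))   # larger bonus
print(shifted_reward(1.0, [0.97, 0.01, 0.01, 0.01]))   # near-greedy policy, small bonus
```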
arXiv Detail & Related papers (2022-08-19T13:09:32Z) - Explore and Control with Adversarial Surprise [78.41972292110967]
Reinforcement learning (RL) provides a framework for learning goal-directed policies given user-specified rewards.
We propose a new unsupervised RL technique based on an adversarial game which pits two policies against each other to compete over the amount of surprise an RL agent experiences.
We show that our method leads to the emergence of complex skills by exhibiting clear phase transitions.
arXiv Detail & Related papers (2021-07-12T17:58:40Z)