Dynamic deep-reinforcement-learning algorithm in Partially Observed
Markov Decision Processes
- URL: http://arxiv.org/abs/2307.15931v1
- Date: Sat, 29 Jul 2023 08:52:35 GMT
- Title: Dynamic deep-reinforcement-learning algorithm in Partially Observed
Markov Decision Processes
- Authors: Saki Omi, Hyo-Sang Shin, Namhoon Cho, Antonios Tsourdos
- Abstract summary: This study shows the benefit of including action sequences when solving Partially Observable Markov Decision Processes.
The developed algorithms showed enhanced robustness of controller performance against different types of external disturbances.
- Score: 6.729108277517129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning has improved greatly in recent studies, and
interest in real-world implementation has grown in recent years. In many
cases, non-static disturbances make it challenging for the agent to maintain
its performance. Such disturbances turn the environment into a Partially
Observable Markov Decision Process (POMDP). In common practice, a POMDP is
handled by introducing an additional estimator, or a Recurrent Neural Network
is utilized in the context of reinforcement learning. Both cases require
processing sequential information along the trajectory. However, only a few
studies have investigated which information to include and which network
structure should handle it. This study shows the benefit of including action
sequences when solving a POMDP. Several structures and approaches are proposed
to extend one of the latest deep reinforcement learning algorithms with LSTM
networks. The developed algorithms show enhanced robustness of controller
performance against different types of external disturbances added to the
observation.
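As an illustration of the action-sequence idea described in the abstract, the minimal PyTorch sketch below feeds each observation concatenated with the previous action into an LSTM Q-network, so the recurrent state can summarize the action history. The class name, dimensions, and value head are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (not the paper's exact architecture): an LSTM Q-network
# whose input at each step is the observation concatenated with the
# previous action, letting the hidden state summarize the action history.
import torch
import torch.nn as nn

class RecurrentQNet(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        # Input = current observation + previous action (one-hot for discrete).
        self.lstm = nn.LSTM(obs_dim + act_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)  # one Q-value per action

    def forward(self, obs_seq, prev_act_seq, hidden_state=None):
        # obs_seq: (batch, T, obs_dim); prev_act_seq: (batch, T, act_dim)
        x = torch.cat([obs_seq, prev_act_seq], dim=-1)
        out, hidden_state = self.lstm(x, hidden_state)
        return self.head(out), hidden_state  # Q(s_t, .) for every step t

# Usage: 8 trajectories, 10 steps, 4-dim observations, 3 discrete actions.
net = RecurrentQNet(obs_dim=4, act_dim=3)
q, h = net(torch.randn(8, 10, 4), torch.randn(8, 10, 3))
print(q.shape)  # torch.Size([8, 10, 3])
```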
Related papers
- Contrastive-Adversarial and Diffusion: Exploring pre-training and fine-tuning strategies for sulcal identification [3.0398616939692777]
Techniques like adversarial learning, contrastive learning, diffusion denoising learning, and ordinary reconstruction learning have become standard.
The study aims to elucidate the advantages of pre-training techniques and fine-tuning strategies to enhance the learning process of neural networks.
arXiv Detail & Related papers (2024-05-29T15:44:51Z) - GASE: Graph Attention Sampling with Edges Fusion for Solving Vehicle Routing Problems [6.084414764415137]
We propose an adaptive Graph Attention Sampling with Edges Fusion (GASE) framework to solve vehicle routing problems.
Our proposed model outperforms the existing methods by 2.08%-6.23% and shows stronger generalization ability.
arXiv Detail & Related papers (2024-05-21T03:33:07Z) - Provable Representation with Efficient Planning for Partial Observable Reinforcement Learning [74.67655210734338]
In most real-world reinforcement learning applications, state information is only partially observable, which breaks the Markov decision process assumption.
We develop a representation-based perspective that leads to a coherent framework and tractable algorithmic approach for practical reinforcement learning from partial observations.
We empirically demonstrate the proposed algorithm can surpass state-of-the-art performance with partial observations across various benchmarks.
arXiv Detail & Related papers (2023-11-20T23:56:58Z) - An Analytic End-to-End Deep Learning Algorithm based on Collaborative
Learning [5.710971447109949]
This paper presents a convergence analysis for end-to-end deep learning of fully connected neural networks (FNN) with smooth activation functions.
The proposed method avoids any potential chattering problem and does not easily lead to vanishing-gradient problems.
arXiv Detail & Related papers (2023-05-26T08:09:03Z) - ASR: Attention-alike Structural Re-parameterization [53.019657810468026]
We propose a simple yet effective attention-alike structural re-parameterization (ASR) that achieves structural re-parameterization (SRP) for a given network while enjoying the effectiveness of the attention mechanism.
In this paper, we conduct extensive experiments from a statistical perspective and discover an interesting phenomenon, the Stripe Observation, which reveals that channel attention values quickly approach constant vectors during training.
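The Stripe Observation suggests a concrete consequence that the following hedged PyTorch sketch illustrates (an illustration of the general folding principle, not the ASR method itself): if channel attention converges to a constant vector, the per-channel scaling can be folded into the convolution weights at inference time.

```python
# Sketch: fold a constant per-channel attention vector `a` into a conv layer,
# so the folded conv's output equals a.view(1, -1, 1, 1) * conv(x).
import torch
import torch.nn as nn

def fold_constant_attention(conv: nn.Conv2d, a: torch.Tensor) -> nn.Conv2d:
    folded = nn.Conv2d(conv.in_channels, conv.out_channels,
                       conv.kernel_size, conv.stride, conv.padding,
                       bias=conv.bias is not None)
    with torch.no_grad():
        # Scaling the output channel is the same as scaling its filter + bias.
        folded.weight.copy_(conv.weight * a.view(-1, 1, 1, 1))
        if conv.bias is not None:
            folded.bias.copy_(conv.bias * a)
    return folded

conv = nn.Conv2d(3, 8, 3, padding=1)
a = torch.rand(8)  # a constant per-channel attention vector
x = torch.randn(1, 3, 16, 16)
ref = a.view(1, -1, 1, 1) * conv(x)
assert torch.allclose(fold_constant_attention(conv, a)(x), ref, atol=1e-5)
```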
arXiv Detail & Related papers (2023-04-13T08:52:34Z) - Opportunistic Episodic Reinforcement Learning [9.364712393700056]
Opportunistic reinforcement learning is a new variant of reinforcement learning problems in which the regret of selecting a suboptimal action varies with an external environmental condition known as the variation factor.
Our intuition is to exploit more when the variation factor is high, and explore more when the variation factor is low.
Our algorithms balance the exploration-exploitation trade-off for reinforcement learning by introducing variation factor-dependent optimism to guide exploration.
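The mechanism lends itself to a small illustration. The bandit-style NumPy sketch below is a simplification with an assumed scaling rule, not the paper's algorithm: the UCB-style optimism bonus shrinks when the variation factor (and hence the regret of a suboptimal pull) is high, so the agent exploits then and explores when mistakes are cheap.

```python
# Toy opportunistic bandit: variation-factor-dependent optimism.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.3, 0.5, 0.7])        # latent arm values (unknown to agent)
counts, values = np.ones(3), np.zeros(3)

for t in range(1, 1001):
    variation = 0.5 * (1 + np.sin(t / 50))   # external variation factor in [0, 1]
    # Optimism bonus shrinks when variation (and hence regret) is high.
    bonus = (1.0 - variation) * np.sqrt(2.0 * np.log(t) / counts)
    arm = int(np.argmax(values + bonus))
    reward = variation * means[arm] + 0.1 * rng.standard_normal()
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print("pulls per arm:", counts.astype(int))
```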
arXiv Detail & Related papers (2022-10-24T18:02:33Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Learning Dynamics and Generalization in Reinforcement Learning [59.530058000689884]
We show theoretically that temporal difference learning encourages agents to fit non-smooth components of the value function early in training.
We show that neural networks trained using temporal difference algorithms on dense reward tasks exhibit weaker generalization between states than randomly initialized networks and networks trained with policy gradient methods.
arXiv Detail & Related papers (2022-06-05T08:49:16Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
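One way to picture intervention targeting, sketched below in NumPy, is to intervene where candidate causal graphs disagree most rather than at random; the ensemble representation and the entropy-based scoring rule are simplifying assumptions for illustration, not the paper's mechanism.

```python
# Toy sketch: pick the intervention node whose edges a bootstrap ensemble
# of candidate causal graphs is most uncertain about.
import numpy as np

rng = np.random.default_rng(1)
ensemble = rng.random((20, 5, 5)) > 0.5        # 20 candidate graphs, 5 nodes

edge_prob = ensemble.mean(axis=0)              # P(edge i -> j) under ensemble
entropy = -(edge_prob * np.log(edge_prob + 1e-9)
            + (1 - edge_prob) * np.log(1 - edge_prob + 1e-9))
# Total uncertainty over each node's incoming and outgoing edges.
disagreement = entropy.sum(axis=0) + entropy.sum(axis=1)
target = int(np.argmax(disagreement))
print("intervene on node", target)
```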
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - Amortized Variational Deep Q Network [28.12600565839504]
We propose an amortized variational inference framework to approximate the posterior distribution of the action-value function in a Deep Q Network.
We show that the amortized framework can result in significantly fewer learnable parameters than the existing state-of-the-art method.
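A rough PyTorch sketch of the idea follows: a single network amortizes a Gaussian posterior over Q-values (mean and scale per action), and sampling from it gives Thompson-sampling-style exploration. Names, the Gaussian family, and the clamping range are illustrative assumptions.

```python
# Sketch: amortized Gaussian posterior over action values.
import torch
import torch.nn as nn

class VariationalQNet(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, n_actions)       # posterior mean of Q
        self.log_std = nn.Linear(hidden, n_actions)  # posterior scale of Q

    def forward(self, obs):
        h = self.trunk(obs)
        return self.mu(h), self.log_std(h).clamp(-5, 2).exp()

net = VariationalQNet(obs_dim=4, n_actions=2)
mu, std = net(torch.randn(1, 4))
q_sample = mu + std * torch.randn_like(std)  # reparameterized posterior sample
action = q_sample.argmax(dim=-1)             # act greedily w.r.t. the sample
```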
arXiv Detail & Related papers (2020-11-03T13:48:18Z) - Untangling tradeoffs between recurrence and self-attention in neural
networks [81.30894993852813]
We present a formal analysis of how self-attention affects gradient propagation in recurrent networks.
We prove that it mitigates the problem of vanishing gradients when trying to capture long-term dependencies.
We propose a relevancy screening mechanism that allows for a scalable use of sparse self-attention with recurrence.
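A toy PyTorch sketch of relevancy screening follows: instead of attending over the full history, keep only the top-k past hidden states by a learned relevance score and attend to that sparse memory. The module name, scoring head, and dimensions are assumptions, not the paper's exact mechanism.

```python
# Sketch: sparse self-attention over the k most relevant past RNN states.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScreenedAttention(nn.Module):
    def __init__(self, dim: int, k: int = 4):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # learned relevance score per memory slot
        self.k = k

    def forward(self, query, memory):
        # query: (batch, dim); memory: (batch, T, dim) of past hidden states
        rel = self.score(memory).squeeze(-1)                  # (batch, T)
        idx = rel.topk(min(self.k, memory.size(1)), dim=1).indices
        kept = memory.gather(1, idx.unsqueeze(-1).expand(-1, -1, memory.size(-1)))
        attn = F.softmax(torch.einsum("bd,bkd->bk", query, kept), dim=-1)
        return torch.einsum("bk,bkd->bd", attn, kept)         # attended context

ctx = ScreenedAttention(dim=8, k=3)(torch.randn(2, 8), torch.randn(2, 10, 8))
print(ctx.shape)  # torch.Size([2, 8])
```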
arXiv Detail & Related papers (2020-06-16T19:24:25Z)