Deep Model-Based Reinforcement Learning for High-Dimensional Problems, a Survey
- URL: http://arxiv.org/abs/2008.05598v2
- Date: Tue, 1 Dec 2020 22:40:17 GMT
- Title: Deep Model-Based Reinforcement Learning for High-Dimensional Problems, a Survey
- Authors: Aske Plaat, Walter Kosters, Mike Preuss
- Abstract summary: Model-based reinforcement learning creates an explicit model of the environment dynamics to reduce the need for environment samples.
A challenge for deep model-based methods is to achieve high predictive power while maintaining low sample complexity.
We propose a taxonomy based on three approaches: using explicit planning on given transitions, using explicit planning on learned transitions, and end-to-end learning of both planning and transitions.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep reinforcement learning has shown remarkable success in the past few
years. Highly complex sequential decision making problems have been solved in
tasks such as game playing and robotics. Unfortunately, the sample complexity
of most deep reinforcement learning methods is high, precluding their use in
some important applications. Model-based reinforcement learning creates an
explicit model of the environment dynamics to reduce the need for environment
samples. Current deep learning methods use high-capacity networks to solve
high-dimensional problems. Unfortunately, high-capacity models typically
require many samples, negating the potential benefit of lower sample complexity
in model-based methods. A challenge for deep model-based methods is therefore
to achieve high predictive power while maintaining low sample complexity. In
recent years, many model-based methods have been introduced to address this
challenge. In this paper, we survey the contemporary model-based landscape.
First we discuss definitions and relations to other fields. We propose a
taxonomy based on three approaches: using explicit planning on given
transitions, using explicit planning on learned transitions, and end-to-end
learning of both planning and transitions. We use these approaches to organize
a comprehensive overview of important recent developments such as latent
models. We describe methods and benchmarks, and we suggest directions for
future work for each of the approaches. Among promising research directions are
curriculum learning, uncertainty modeling, and use of latent models for
transfer learning.
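The survey's second taxonomy category, explicit planning on learned transitions, can be illustrated with a minimal sketch: an agent fits an empirical dynamics model from a limited batch of environment samples, then plans on that model with value iteration instead of querying the environment further. The chain MDP, sample budget, and hyperparameters below are illustrative assumptions, not taken from the survey.

```python
# Minimal sketch: explicit planning on *learned* transitions.
# The environment, sample budget, and gamma are hypothetical choices.
import random

N_STATES, GOAL = 5, 4          # chain of states 0..4; reward only at state 4
ACTIONS = [-1, +1]             # move left / move right

def step(s, a):
    """True environment dynamics (hidden from the planner)."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, 1.0 if s2 == GOAL else 0.0

# 1) Learn a model from a limited batch of environment samples.
random.seed(0)
model = {}                     # (s, a) -> (s', r), empirical estimate
for _ in range(200):
    s = random.randrange(N_STATES)
    a = random.choice(ACTIONS)
    model[(s, a)] = step(s, a)
assert all((s, a) in model for s in range(N_STATES) for a in ACTIONS)

# 2) Plan on the learned model with value iteration -- no further env samples.
gamma, V = 0.9, [0.0] * N_STATES
for _ in range(100):
    V = [max(model[(s, a)][1] + gamma * V[model[(s, a)][0]] for a in ACTIONS)
         for s in range(N_STATES)]

# 3) Extract a greedy policy from the planned values.
policy = [max(ACTIONS, key=lambda a: model[(s, a)][1] + gamma * V[model[(s, a)][0]])
          for s in range(N_STATES)]
print(policy)  # every state moves right, toward the goal: [1, 1, 1, 1, 1]
```

With 200 samples for only 10 state-action pairs the learned model is exact, so planning recovers the optimal policy; the sample-complexity challenge discussed above arises precisely because high-dimensional problems do not permit this kind of exhaustive coverage.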
Related papers
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities
Model merging is an efficient empowerment technique in the machine learning community.
There is a significant gap in the literature regarding a systematic and thorough review of these techniques.
arXiv Detail & Related papers (2024-08-14T16:58:48Z)
- Learning-based Models for Vulnerability Detection: An Extensive Study
We extensively and comprehensively investigate two types of state-of-the-art learning-based approaches.
We experimentally demonstrate the superiority of sequence-based models and the limited capabilities of graph-based models.
arXiv Detail & Related papers (2024-08-14T13:01:30Z)
- Deep Generative Models in Robotics: A Survey on Learning from Multimodal Demonstrations
In recent years, the robot learning community has shown increasing interest in using deep generative models to capture the complexity of large datasets.
We present the different types of models that the community has explored, such as energy-based models, diffusion models, action value maps, or generative adversarial networks.
We also present the different types of applications in which deep generative models have been used, from grasp generation to trajectory generation or cost learning.
arXiv Detail & Related papers (2024-08-08T11:34:31Z)
- Towards Learning Stochastic Population Models by Gradient Descent
We show that simultaneous estimation of parameters and structure poses major challenges for optimization procedures.
We demonstrate accurate estimation of models but find that enforcing the inference of parsimonious, interpretable models drastically increases the difficulty.
arXiv Detail & Related papers (2024-04-10T14:38:58Z)
- MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning
Reasoning in mathematical domains remains a significant challenge for small language models (LMs).
We introduce a new method that exploits existing mathematical problem datasets with diverse annotation styles.
Experimental results show that our strategy enables a LLaMA-7B model to outperform prior approaches.
arXiv Detail & Related papers (2023-07-16T05:41:53Z)
- Deep Generative Models for Decision-Making and Control
The dual purpose of this thesis is to study the reasons for these shortcomings and to propose solutions for the uncovered problems.
We highlight how inference techniques from the contemporary generative modeling toolbox, including beam search, can be reinterpreted as viable planning strategies for reinforcement learning problems.
arXiv Detail & Related papers (2023-06-15T01:54:30Z)
- SAGE: Generating Symbolic Goals for Myopic Models in Deep Reinforcement Learning
We propose an algorithm combining learning and planning to exploit a previously unusable class of incomplete models.
This combines the strengths of symbolic planning and neural learning approaches in a novel way that outperforms competing methods on variations of taxi world and Minecraft.
arXiv Detail & Related papers (2022-03-09T22:55:53Z)
- High-Accuracy Model-Based Reinforcement Learning, a Survey
Deep reinforcement learning has shown remarkable success in game playing and robotics.
To reduce the number of environment samples, model-based reinforcement learning creates an explicit model of the environment dynamics.
Some of these methods achieve high accuracy at low sample complexity; most do so in either a robotics or a games context.
arXiv Detail & Related papers (2021-07-17T14:01:05Z)
- Model Complexity of Deep Learning: A Survey
We conduct a systematic overview of the latest studies on model complexity in deep learning.
We review the existing studies on those two categories along four important factors, including model framework, model size, optimization process and data complexity.
arXiv Detail & Related papers (2021-03-08T22:39:32Z)
- Model-Based Deep Learning
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Goal-Aware Prediction: Learning to Model What Matters
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task relevant information, enabling the model to be aware of the current task and encouraging it to only model relevant quantities of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
arXiv Detail & Related papers (2020-07-14T16:42:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.