Curiosity-Driven Reinforcement Learning based Low-Level Flight Control
- URL: http://arxiv.org/abs/2307.15724v1
- Date: Fri, 28 Jul 2023 11:46:28 GMT
- Title: Curiosity-Driven Reinforcement Learning based Low-Level Flight Control
- Authors: Amir Ramezani Dooraki and Alexandros Iosifidis
- Abstract summary: This work proposes a curiosity-driven algorithm that autonomously learns flight control by generating appropriate motor speeds from odometry data.
We ran tests with on-policy and off-policy baselines, an on-policy-plus-curiosity variant, and the proposed algorithm, and visualized how curiosity shapes the evolution of exploration patterns.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Curiosity is one of the main motives driving exploration in many
natural creatures with measurable levels of intelligence, and it leads to more
efficient learning. It enables humans and many animals to explore efficiently
by seeking out states that surprise them, with the goal of learning more about
what they do not know. As a result, while being curious, they learn better. In
the machine learning literature, curiosity is mostly combined with
reinforcement learning algorithms as an intrinsic reward. This work proposes a
curiosity-driven algorithm that autonomously learns to control by generating
appropriate motor speeds from odometry data. The quadcopter controlled by our
proposed algorithm can pass through obstacles while keeping its yaw directed
toward the desired location. To achieve this, we also propose a new curiosity
approach based on prediction error. We ran tests with on-policy and off-policy
baselines, an on-policy-plus-curiosity variant, and the proposed algorithm,
and visualized the effect of curiosity on evolving exploration patterns. The
results show that the proposed algorithm can learn an optimal policy and
maximize reward where the other algorithms fail to do so.
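The paper's implementation is not reproduced here, but the abstract's core idea, prediction error as an intrinsic reward, can be sketched in a few lines. In the sketch below, a forward model predicts the next state from the current state and action, its squared prediction error is the curiosity signal, and the total reward adds this term to the extrinsic reward. The linear model, the odometry/motor dimensions, the learning rate, and the weighting coefficient `beta` are all illustrative assumptions, not the authors' design.
```python
# A minimal sketch of prediction-error curiosity (not the paper's exact
# formulation): a learned forward model predicts the next state; transitions
# it predicts poorly are "surprising" and earn a large intrinsic reward.
import numpy as np

class ForwardModelCuriosity:
    def __init__(self, state_dim, action_dim, lr=1e-2, beta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Linear forward model s' ~ W @ [s; a] -- a deliberate simplification;
        # a real agent would use a neural network here.
        self.W = rng.normal(scale=0.1, size=(state_dim, state_dim + action_dim))
        self.lr = lr      # step size for the online model update (assumed)
        self.beta = beta  # weight of the intrinsic term (assumed)

    def intrinsic_reward(self, s, a, s_next):
        x = np.concatenate([s, a])
        err = s_next - self.W @ x
        # Online SGD on the squared error: familiar transitions become
        # predictable and gradually stop paying intrinsic reward.
        self.W += self.lr * np.outer(err, x)
        return float(err @ err)  # prediction error = curiosity signal

    def total_reward(self, r_ext, s, a, s_next):
        return r_ext + self.beta * self.intrinsic_reward(s, a, s_next)

# Usage on one dummy transition (odometry-like state, motor-speed action):
cur = ForwardModelCuriosity(state_dim=12, action_dim=4)
s, a, s_next = np.zeros(12), np.ones(4), 0.1 * np.ones(12)
print(cur.total_reward(r_ext=-1.0, s=s, a=a, s_next=s_next))
```
Because the model improves as transitions repeat, the intrinsic bonus decays on well-explored states, which is what drives the evolving exploration patterns the experiments visualize.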
Related papers
- Reward Finetuning for Faster and More Accurate Unsupervised Object Discovery
Reinforcement Learning from Human Feedback (RLHF) can improve machine learning models and align them with human preferences.
We propose to adapt similar RL-based methods to unsupervised object discovery.
We demonstrate that our approach is not only more accurate, but also orders of magnitude faster to train.
arXiv Detail & Related papers (2023-10-29T17:03:12Z)
- Reinforcement Learning Algorithms: An Overview and Classification
We identify three main environment types and classify reinforcement learning algorithms according to those environment types.
The overview of each algorithm provides insight into the algorithms' foundations and reviews similarities and differences among algorithms.
arXiv Detail & Related papers (2022-09-29T16:58:42Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL), in which agents are self-motivated to learn novel knowledge, has become increasingly popular.
arXiv Detail & Related papers (2022-01-20T17:07:03Z)
- Open-World Active Learning with Stacking Ensemble for Self-Driving Cars
We propose an algorithm not only to identify all the known entities that may appear in front of the car, but also to detect unknown objects and learn their classes.
Our approach relies on the DOC algorithm as well as on the Query-by-Committee algorithm.
arXiv Detail & Related papers (2021-09-10T19:06:37Z)
- MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning
We show that an uncertainty-aware classifier can solve challenging reinforcement learning problems.
We propose a novel method for computing the normalized maximum likelihood (NML) distribution.
We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions.
arXiv Detail & Related papers (2021-07-15T08:19:57Z)
- Discovering Reinforcement Learning Algorithms
Reinforcement learning algorithms update an agent's parameters according to one of several possible rules.
This paper introduces a new meta-learning approach that discovers an entire update rule.
The discovered rule specifies both 'what to predict' (e.g., value functions) and 'how to learn from it', and is found by interacting with a set of environments.
arXiv Detail & Related papers (2020-07-17T07:38:39Z)
- Meta-learning curiosity algorithms
We formulate the problem of generating curious behavior as one of meta-learning.
Our rich language of programs combines neural networks with other building blocks such as buffers, nearest-neighbor modules and custom loss functions.
We find two novel curiosity algorithms that perform on par with or better than published human-designed curiosity algorithms in domains as disparate as grid navigation with image inputs, acrobot, lunar lander, ant, and hopper.
arXiv Detail & Related papers (2020-03-11T14:25:43Z)
- AutoML-Zero: Evolving Machine Learning Algorithms From Scratch
We show that it is possible to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.
We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction in the field.
arXiv Detail & Related papers (2020-03-06T19:00:04Z)