Low-level Pose Control of Tilting Multirotor for Wall Perching Tasks
Using Reinforcement Learning
- URL: http://arxiv.org/abs/2108.05457v1
- Date: Wed, 11 Aug 2021 21:39:51 GMT
- Title: Low-level Pose Control of Tilting Multirotor for Wall Perching Tasks
Using Reinforcement Learning
- Authors: Hyungyu Lee, Myeongwoo Jeong, Chanyoung Kim, Hyungtae Lim, Changgue
Park, Sungwon Hwang, and Hyun Myung
- Abstract summary: We propose a novel reinforcement learning-based method to control a tilting multirotor in real-world applications.
Our proposed method shows robust controllability by overcoming the complex dynamics of tilting multirotors.
- Score: 2.5903488573278284
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, the need for unmanned aerial vehicles (UAVs) that can attach to
walls has been highlighted. To address this need, research on various tilting
multirotors that can increase maneuverability has been conducted. Unfortunately,
existing studies on tilting multirotors require considerable amounts of prior
information about the complex dynamic model. Meanwhile, reinforcement learning
on quadrotors has been studied to mitigate this issue. Yet, it has only been
applied to standard quadrotors, whose systems are less complex than those of
tilting multirotors. In this paper, a novel reinforcement learning-based method
is proposed to control a tilting multirotor in real-world applications, which is
the first attempt to apply reinforcement learning to a tilting multirotor. To do
so, we propose a novel reward function for a neural network model that takes
power efficiency into account. The model is initially trained in a simulated
environment and then fine-tuned using real-world data in order to overcome the
sim-to-real gap. Furthermore, a novel, efficient state representation with
respect to the goal frame is proposed, which helps the network learn the optimal
policy better. As verified in real-world experiments, our proposed method shows
robust controllability by overcoming the complex dynamics of tilting multirotors.
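The abstract describes two key ingredients without giving their exact form: a state representation expressed in the goal frame, and a reward function that penalizes pose error while accounting for power efficiency. A minimal sketch of how these two pieces might look is shown below; the weight values, the quadratic-thrust power proxy, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def state_in_goal_frame(p_err_world, R_goal):
    """Express the position error in the goal frame.

    p_err_world: position error in the world frame, shape (3,)
    R_goal:      rotation matrix of the goal frame w.r.t. the world
                 frame, shape (3, 3)

    Rotating the error into the goal frame makes the policy input
    invariant to where the goal (e.g. a wall) sits in the world.
    """
    return R_goal.T @ p_err_world

def reward(pos_err_goal, ang_err, motor_thrusts,
           w_pos=1.0, w_ang=0.5, w_power=0.01):
    """Shaped reward: penalize pose error and a proxy for power draw.

    The sum of squared motor thrusts stands in for electrical power;
    the weights w_pos, w_ang, w_power are assumed, untuned values.
    """
    pos_cost = w_pos * np.linalg.norm(pos_err_goal)
    ang_cost = w_ang * abs(ang_err)
    power_cost = w_power * np.sum(np.square(motor_thrusts))
    return -(pos_cost + ang_cost + power_cost)
```

Under this formulation, the reward is maximal (zero) only at the goal pose with zero thrust, so the agent is pushed both toward the target and toward energy-efficient actuation.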
Related papers
- Revisiting Robust RAG: Do We Still Need Complex Robust Training in the Era of Powerful LLMs? [69.38149239733994]
We investigate whether complex robust training strategies remain necessary as model capacity grows.
We find that as models become more powerful, the performance gains brought by complex robust training methods drop off dramatically.
Our findings suggest that RAG systems can benefit from simpler architectures and training strategies as models become more powerful.
arXiv Detail & Related papers (2025-02-17T03:34:31Z) - Perspectives for Direct Interpretability in Multi-Agent Deep Reinforcement Learning [0.41783829807634765]
Multi-Agent Deep Reinforcement Learning (MADRL) has proven efficient at solving complex problems in robotics and games.
This paper advocates for direct interpretability, generating post hoc explanations directly from trained models.
We explore modern methods, including relevance backpropagation, knowledge edition, model steering, activation patching, sparse autoencoders and circuit discovery.
arXiv Detail & Related papers (2025-02-02T09:15:27Z) - RILe: Reinforced Imitation Learning [60.63173816209543]
RILe is a framework that combines the strengths of imitation learning and inverse reinforcement learning to learn a dense reward function efficiently.
Our framework produces high-performing policies in high-dimensional tasks where direct imitation fails to replicate complex behaviors.
arXiv Detail & Related papers (2024-06-12T17:56:31Z) - Learning to Fly in Seconds [7.259696592534715]
We show how curriculum learning and a highly optimized simulator enhance sample complexity and lead to fast training times.
Our framework enables Simulation-to-Reality (Sim2Real) transfer for direct control after only 18 seconds of training on a consumer-grade laptop.
arXiv Detail & Related papers (2023-11-22T01:06:45Z) - Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for
Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our insights are to utilize offline reinforcement learning techniques to ensure efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z) - PASTA: Pretrained Action-State Transformer Agents [10.654719072766495]
Self-supervised learning has brought about a revolutionary paradigm shift in various computing domains.
Recent approaches involve pre-training transformer models on vast amounts of unlabeled data.
In reinforcement learning, researchers have recently adapted these approaches, developing models pre-trained on expert trajectories.
arXiv Detail & Related papers (2023-07-20T15:09:06Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Learning to Fly -- a Gym Environment with PyBullet Physics for
Reinforcement Learning of Multi-agent Quadcopter Control [0.0]
We propose an open-source environment for multiple quadcopters based on the Bullet physics engine.
Its multi-agent and vision based reinforcement learning interfaces, as well as the support of realistic collisions and aerodynamic effects, make it, to the best of our knowledge, a first of its kind.
arXiv Detail & Related papers (2021-03-03T02:47:59Z) - UPDeT: Universal Multi-agent Reinforcement Learning via Policy
Decoupling with Transformers [108.92194081987967]
We make the first attempt to explore a universal multi-agent reinforcement learning pipeline, designing a single architecture to fit various tasks.
Unlike previous RNN-based models, we utilize a transformer-based model to generate a flexible policy.
The proposed model, named Universal Policy Decoupling Transformer (UPDeT), further relaxes the action restriction and makes the multi-agent task's decision process more explainable.
arXiv Detail & Related papers (2021-01-20T07:24:24Z) - Learning Diverse Representations for Fast Adaptation to Distribution
Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.