OmniDrones: An Efficient and Flexible Platform for Reinforcement
Learning in Drone Control
- URL: http://arxiv.org/abs/2309.12825v1
- Date: Fri, 22 Sep 2023 12:26:36 GMT
- Title: OmniDrones: An Efficient and Flexible Platform for Reinforcement
Learning in Drone Control
- Authors: Botian Xu, Feng Gao, Chao Yu, Ruize Zhang, Yi Wu, Yu Wang
- Abstract summary: We introduce OmniDrones, an efficient and flexible platform tailored for reinforcement learning in drone control.
It employs a bottom-up design approach that allows users to easily design and experiment with various application scenarios.
It also offers a range of benchmark tasks, presenting challenges ranging from single-drone hovering to over-actuated system tracking.
- Score: 16.570253723823996
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we introduce OmniDrones, an efficient and flexible platform
tailored for reinforcement learning in drone control, built on Nvidia's
Omniverse Isaac Sim. It employs a bottom-up design approach that allows users
to easily design and experiment with various application scenarios on top of
GPU-parallelized simulations. It also offers a range of benchmark tasks,
presenting challenges ranging from single-drone hovering to over-actuated
system tracking. In summary, we propose an open-source drone simulation
platform, equipped with an extensive suite of tools for drone learning. It
includes 4 drone models, 5 sensor modalities, 4 control modes, over 10
benchmark tasks, and a selection of widely used RL baselines. To showcase the
capabilities of OmniDrones and to support future research, we also provide
preliminary results on these benchmark tasks. We hope this platform will
encourage further studies on applying RL to practical drone systems.
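The abstract highlights GPU-parallelized simulation, where a policy interacts with many environments in a single batched step. As a rough illustration of that workflow (not the actual OmniDrones API — the class, dynamics, and parameters below are invented for this sketch), a toy batched hover task might look like:

```python
import numpy as np

class BatchedHoverEnv:
    """Toy batched hover task: each of N 'drones' tries to hold a target
    altitude. Illustrative only; this is NOT the OmniDrones interface."""

    def __init__(self, num_envs=1024, target_alt=1.0, dt=0.02, seed=0):
        self.num_envs = num_envs
        self.target_alt = target_alt
        self.dt = dt
        self.rng = np.random.default_rng(seed)
        self.alt = None
        self.vel = None

    def reset(self):
        # Random initial altitudes around the target, zero vertical velocity.
        self.alt = self.rng.uniform(0.5, 1.5, self.num_envs)
        self.vel = np.zeros(self.num_envs)
        return self._obs()

    def step(self, thrust):
        # Simplified 1-D vertical dynamics: thrust minus gravity, Euler step.
        accel = thrust - 9.81
        self.vel += accel * self.dt
        self.alt += self.vel * self.dt
        reward = -np.abs(self.alt - self.target_alt)  # closer is better
        return self._obs(), reward

    def _obs(self):
        # Observation per drone: altitude error and vertical velocity.
        return np.stack([self.alt - self.target_alt, self.vel], axis=1)

env = BatchedHoverEnv(num_envs=1024)
obs = env.reset()
# A trivial proportional-derivative "policy" acting on all envs at once.
for _ in range(100):
    thrust = 9.81 - 5.0 * obs[:, 0] - 2.0 * obs[:, 1]
    obs, reward = env.step(thrust)
print(obs.shape)  # one observation row per parallel environment
```

The point is the batching pattern: observations, actions, and rewards are arrays with a leading `num_envs` dimension, so one policy evaluation advances thousands of simulated drones per step.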
Related papers
- Tiny Multi-Agent DRL for Twins Migration in UAV Metaverses: A Multi-Leader Multi-Follower Stackelberg Game Approach [57.15309977293297]
The synergy between Unmanned Aerial Vehicles (UAVs) and metaverses is giving rise to an emerging paradigm named UAV metaverses.
We propose a tiny machine-learning-based Stackelberg game framework that uses pruning techniques for efficient UAV-twin (UT) migration in UAV metaverses.
arXiv Detail & Related papers (2024-01-18T02:14:13Z)
- Chasing the Intruder: A Reinforcement Learning Approach for Tracking Intruder Drones [0.08192907805418582]
We propose a reinforcement learning based approach for identifying and tracking any intruder drone using a chaser drone.
Our proposed solution uses computer vision techniques interleaved with the policy learning framework of reinforcement learning.
The results show that the reinforcement learning based policy converges to identify and track the intruder drone.
arXiv Detail & Related papers (2023-09-10T16:31:40Z)
- TransVisDrone: Spatio-Temporal Transformer for Vision-based Drone-to-Drone Detection in Aerial Videos [57.92385818430939]
Drone-to-drone detection using visual feed has crucial applications, such as detecting drone collisions, detecting drone attacks, or coordinating flight with other drones.
Existing methods are computationally costly, follow non-end-to-end optimization, and have complex multi-stage pipelines, making them less suitable for real-time deployment on edge devices.
We propose a simple yet effective framework, TransVisDrone, that provides an end-to-end solution with higher computational efficiency.
arXiv Detail & Related papers (2022-10-16T03:05:13Z)
- GNM: A General Navigation Model to Drive Any Robot [67.40225397212717]
A general goal-conditioned model for vision-based navigation can be trained on data obtained from many distinct but structurally similar robots.
We analyze the necessary design decisions for effective data sharing across robots.
We deploy the trained GNM on a range of new robots, including an underactuated quadrotor.
arXiv Detail & Related papers (2022-10-07T07:26:41Z)
- Learning a Single Near-hover Position Controller for Vastly Different Quadcopters [56.37274861303324]
This paper proposes an adaptive near-hover position controller for quadcopters.
It can be deployed to quadcopters of very different mass, size and motor constants.
It also shows rapid adaptation to unknown disturbances during runtime.
arXiv Detail & Related papers (2022-09-19T17:55:05Z)
- Monocular visual autonomous landing system for quadcopter drones using software in the loop [0.696125353550498]
The proposed monocular vision-only approach to landing pad tracking made it possible to effectively implement the system in an F450 quadcopter drone with the standard computational capabilities of an Odroid XU4 embedded processor.
arXiv Detail & Related papers (2021-08-14T21:28:28Z)
- A simple vision-based navigation and control strategy for autonomous drone racing [0.0]
We present a control system that allows a drone to fly autonomously through a series of gates marked with ArUco tags.
A simple and low-cost DJI Tello EDU quad-rotor platform was used.
We have created a Python application that communicates with the drone over WiFi, performs drone positioning based on visual feedback, and generates control commands.
arXiv Detail & Related papers (2021-04-20T08:02:02Z)
- Long-Term Planning with Deep Reinforcement Learning on Autonomous Drones [0.0]
We study a long-term planning scenario that is based on drone racing competitions held in real life.
We conducted this experiment on a framework created for "Game of Drones: Drone Racing Competition" at NeurIPS 2019.
arXiv Detail & Related papers (2020-07-11T06:16:50Z)
- AirSim Drone Racing Lab [56.68291351736057]
AirSim Drone Racing Lab is a simulation framework for enabling machine learning research in drone racing.
Our framework enables generation of racing tracks in multiple photo-realistic environments.
We used our framework to host a simulation based drone racing competition at NeurIPS 2019.
arXiv Detail & Related papers (2020-03-12T08:06:06Z)
- University-1652: A Multi-view Multi-source Benchmark for Drone-based Geo-localization [87.74121935246937]
We introduce a new multi-view benchmark for drone-based geo-localization, named University-1652.
University-1652 contains data from three platforms, i.e., synthetic drones, satellites and ground cameras of 1,652 university buildings around the world.
Experiments show that University-1652 helps the model to learn the viewpoint-invariant features and also has good generalization ability in the real-world scenario.
arXiv Detail & Related papers (2020-02-27T15:24:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.