AirSim Drone Racing Lab
- URL: http://arxiv.org/abs/2003.05654v1
- Date: Thu, 12 Mar 2020 08:06:06 GMT
- Title: AirSim Drone Racing Lab
- Authors: Ratnesh Madaan, Nicholas Gyde, Sai Vemprala, Matthew Brown, Keiko
Nagami, Tim Taubner, Eric Cristofalo, Davide Scaramuzza, Mac Schwager, Ashish
Kapoor
- Abstract summary: AirSim Drone Racing Lab is a simulation framework for enabling machine learning research in this domain.
Our framework enables generation of racing tracks in multiple photo-realistic environments.
We used our framework to host a simulation-based drone racing competition at NeurIPS 2019.
- Score: 56.68291351736057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous drone racing is a challenging research problem at the intersection
of computer vision, planning, state estimation, and control. We introduce
AirSim Drone Racing Lab, a simulation framework for enabling fast prototyping
of algorithms for autonomy and enabling machine learning research in this
domain, with the goal of reducing the time, money, and risks associated with
field robotics. Our framework enables generation of racing tracks in multiple
photo-realistic environments and orchestration of drone races; it comes with a
suite of gate assets, supports multiple sensor modalities (monocular, depth,
neuromorphic events, optical flow) and different camera models, and allows
benchmarking of planning, control, computer vision, and learning-based
algorithms. We used
our framework to host a simulation-based drone racing competition at NeurIPS
2019. The competition binaries are available in our GitHub repository.
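The track-generation capability described above can be sketched in miniature. The following is a minimal, self-contained illustration (not the actual ADRL API; the function and parameter names are hypothetical) of placing randomized gate poses along a loop, the kind of track specification a racing simulator could consume:

```python
import math
import random

def generate_loop_track(n_gates=8, radius=20.0, jitter=2.0, seed=0):
    """Place n_gates around a circular loop with random perturbations.

    Returns a list of (x, y, z, yaw) gate poses; yaw points along the
    direction of travel so a drone flies through each gate face-on.
    Hypothetical sketch, not part of the ADRL framework itself.
    """
    rng = random.Random(seed)  # seeded for reproducible track layouts
    gates = []
    for i in range(n_gates):
        theta = 2.0 * math.pi * i / n_gates
        x = radius * math.cos(theta) + rng.uniform(-jitter, jitter)
        y = radius * math.sin(theta) + rng.uniform(-jitter, jitter)
        z = -5.0 + rng.uniform(-1.0, 1.0)  # NED convention: negative z is up
        yaw = theta + math.pi / 2.0        # tangent to the loop
        gates.append((x, y, z, yaw))
    return gates
```

Gate yaw is set tangent to the loop so each gate faces the direction of travel; in an actual simulator, poses like these would be handed to the environment's gate-spawning calls.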
Related papers
- Learning Generalizable Policy for Obstacle-Aware Autonomous Drone Racing [0.0]
This study addresses the challenge of developing a generalizable obstacle-aware drone racing policy.
We propose applying domain randomization on racing tracks and obstacle configurations before every rollout.
The proposed randomization strategy is shown to be effective through simulated experiments where drones reach speeds of up to 70 km/h.
arXiv Detail & Related papers (2024-11-06T20:25:43Z)
- Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z)
- A Deep Reinforcement Learning Strategy for UAV Autonomous Landing on a Platform [0.0]
We propose a reinforcement learning framework (ROS-RL) based on Gazebo, a physics simulation platform.
We apply three continuous-action-space reinforcement learning algorithms within the framework to deal with the problem of autonomous drone landing.
arXiv Detail & Related papers (2022-09-07T06:33:57Z)
- A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning [86.06110576808824]
Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments.
Recent advancements in machine learning algorithms and libraries, combined with a carefully tuned robot controller, enable a quadruped to learn to walk in only 20 minutes in the real world.
arXiv Detail & Related papers (2022-08-16T17:37:36Z)
- Indy Autonomous Challenge -- Autonomous Race Cars at the Handling Limits [81.22616193933021]
The team TUM Autonomous Motorsports will participate in the Indy Autonomous Challenge in October 2021.
It will benchmark its self-driving software-stack by racing one out of ten autonomous Dallara AV-21 racecars at the Indianapolis Motor Speedway.
It is an ideal testing ground for the development of autonomous driving algorithms capable of mastering the most challenging and rare situations.
arXiv Detail & Related papers (2022-02-08T11:55:05Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- Learn-to-Race: A Multimodal Control Environment for Autonomous Racing [23.798765519590734]
We introduce a new environment, where agents Learn-to-Race (L2R) in simulated Formula-E style racing.
Our environment, which includes a simulator and an interfacing training framework, accurately models vehicle dynamics and racing conditions.
Next, we propose the L2R task with challenging metrics, inspired by learning-to-drive challenges, Formula-E racing, and multimodal trajectory prediction for autonomous driving.
arXiv Detail & Related papers (2021-03-22T04:03:06Z)
- NeBula: Quest for Robotic Autonomy in Challenging Environments; TEAM CoSTAR at the DARPA Subterranean Challenge [105.27989489105865]
This paper presents and discusses the algorithms, hardware, and software architecture developed by TEAM CoSTAR.
The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy).
arXiv Detail & Related papers (2021-03-21T19:42:26Z)
- Long-Term Planning with Deep Reinforcement Learning on Autonomous Drones [0.0]
We study a long-term planning scenario that is based on drone racing competitions held in real life.
We conducted this experiment on a framework created for "Game of Drones: Drone Racing Competition" at NeurIPS 2019.
arXiv Detail & Related papers (2020-07-11T06:16:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.