Learning Generalizable Policy for Obstacle-Aware Autonomous Drone Racing
- URL: http://arxiv.org/abs/2411.04246v1
- Date: Wed, 06 Nov 2024 20:25:43 GMT
- Title: Learning Generalizable Policy for Obstacle-Aware Autonomous Drone Racing
- Authors: Yueqian Liu,
- Abstract summary: This study addresses the challenge of developing a generalizable obstacle-aware drone racing policy.
We propose applying domain randomization on racing tracks and obstacle configurations before every rollout.
The proposed randomization strategy is shown to be effective through simulated experiments where drones reach speeds of up to 70 km/h.
- Abstract: Autonomous drone racing has gained attention for its potential to push the boundaries of drone navigation technologies. While much of the existing research focuses on racing in obstacle-free environments, few studies have addressed the complexities of obstacle-aware racing, and approaches presented in these studies often suffer from overfitting, with learned policies generalizing poorly to new environments. This work addresses the challenge of developing a generalizable obstacle-aware drone racing policy using deep reinforcement learning. We propose applying domain randomization on racing tracks and obstacle configurations before every rollout, combined with parallel experience collection in randomized environments to achieve the goal. The proposed randomization strategy is shown to be effective through simulated experiments where drones reach speeds of up to 70 km/h, racing in unseen cluttered environments. This study serves as a stepping stone toward learning robust policies for obstacle-aware drone racing and general-purpose drone navigation in cluttered environments. Code is available at https://github.com/ErcBunny/IsaacGymEnvs.
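The core idea of the abstract, re-randomizing the track and obstacle layout before every rollout and collecting experience across many environments, can be sketched as below. This is an illustrative sketch only: `randomize_track`, the sampling ranges, and the serial loop standing in for parallel collection are assumptions, not the paper's actual IsaacGymEnvs implementation.

```python
import random

def randomize_track(rng, num_gates=5):
    """Sample a fresh gate layout and obstacle set (hypothetical helper).

    Gates: (x, y, z) positions; obstacles: (x, y, radius)."""
    gates = [(rng.uniform(-10, 10), rng.uniform(-10, 10), rng.uniform(1, 3))
             for _ in range(num_gates)]
    obstacles = [(rng.uniform(-10, 10), rng.uniform(-10, 10), rng.uniform(0.2, 1.0))
                 for _ in range(rng.randint(3, 8))]
    return gates, obstacles

def collect_rollouts(num_envs=4, seed=0):
    """Re-randomize every environment before each rollout, then gather experience.

    A real setup would step all environments in parallel on the GPU; a serial
    loop keeps the sketch self-contained."""
    experiences = []
    for env_id in range(num_envs):
        rng = random.Random(seed + env_id)
        gates, obstacles = randomize_track(rng)
        # ...run the policy in the randomized environment and append transitions...
        experiences.append({"env": env_id, "gates": gates, "obstacles": obstacles})
    return experiences
```

Because the policy never sees the same track twice, it cannot overfit to a fixed gate sequence, which is the failure mode the abstract attributes to prior obstacle-aware racing work.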
Related papers
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z) - Continual Learning for Robust Gate Detection under Dynamic Lighting in Autonomous Drone Racing [27.18598697503772]
This study introduces a perception technique for detecting drone racing gates under illumination variations.
The proposed technique relies upon a lightweight neural network backbone augmented with capabilities for continual learning.
arXiv Detail & Related papers (2024-05-02T07:21:12Z) - WROOM: An Autonomous Driving Approach for Off-Road Navigation [17.74237088460657]
We design an end-to-end reinforcement learning (RL) system for an autonomous vehicle in off-road environments.
We warm-start the agent by imitating a rule-based controller and utilize Proximal Policy Optimization (PPO) to improve the policy.
We propose a novel simulation environment to replicate off-road driving scenarios and deploy our proposed approach on a real buggy RC car.
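The warm-start step described above, imitating a rule-based controller before handing the policy to PPO, amounts to behavior cloning on the controller's outputs. A minimal sketch follows, assuming a toy 1-D observation and a linear policy; the controller gain and learning rate are illustrative, not taken from the WROOM paper.

```python
import random

def rule_based_controller(obs):
    """Toy stand-in for the hand-written rule: proportional steering correction."""
    return -0.5 * obs

def behavior_clone(num_steps=1000, lr=0.1, seed=0):
    """Fit a linear policy w * obs to imitate the controller (the warm start).

    Gradient descent on the squared imitation error; the fitted weight then
    initializes the policy that PPO continues to improve."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(num_steps):
        obs = rng.uniform(-1.0, 1.0)
        target = rule_based_controller(obs)
        pred = w * obs
        # d/dw (pred - target)^2 = 2 * (pred - target) * obs
        w -= lr * 2.0 * (pred - target) * obs
    return w
```

Starting PPO from a cloned policy avoids the long random-exploration phase in which an off-road vehicle would mostly crash, which is presumably why the authors warm-start rather than train from scratch.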
arXiv Detail & Related papers (2024-04-12T23:55:59Z) - A Dual Curriculum Learning Framework for Multi-UAV Pursuit-Evasion in Diverse Environments [15.959963737956848]
This paper addresses multi-UAV pursuit-evasion, where a group of drones cooperate to capture a fast evader in a confined environment with obstacles.
Existing algorithms, which simplify the pursuit-evasion problem, often lack expressive coordination strategies and struggle to capture the evader in extreme scenarios.
We introduce a dual curriculum learning framework, named DualCL, which addresses multi-UAV pursuit-evasion in diverse environments and demonstrates zero-shot transfer ability to unseen scenarios.
arXiv Detail & Related papers (2023-12-19T15:39:09Z) - Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles vision-based autonomous drone racing by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z) - A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning [86.06110576808824]
Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments.
Recent advancements in machine learning algorithms and libraries, combined with a carefully tuned robot controller, enable a quadruped to learn to walk in only 20 minutes in the real world.
arXiv Detail & Related papers (2022-08-16T17:37:36Z) - Learning High-Speed Flight in the Wild [101.33104268902208]
We propose an end-to-end approach that can autonomously fly quadrotors through complex natural and man-made environments at high speeds.
The key principle is to directly map noisy sensory observations to collision-free trajectories in a receding-horizon fashion.
By simulating realistic sensor noise, our approach achieves zero-shot transfer from simulation to challenging real-world environments.
arXiv Detail & Related papers (2021-10-11T09:43:11Z) - Vision-Based Mobile Robotics Obstacle Avoidance With Deep Reinforcement Learning [49.04274612323564]
Obstacle avoidance is a fundamental and challenging problem for autonomous navigation of mobile robots.
In this paper, we consider the problem of obstacle avoidance in simple 3D environments where the robot has to solely rely on a single monocular camera.
We tackle obstacle avoidance with a data-driven, end-to-end deep learning approach.
arXiv Detail & Related papers (2021-03-08T13:05:46Z) - AirSim Drone Racing Lab [56.68291351736057]
AirSim Drone Racing Lab is a simulation framework for enabling machine learning research in drone racing.
Our framework enables generation of racing tracks in multiple photo-realistic environments.
We used our framework to host a simulation-based drone racing competition at NeurIPS 2019.
arXiv Detail & Related papers (2020-03-12T08:06:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.