Learning Perception-Aware Agile Flight in Cluttered Environments
- URL: http://arxiv.org/abs/2210.01841v1
- Date: Tue, 4 Oct 2022 18:18:58 GMT
- Title: Learning Perception-Aware Agile Flight in Cluttered Environments
- Authors: Yunlong Song, Kexin Shi, Robert Penicka, and Davide Scaramuzza
- Abstract summary: We propose a method to learn neural network policies that achieve perception-aware, minimum-time flight in cluttered environments.
Our approach tightly couples perception and control, showing a significant advantage in computation speed (10x faster) and success rate.
We demonstrate the closed-loop control performance using a physical quadrotor and hardware-in-the-loop simulation at speeds up to 50 km/h.
- Score: 38.59659342532348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, neural control policies have outperformed existing model-based
planning-and-control methods for autonomously navigating quadrotors through
cluttered environments in minimum time. However, they are not perception aware,
a crucial requirement in vision-based navigation due to the camera's limited
field of view and the underactuated nature of a quadrotor. We propose a method
to learn neural network policies that achieve perception-aware, minimum-time
flight in cluttered environments. Our method combines imitation learning and
reinforcement learning (RL) by leveraging a privileged learning-by-cheating
framework. Using RL, we first train a perception-aware teacher policy with
full-state information to fly in minimum time through cluttered environments.
Then, we use imitation learning to distill its knowledge into a vision-based
student policy that only perceives the environment via a camera. Our approach
tightly couples perception and control, showing a significant advantage in
computation speed (10x faster) and success rate. We demonstrate the closed-loop
control performance using a physical quadrotor and hardware-in-the-loop
simulation at speeds up to 50 km/h.
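The two-stage recipe in the abstract (an RL-trained teacher with privileged full-state access, distilled into a vision-based student by imitation) can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of that distillation loop: the environment, the linear policies, the lossy `camera_obs` observation model, and the behavior-cloning regression are illustrative stand-ins, not the authors' implementation (which uses RL on a minimum-time objective and a real camera pipeline).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: the "full state" is a 6-D vector (e.g. position + velocity);
# the "camera observation" is a noisy, lossy 3-D projection of it.
STATE_DIM, OBS_DIM, ACT_DIM = 6, 3, 2

def camera_obs(state):
    """Lossy observation model: the student only sees part of the state."""
    visible = state[:OBS_DIM]  # e.g. only what falls in the field of view
    return visible + 0.01 * rng.standard_normal(OBS_DIM)

# Phase 1 (stand-in): a teacher policy trained with privileged full-state
# information. Here it is just a fixed linear map; in the paper it would be
# obtained with RL on a minimum-time flight objective.
W_teacher = rng.standard_normal((ACT_DIM, STATE_DIM))

def teacher_policy(state):
    return W_teacher @ state

# Phase 2: distill the teacher into a vision-based student by behavior
# cloning -- regress the teacher's actions from camera observations only.
W_student = np.zeros((ACT_DIM, OBS_DIM))
lr = 0.05
for _ in range(2000):
    state = rng.standard_normal(STATE_DIM)   # sampled flight state
    obs = camera_obs(state)                  # what the student sees
    target = teacher_policy(state)           # privileged teacher action
    err = W_student @ obs - target
    W_student -= lr * np.outer(err, obs)     # gradient step on 0.5*||err||^2

# The student now imitates the teacher from observations alone; the part of
# the state it cannot observe remains as irreducible imitation error.
state = rng.standard_normal(STATE_DIM)
student_action = W_student @ camera_obs(state)
```

The design point this illustrates is why the privileged ("learning-by-cheating") split helps: RL only has to solve the hard control problem once, with full state, and the perception-constrained student is then trained with cheap supervised regression rather than sparse rewards.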
Related papers
- Bootstrapping Reinforcement Learning with Imitation for Vision-Based Agile Flight [20.92646531472541]
We propose a novel approach that combines the performance of Reinforcement Learning (RL) and the sample efficiency of Imitation Learning (IL).
Our framework contains three phases: training a teacher policy using RL with privileged state information, distilling it into a student policy via IL, and adaptive fine-tuning via RL.
Tests show that our approach not only learns in scenarios where RL from scratch fails but also outperforms existing IL methods in both robustness and performance.
arXiv Detail & Related papers (2024-03-18T19:25:57Z) - Learning Speed Adaptation for Flight in Clutter [3.8876619768726157]
Animals learn to adapt the speed of their movements to their capabilities and the environment they observe.
Mobile robots should likewise trade off aggressiveness against safety to accomplish tasks efficiently.
This work endows flight vehicles with the ability to adapt their speed in previously unknown, partially observable cluttered environments.
arXiv Detail & Related papers (2024-03-07T15:30:54Z) - FastRLAP: A System for Learning High-Speed Driving via Deep RL and
Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human intervention and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion, and over the course of training they approach the performance of a human driver using a similar first-person interface.
arXiv Detail & Related papers (2023-04-19T17:33:47Z) - Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone
Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z) - A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free
Reinforcement Learning [86.06110576808824]
Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments.
Recent advancements in machine learning algorithms and libraries, combined with a carefully tuned robot controller, enable a quadruped robot to learn to walk in only 20 minutes in the real world.
arXiv Detail & Related papers (2022-08-16T17:37:36Z) - Learning High-Speed Flight in the Wild [101.33104268902208]
We propose an end-to-end approach that can autonomously fly quadrotors through complex natural and man-made environments at high speeds.
The key principle is to directly map noisy sensory observations to collision-free trajectories in a receding-horizon fashion.
By simulating realistic sensor noise, our approach achieves zero-shot transfer from simulation to challenging real-world environments.
arXiv Detail & Related papers (2021-10-11T09:43:11Z) - Learning a State Representation and Navigation in Cluttered and Dynamic
Environments [6.909283975004628]
We present a learning-based pipeline to realise local navigation with a quadrupedal robot in cluttered environments.
The robot is able to safely locomote to a target location based on frames from a depth camera without any explicit mapping of the environment.
We show that our system can handle noisy depth images, avoid dynamic obstacles unseen during training, and is endowed with local spatial awareness.
arXiv Detail & Related papers (2021-03-07T13:19:06Z) - ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z) - Robot Perception enables Complex Navigation Behavior via Self-Supervised
Learning [23.54696982881734]
We propose an approach to unify successful robot perception systems for active target-driven navigation tasks via reinforcement learning (RL).
Our method temporally incorporates compact motion and visual perception data, directly obtained using self-supervision from a single image sequence.
We demonstrate our approach on two real-world driving datasets, KITTI and Oxford RobotCar, using the new interactive CityLearn framework.
arXiv Detail & Related papers (2020-06-16T07:45:47Z)