Learning High-Speed Flight in the Wild
- URL: http://arxiv.org/abs/2110.05113v1
- Date: Mon, 11 Oct 2021 09:43:11 GMT
- Title: Learning High-Speed Flight in the Wild
- Authors: Antonio Loquercio, Elia Kaufmann, René Ranftl, Matthias Müller,
Vladlen Koltun, Davide Scaramuzza
- Abstract summary: We propose an end-to-end approach that can autonomously fly quadrotors through complex natural and man-made environments at high speeds.
The key principle is to directly map noisy sensory observations to collision-free trajectories in a receding-horizon fashion.
By simulating realistic sensor noise, our approach achieves zero-shot transfer from simulation to challenging real-world environments.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quadrotors are agile. Unlike most other machines, they can traverse extremely
complex environments at high speeds. To date, only expert human pilots have
been able to fully exploit their capabilities. Autonomous operation with
on-board sensing and computation has been limited to low speeds.
State-of-the-art methods generally separate the navigation problem into
subtasks: sensing, mapping, and planning. While this approach has proven
successful at low speeds, the separation it builds upon can be problematic for
high-speed navigation in cluttered environments. Indeed, the subtasks are
executed sequentially, leading to increased processing latency and a
compounding of errors through the pipeline. Here we propose an end-to-end
approach that can autonomously fly quadrotors through complex natural and
man-made environments at high speeds, with purely onboard sensing and
computation. The key principle is to directly map noisy sensory observations to
collision-free trajectories in a receding-horizon fashion. This direct mapping
drastically reduces processing latency and increases robustness to noisy and
incomplete perception. The sensorimotor mapping is performed by a convolutional
network that is trained exclusively in simulation via privileged learning:
imitating an expert with access to privileged information. By simulating
realistic sensor noise, our approach achieves zero-shot transfer from
simulation to challenging real-world environments that were never experienced
during training: dense forests, snow-covered terrain, derailed trains, and
collapsed buildings. Our work demonstrates that end-to-end policies trained in
simulation enable high-speed autonomous flight through challenging
environments, outperforming traditional obstacle avoidance pipelines.
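The core principle of the abstract, directly mapping sensory observations to candidate collision-free trajectories and re-planning at every step, can be illustrated with a minimal receding-horizon loop. This is a hypothetical sketch, not the authors' implementation: the real system runs a convolutional network over depth images, whereas `predict_trajectories` below is a placeholder that emits straight-line candidates.

```python
# Minimal sketch of receding-horizon trajectory selection, assuming a
# hypothetical learned mapping from observations to candidate trajectories.

def predict_trajectories(depth_image, state):
    """Hypothetical stand-in for the learned sensorimotor mapping.

    Returns candidate short-horizon trajectories as (collision_cost,
    waypoints) pairs. A real policy would run a CNN on the depth image;
    here we emit three straight-line candidates for illustration.
    """
    candidates = []
    for lateral in (-1.0, 0.0, 1.0):
        waypoints = [(t * 0.5, lateral * t * 0.1, 0.0) for t in range(1, 6)]
        cost = abs(lateral)  # pretend the center path is cheapest
        candidates.append((cost, waypoints))
    return candidates

def receding_horizon_step(depth_image, state):
    """Pick the lowest-cost candidate trajectory.

    Only the first segment is executed before the loop repeats with fresh
    observations -- this re-planning at every step is what "receding
    horizon" means, and it is what keeps latency low and tolerates noisy,
    incomplete perception.
    """
    cost, best = min(predict_trajectories(depth_image, state),
                     key=lambda c: c[0])
    return best[0]  # first waypoint is handed to the low-level controller
```

Because each iteration discards all but the first waypoint, perception errors in the tail of a trajectory are corrected on the very next step rather than compounding through a sequential sense-map-plan pipeline.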
Related papers
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
arXiv Detail & Related papers (2024-09-12T11:50:06Z)
- Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks [93.38375271826202]
We present a method to improve generalization and robustness to distribution shifts in sim-to-real visual quadrotor navigation tasks.
We first build a simulator by integrating Gaussian splatting with quadrotor flight dynamics, and then, train robust navigation policies using Liquid neural networks.
In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, programming of expert demonstration training data, and the task understanding capabilities of Liquid networks.
arXiv Detail & Related papers (2024-06-21T13:48:37Z)
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z)
- Learning Perception-Aware Agile Flight in Cluttered Environments [38.59659342532348]
We propose a method to learn neural network policies that achieve perception-aware, minimum-time flight in cluttered environments.
Our approach tightly couples perception and control, showing a significant advantage in computation speed (10x faster) and success rate.
We demonstrate the closed-loop control performance using a physical quadrotor and hardware-in-the-loop simulation at speeds up to 50km/h.
arXiv Detail & Related papers (2022-10-04T18:18:58Z)
- Inverted Landing in a Small Aerial Robot via Deep Reinforcement Learning for Triggering and Control of Rotational Maneuvers [11.29285364660789]
Inverted landing in a rapid and robust manner is a challenging feat for aerial robots, especially while depending entirely on onboard sensing and computation.
Previous work has identified a direct causal connection between a series of onboard visual cues and kinematic actions that allow for reliable execution of this challenging aerobatic maneuver in small aerial robots.
In this work, we first utilized Deep Reinforcement Learning and a physics-based simulation to obtain a general, optimal control policy for robust inverted landing.
arXiv Detail & Related papers (2022-09-22T14:38:10Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds [96.74836678572582]
We present a learning-based approach that allows rapid online adaptation by incorporating pretrained representations through deep learning.
Neural-Fly achieves precise flight control with substantially smaller tracking error than state-of-the-art nonlinear and adaptive controllers.
arXiv Detail & Related papers (2022-05-13T21:55:28Z)
- Risk-Aware Off-Road Navigation via a Learned Speed Distribution Map [39.54575497596679]
This work proposes a new representation of traversability based exclusively on robot speed that can be learned from data.
The proposed algorithm learns to predict a distribution of speeds the robot could achieve, conditioned on the environment semantics and commanded speed.
Numerical simulations demonstrate that the proposed risk-aware planning algorithm leads to faster average time-to-goals.
arXiv Detail & Related papers (2022-03-25T03:08:02Z)
- Autonomous Off-road Navigation over Extreme Terrains with Perceptually-challenging Conditions [7.514178230130502]
We propose a framework for resilient autonomous computation in perceptually challenging environments with mobility-stressing elements.
We propose a fast settling algorithm to generate robust multi-fidelity traversability estimates in real-time.
The proposed approach was deployed on multiple physical systems including skid-steer and tracked robots, a high-speed RC car and legged robots.
arXiv Detail & Related papers (2021-01-26T22:13:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.