Lander.AI: Adaptive Landing Behavior Agent for Expertise in 3D Dynamic
Platform Landings
- URL: http://arxiv.org/abs/2403.06572v2
- Date: Tue, 12 Mar 2024 10:46:33 GMT
- Title: Lander.AI: Adaptive Landing Behavior Agent for Expertise in 3D Dynamic
Platform Landings
- Authors: Robinroy Peter, Lavanya Ratnabala, Demetros Aschu, Aleksey Fedoseev,
Dzmitry Tsetserukou
- Abstract summary: This study introduces an advanced Deep Reinforcement Learning (DRL) agent, Lander:AI, designed to navigate and land on platforms in the presence of windy conditions.
Lander:AI is rigorously trained within the gym-pybullet-drone simulation, an environment that mirrors real-world complexities, including wind turbulence.
The experimental results showcased Lander:AI's high-precision landing and its ability to adapt to moving platforms, even under wind-induced disturbances.
- Score: 2.5022287664959446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mastering autonomous drone landing on dynamic platforms presents formidable
challenges due to unpredictable platform velocities and external disturbances caused by
wind, ground effect, and the turbines or propellers of the docking platform. This
study introduces an advanced Deep Reinforcement Learning (DRL) agent,
Lander:AI, designed to navigate and land on platforms in the presence of windy
conditions, thereby enhancing drone autonomy and safety. Lander:AI is
rigorously trained within the gym-pybullet-drone simulation, an environment
that mirrors real-world complexities, including wind turbulence, to ensure the
agent's robustness and adaptability.
The agent's capabilities were empirically validated with Crazyflie 2.1 drones
across various test scenarios, encompassing both simulated environments and
real-world conditions. The experimental results showcased Lander:AI's
high-precision landing and its ability to adapt to moving platforms, even under
wind-induced disturbances. Furthermore, the system performance was benchmarked
against a baseline PID controller augmented with an Extended Kalman Filter,
illustrating significant improvements in landing precision and error recovery.
Lander:AI leverages bio-inspired learning to adapt to external forces much as
birds do, enhancing drone adaptability without prior knowledge of the force magnitudes. This
research not only advances drone landing technologies, essential for inspection
and emergency applications, but also highlights the potential of DRL in
addressing intricate aerodynamic challenges.
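The abstract benchmarks the agent against a PID controller augmented with an Extended Kalman Filter. A minimal 1-D sketch of that baseline structure, with a scalar Kalman filter smoothing noisy platform-relative position readings and a PID loop driving that position to zero, might look like the following. All gains, noise levels, and the point-mass dynamics are illustrative assumptions, not values from the paper.

```python
# Toy 1-D illustration of a filter-plus-PID landing baseline.
# All parameters below are made up for the sketch.
import random


class KalmanFilter1D:
    """Scalar Kalman filter with a constant-position process model."""

    def __init__(self, process_var=1e-3, meas_var=1e-2):
        self.x = 0.0            # state estimate (relative position, m)
        self.p = 1.0            # estimate covariance
        self.q = process_var
        self.r = meas_var

    def update(self, z):
        self.p += self.q                    # predict step
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (z - self.x)          # correct with measurement z
        self.p *= 1.0 - k
        return self.x


class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def command(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv


# Close the loop on a toy point mass starting 1 m from the platform.
random.seed(0)
dt = 0.02
kf = KalmanFilter1D()
pid = PID(kp=2.0, ki=0.1, kd=1.5, dt=dt)
pos, vel = 1.0, 0.0
for _ in range(500):                        # 10 s of simulated flight
    z = pos + random.gauss(0.0, 0.01)       # noisy relative-position reading
    est = kf.update(z)
    accel = pid.command(-est)               # drive relative position to zero
    vel += accel * dt
    pos += vel * dt
print(round(pos, 3))                        # residual landing offset (m)
```

The filter's covariance settles to a small steady-state value, so late in the descent the controller acts on a smoothed estimate rather than the raw noisy reading; the DRL agent in the paper replaces this fixed-gain loop with a learned policy.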
Related papers
- DroneWiS: Automated Simulation Testing of small Unmanned Aerial Systems in Realistic Windy Conditions [8.290044674335473]
DroneWiS allows sUAS developers to automatically simulate realistic windy conditions and test the resilience of sUAS against wind.
Unlike current state-of-the-art simulation tools such as Gazebo and AirSim, DroneWiS leverages Computational Fluid Dynamics (CFD) to compute the unique wind flows.
This simulation capability provides deeper insights to developers about the navigation capability of sUAS in challenging and realistic windy conditions.
arXiv Detail & Related papers (2024-08-29T14:25:11Z)
- AirPilot: Interpretable PPO-based DRL Auto-Tuned Nonlinear PID Drone Controller for Robust Autonomous Flights [1.947822083318316]
AirPilot is a nonlinear Deep Reinforcement Learning (DRL) - enhanced Proportional Integral Derivative (PID) drone controller.
AirPilot controller combines the simplicity and effectiveness of traditional PID control with the adaptability, learning capability, and optimization potential of DRL.
AirPilot reduces the navigation error of the default PX4 PID position controller by 90% and improves the effective navigation speed of a fine-tuned PID controller by 21%.
arXiv Detail & Related papers (2024-03-30T00:46:43Z)
- DATT: Deep Adaptive Trajectory Tracking for Quadrotor Control [62.24301794794304]
Deep Adaptive Trajectory Tracking (DATT) is a learning-based approach that can precisely track arbitrary, potentially infeasible trajectories in the presence of large disturbances in the real world.
DATT significantly outperforms competitive adaptive nonlinear and model predictive controllers for both feasible smooth and infeasible trajectories in unsteady wind fields.
It can efficiently run online with an inference time less than 3.2 ms, less than 1/4 of the adaptive nonlinear model predictive control baseline.
arXiv Detail & Related papers (2023-10-13T12:22:31Z)
- Urban Drone Navigation: Autoencoder Learning Fusion for Aerodynamics [2.868732757372218]
This paper presents a method that combines multi-objective reinforcement learning (MORL) with a convolutional autoencoder to improve drone navigation in urban SAR.
The approach uses MORL to achieve multiple goals and the autoencoder for cost-effective wind simulations.
Tested on a New York City model, this method enhances drone SAR operations in complex urban settings.
arXiv Detail & Related papers (2023-10-13T02:57:35Z)
- Transfusor: Transformer Diffusor for Controllable Human-like Generation of Vehicle Lane Changing Trajectories [0.3314882635954752]
The virtual simulation test (VST) has become a prominent approach for testing autonomous driving systems (ADS) and advanced driver assistance systems (ADAS).
More flexible and high-fidelity testing scenarios are needed in VST to increase the safety and reliability of ADS and ADAS.
This paper introduces the "Transfusor" model, which leverages the transformer and diffusor models, two cutting-edge deep learning generative technologies.
arXiv Detail & Related papers (2023-08-28T23:50:36Z)
- TransVisDrone: Spatio-Temporal Transformer for Vision-based Drone-to-Drone Detection in Aerial Videos [57.92385818430939]
Drone-to-drone detection using visual feed has crucial applications, such as detecting drone collisions, detecting drone attacks, or coordinating flight with other drones.
Existing methods are computationally costly, follow non-end-to-end optimization, and have complex multi-stage pipelines, making them less suitable for real-time deployment on edge devices.
We propose a simple yet effective framework, TransVisDrone, which provides an end-to-end solution with higher computational efficiency.
arXiv Detail & Related papers (2022-10-16T03:05:13Z)
- Inverted Landing in a Small Aerial Robot via Deep Reinforcement Learning for Triggering and Control of Rotational Maneuvers [11.29285364660789]
Inverted landing in a rapid and robust manner is a challenging feat for aerial robots, especially while depending entirely on onboard sensing and computation.
Previous work has identified a direct causal connection between a series of onboard visual cues and kinematic actions that allow for reliable execution of this challenging aerobatic maneuver in small aerial robots.
In this work, we first utilized Deep Reinforcement Learning and a physics-based simulation to obtain a general, optimal control policy for robust inverted landing.
arXiv Detail & Related papers (2022-09-22T14:38:10Z)
- Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds [96.74836678572582]
We present a learning-based approach that allows rapid online adaptation by incorporating pretrained representations through deep learning.
Neural-Fly achieves precise flight control with substantially smaller tracking error than state-of-the-art nonlinear and adaptive controllers.
arXiv Detail & Related papers (2022-05-13T21:55:28Z)
- Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully-autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located in poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z)
- Learning High-Speed Flight in the Wild [101.33104268902208]
We propose an end-to-end approach that can autonomously fly quadrotors through complex natural and man-made environments at high speeds.
The key principle is to directly map noisy sensory observations to collision-free trajectories in a receding-horizon fashion.
By simulating realistic sensor noise, our approach achieves zero-shot transfer from simulation to challenging real-world environments.
arXiv Detail & Related papers (2021-10-11T09:43:11Z)
- Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads [69.21503033239985]
Transporting suspended payloads is challenging for autonomous aerial vehicles.
We propose a meta-learning approach that "learns how to learn" models of altered dynamics within seconds of post-connection flight data.
arXiv Detail & Related papers (2020-04-23T17:43:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.