Lander.AI: Adaptive Landing Behavior Agent for Expertise in 3D Dynamic
Platform Landings
- URL: http://arxiv.org/abs/2403.06572v2
- Date: Tue, 12 Mar 2024 10:46:33 GMT
- Title: Lander.AI: Adaptive Landing Behavior Agent for Expertise in 3D Dynamic
Platform Landings
- Authors: Robinroy Peter, Lavanya Ratnabala, Demetros Aschu, Aleksey Fedoseev,
Dzmitry Tsetserukou
- Abstract summary: This study introduces an advanced Deep Reinforcement Learning (DRL) agent, Lander.AI, designed to navigate and land on platforms under windy conditions.
Lander.AI is rigorously trained within the gym-pybullet-drones simulation, an environment that mirrors real-world complexities, including wind turbulence.
The experimental results showcased Lander.AI's high-precision landing and its ability to adapt to moving platforms, even under wind-induced disturbances.
- Score: 2.5022287664959446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mastering autonomous drone landing on dynamic platforms presents formidable
challenges due to unpredictable velocities and external disturbances caused by
the wind, ground effect, turbines or propellers of the docking platform. This
study introduces an advanced Deep Reinforcement Learning (DRL) agent,
Lander.AI, designed to navigate and land on platforms under windy
conditions, thereby enhancing drone autonomy and safety. Lander.AI is
rigorously trained within the gym-pybullet-drones simulation, an environment
that mirrors real-world complexities, including wind turbulence, to ensure the
agent's robustness and adaptability.
The agent's capabilities were empirically validated with Crazyflie 2.1 drones
across various test scenarios, encompassing both simulated environments and
real-world conditions. The experimental results showcased Lander.AI's
high-precision landing and its ability to adapt to moving platforms, even under
wind-induced disturbances. Furthermore, the system performance was benchmarked
against a baseline PID controller augmented with an Extended Kalman Filter,
illustrating significant improvements in landing precision and error recovery.
Lander.AI leverages bio-inspired learning to adapt to external forces much as
birds do, enhancing drone adaptability without requiring knowledge of the force
magnitudes. This research not only advances drone landing technologies, essential
for inspection and emergency applications, but also highlights the potential of
DRL in addressing intricate aerodynamic challenges.
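The abstract names the gym-pybullet-drones simulator as the training environment but gives no implementation detail. The following is a minimal sketch of how a landing-style policy could be trained in that simulator with stable-baselines3 PPO; the `HoverAviary` stand-in environment, its import path, the hyperparameters, and the Gymnasium-style API calls are assumptions for illustration only, and the paper's moving-platform, wind-turbulence, and reward setup is not reproduced.

```python
# Minimal sketch: training a policy in gym-pybullet-drones with PPO.
# Assumptions: a Gymnasium-compatible Aviary environment (HoverAviary used as a
# stand-in); the paper's custom landing/wind environment, reward shaping, and
# hyperparameters are NOT reproduced here.
from stable_baselines3 import PPO
from gym_pybullet_drones.envs.HoverAviary import HoverAviary  # assumed import path

def make_env():
    # Single-drone environment; the paper adds a moving platform and wind
    # turbulence, which would require a custom Aviary subclass (not shown).
    return HoverAviary()

if __name__ == "__main__":
    env = make_env()

    # PPO with a plain MLP policy; timestep budget is a placeholder.
    model = PPO("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=1_000_000)
    model.save("lander_policy_sketch")

    # Quick rollout to sanity-check the trained policy.
    obs, _ = env.reset()
    for _ in range(500):
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = env.step(action)
        if terminated or truncated:
            obs, _ = env.reset()
    env.close()
```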
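The baseline is described only as a PID controller augmented with an Extended Kalman Filter. Below is a hedged, single-axis sketch of how such a baseline is commonly structured: a constant-velocity EKF (which reduces to a linear Kalman filter here) estimates the platform's position and velocity from noisy measurements, and a PID loop steers the drone toward the filtered position. The gains, noise covariances, and 1-D simplification are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of a PID-on-EKF baseline for tracking a moving landing platform (1-D).
# Process model, covariances, and PID gains are illustrative assumptions.
import numpy as np

class PlatformEKF:
    """Filter with state [position, velocity] and a constant-velocity model."""
    def __init__(self, dt, q=0.5, r=0.05):
        self.x = np.zeros(2)                         # state estimate
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # transition (linear here)
        self.Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                               [dt**2 / 2, dt]])     # process noise
        self.H = np.array([[1.0, 0.0]])              # only position is measured
        self.R = np.array([[r]])                     # measurement noise

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with position measurement z
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x                                # filtered [position, velocity]

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i, self.prev_e = 0.0, 0.0

    def __call__(self, error):
        self.i += error * self.dt
        d = (error - self.prev_e) / self.dt
        self.prev_e = error
        return self.kp * error + self.ki * self.i + self.kd * d

if __name__ == "__main__":
    dt = 0.02
    ekf, pid = PlatformEKF(dt), PID(kp=2.0, ki=0.1, kd=0.4, dt=dt)
    drone_x, platform_x = 0.0, 1.0
    rng = np.random.default_rng(0)
    for _ in range(500):
        platform_x += 0.3 * dt                       # platform drifts at 0.3 m/s
        z = platform_x + rng.normal(0.0, 0.05)       # noisy position measurement
        est_pos, est_vel = ekf.step(np.array([z]))
        # PID tracks the filtered platform position (velocity feed-forward omitted);
        # the PID output is applied directly as a velocity command.
        drone_x += pid(est_pos - drone_x) * dt
    print(f"final tracking error: {abs(platform_x - drone_x):.3f} m")
```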
Related papers
- Motion Control in Multi-Rotor Aerial Robots Using Deep Reinforcement Learning [0.0]
This paper investigates the application of Deep Reinforcement Learning (DRL) to address motion control challenges in drones for additive manufacturing (AM).
We propose a DRL framework that learns adaptable control policies for multi-rotor drones performing waypoint navigation in AM tasks.
arXiv Detail & Related papers (2025-02-09T19:00:16Z)
- A Cross-Scene Benchmark for Open-World Drone Active Tracking [54.235808061746525]
Drone Visual Active Tracking aims to autonomously follow a target object by controlling the motion system based on visual observations.
We propose a unified cross-scene cross-domain benchmark for open-world drone active tracking called DAT.
We also propose a reinforcement learning-based drone tracking method called R-VAT.
arXiv Detail & Related papers (2024-12-01T09:37:46Z)
- DroneWiS: Automated Simulation Testing of small Unmanned Aerial Systems in Realistic Windy Conditions [8.290044674335473]
DroneWiS allows sUAS developers to automatically simulate realistic windy conditions and test the resilience of sUAS against wind.
Unlike current state-of-the-art simulation tools such as Gazebo and AirSim, DroneWiS leverages Computational Fluid Dynamics (CFD) to compute the unique wind flows.
This simulation capability provides deeper insights to developers about the navigation capability of sUAS in challenging and realistic windy conditions.
arXiv Detail & Related papers (2024-08-29T14:25:11Z)
- DATT: Deep Adaptive Trajectory Tracking for Quadrotor Control [62.24301794794304]
Deep Adaptive Trajectory Tracking (DATT) is a learning-based approach that can precisely track arbitrary, potentially infeasible trajectories in the presence of large disturbances in the real world.
DATT significantly outperforms competitive adaptive nonlinear and model predictive controllers for both feasible smooth and infeasible trajectories in unsteady wind fields.
It runs efficiently online, with an inference time under 3.2 ms, less than one quarter that of the adaptive nonlinear model predictive control baseline.
arXiv Detail & Related papers (2023-10-13T12:22:31Z)
- Urban Drone Navigation: Autoencoder Learning Fusion for Aerodynamics [2.868732757372218]
This paper presents a method that combines multi-objective reinforcement learning (MORL) with a convolutional autoencoder to improve drone navigation in urban SAR.
The approach uses MORL to achieve multiple goals and the autoencoder for cost-effective wind simulations.
Tested on a New York City model, this method enhances drone SAR operations in complex urban settings.
arXiv Detail & Related papers (2023-10-13T02:57:35Z)
- Transfusor: Transformer Diffusor for Controllable Human-like Generation of Vehicle Lane Changing Trajectories [0.3314882635954752]
The virtual simulation test (VST) has become a prominent approach for testing autonomous driving systems (ADS) and advanced driver assistance systems (ADAS).
More flexible and high-fidelity testing scenarios are needed in VST to increase the safety and reliability of ADS and ADAS.
This paper introduces the "Transfusor" model, which leverages the transformer and diffusor models (two cutting-edge deep learning generative technologies).
arXiv Detail & Related papers (2023-08-28T23:50:36Z)
- Inverted Landing in a Small Aerial Robot via Deep Reinforcement Learning for Triggering and Control of Rotational Maneuvers [11.29285364660789]
Inverted landing in a rapid and robust manner is a challenging feat for aerial robots, especially while depending entirely on onboard sensing and computation.
Previous work has identified a direct causal connection between a series of onboard visual cues and kinematic actions that allow for reliable execution of this challenging aerobatic maneuver in small aerial robots.
In this work, we first utilized Deep Reinforcement Learning and a physics-based simulation to obtain a general, optimal control policy for robust inverted landing.
arXiv Detail & Related papers (2022-09-22T14:38:10Z)
- Neural-Fly Enables Rapid Learning for Agile Flight in Strong Winds [96.74836678572582]
We present a learning-based approach that allows rapid online adaptation by incorporating pretrained representations through deep learning.
Neural-Fly achieves precise flight control with substantially smaller tracking error than state-of-the-art nonlinear and adaptive controllers.
arXiv Detail & Related papers (2022-05-13T21:55:28Z)
- Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully-autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located on poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z)
- Learning High-Speed Flight in the Wild [101.33104268902208]
We propose an end-to-end approach that can autonomously fly quadrotors through complex natural and man-made environments at high speeds.
The key principle is to directly map noisy sensory observations to collision-free trajectories in a receding-horizon fashion.
By simulating realistic sensor noise, our approach achieves zero-shot transfer from simulation to challenging real-world environments.
arXiv Detail & Related papers (2021-10-11T09:43:11Z)
- Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads [69.21503033239985]
Transporting suspended payloads is challenging for autonomous aerial vehicles.
We propose a meta-learning approach that "learns how to learn" models of altered dynamics within seconds of post-connection flight data.
arXiv Detail & Related papers (2020-04-23T17:43:56Z)