Monocular visual autonomous landing system for quadcopter drones using
software in the loop
- URL: http://arxiv.org/abs/2108.06616v1
- Date: Sat, 14 Aug 2021 21:28:28 GMT
- Title: Monocular visual autonomous landing system for quadcopter drones using
software in the loop
- Authors: Miguel Saavedra-Ruiz, Ana Maria Pinto-Vargas, Victor Romero-Cano
- Abstract summary: The proposed monocular vision-only approach to landing pad tracking made it possible to effectively implement the system in an F450 quadcopter drone with the standard computational capabilities of an Odroid XU4 embedded processor.
- Score: 0.696125353550498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous landing is a capability that is essential to achieve the full
potential of multi-rotor drones in many social and industrial applications. The
implementation and testing of this capability on physical platforms is risky
and resource-intensive; hence, in order to ensure both a sound design process
and a safe deployment, simulations are required before implementing a physical
prototype. This paper presents the development of a monocular visual system,
using a software-in-the-loop methodology, that autonomously and efficiently
lands a quadcopter drone on a predefined landing pad, thus reducing the risks
of the physical testing stage. In addition to ensuring that the autonomous
landing system as a whole fulfils the design requirements using a Gazebo-based
simulation, our approach provides a tool for safe parameter tuning and design
testing prior to physical implementation. Finally, the proposed monocular
vision-only approach to landing pad tracking made it possible to effectively
implement the system in an F450 quadcopter drone with the standard
computational capabilities of an Odroid XU4 embedded processor.
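To make the abstract's idea concrete, the sketch below shows one common way a monocular pad-tracking result can be turned into velocity setpoints: a proportional controller that centres the drone over the detected pad in the image and descends only once it is roughly centred. This is an illustrative, stdlib-only sketch under stated assumptions, not the paper's implementation; all names, gains, and thresholds (`landing_command`, `kp`, `center_tol`) are hypothetical.

```python
# Illustrative sketch (not the paper's code): map a tracked landing-pad
# pixel position to body-frame velocity commands with a P controller.
from dataclasses import dataclass

@dataclass
class LandingCommand:
    vx: float  # forward velocity (m/s)
    vy: float  # lateral velocity (m/s)
    vz: float  # vertical velocity (m/s); negative means descend

def landing_command(pad_px, pad_py, img_w=640, img_h=480,
                    kp=0.002, descend_rate=-0.3, center_tol=0.05):
    """Convert the pad's pixel coordinates into velocity setpoints.

    pad_px, pad_py: detected pad centre in pixels (e.g. from a marker
    detector running on the onboard camera feed).
    kp: proportional gain from pixel error to m/s (illustrative value).
    center_tol: normalised error below which descent is allowed.
    """
    # Pixel error relative to the image centre, normalised to [-1, 1].
    ex = (pad_px - img_w / 2) / (img_w / 2)
    ey = (pad_py - img_h / 2) / (img_h / 2)

    # Horizontal correction proportional to the raw pixel error.
    # Image "down" (+y) maps to body "backward" (-x) for a nadir camera.
    vx = -kp * (pad_py - img_h / 2)
    vy = kp * (pad_px - img_w / 2)

    # Descend only while the pad sits near the image centre.
    vz = descend_rate if abs(ex) < center_tol and abs(ey) < center_tol else 0.0
    return LandingCommand(vx, vy, vz)
```

With the pad exactly at the image centre (`landing_command(320, 240)`), the horizontal commands are zero and the drone descends; an off-centre detection suppresses descent and commands a lateral correction first. A software-in-the-loop setup such as the Gazebo pipeline described above is precisely where gains like `kp` would be tuned before flying the physical F450.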
Related papers
- A Safer Vision-based Autonomous Planning System for Quadrotor UAVs with Dynamic Obstacle Trajectory Prediction and Its Application with LLMs [6.747468447244154]
This paper proposes a vision-based planning system that combines tracking and trajectory prediction of dynamic obstacles to achieve efficient and reliable autonomous flight.
We conduct experiments in both simulation and real-world environments, and the results indicate that our approach can successfully detect and avoid obstacles in dynamic environments in real-time.
arXiv Detail & Related papers (2023-11-21T08:09:00Z)
- OmniDrones: An Efficient and Flexible Platform for Reinforcement Learning in Drone Control [16.570253723823996]
We introduce OmniDrones, an efficient and flexible platform tailored for reinforcement learning in drone control.
It employs a bottom-up design approach that allows users to easily design and experiment with various application scenarios.
It also offers a range of benchmark tasks, presenting challenges ranging from single-drone hovering to over-actuated system tracking.
arXiv Detail & Related papers (2023-09-22T12:26:36Z)
- Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z)
- TransVisDrone: Spatio-Temporal Transformer for Vision-based Drone-to-Drone Detection in Aerial Videos [57.92385818430939]
Drone-to-drone detection using visual feed has crucial applications, such as detecting drone collisions, detecting drone attacks, or coordinating flight with other drones.
Existing methods are computationally costly, follow non-end-to-end optimization, and have complex multi-stage pipelines, making them less suitable for real-time deployment on edge devices.
We propose a simple yet effective framework, TransVisDrone, that provides an end-to-end solution with higher computational efficiency.
arXiv Detail & Related papers (2022-10-16T03:05:13Z)
- Towards a Fully Autonomous UAV Controller for Moving Platform Detection and Landing [2.7909470193274593]
We present an autonomous UAV landing system for landing on a moving platform.
The proposed system relies only on the camera sensor and has been designed to be as lightweight as possible.
The system was evaluated over 40 landing attempts, achieving an average deviation of 15 cm from the center of the target.
arXiv Detail & Related papers (2022-09-30T09:16:04Z)
- Learning a Single Near-hover Position Controller for Vastly Different Quadcopters [56.37274861303324]
This paper proposes an adaptive near-hover position controller for quadcopters.
It can be deployed to quadcopters of very different mass, size and motor constants.
It also shows rapid adaptation to unknown disturbances during runtime.
arXiv Detail & Related papers (2022-09-19T17:55:05Z)
- Visual-based Safe Landing for UAVs in Populated Areas: Real-time Validation in Virtual Environments [0.0]
We propose a framework for real-time safe and thorough evaluation of vision-based autonomous landing in populated scenarios.
We propose to use the Unreal graphics engine coupled with the AirSim plugin for drone simulation.
We study two different criteria for selecting the "best" SLZ, and evaluate them during autonomous landing of a virtual drone in different scenarios.
arXiv Detail & Related papers (2022-03-25T17:22:24Z)
- Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully-autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located on poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z)
- Testing the Safety of Self-driving Vehicles by Simulating Perception and Prediction [88.0416857308144]
We propose an alternative to sensor simulation, as sensor simulation is expensive and has large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z)
- Developmental Reinforcement Learning of Control Policy of a Quadcopter UAV with Thrust Vectoring Rotors [1.0057838324294686]
We present a novel developmental reinforcement learning-based controller for a quadcopter with thrust vectoring capabilities.
The control policy of this robot is learned using the policy transfer from the learned controller of the quadcopter.
The performance of the learned policy is evaluated by physics-based simulations for the tasks of hovering and way-point navigation.
arXiv Detail & Related papers (2020-07-15T16:17:29Z)
- AirSim Drone Racing Lab [56.68291351736057]
AirSim Drone Racing Lab is a simulation framework for enabling machine learning research in this domain.
Our framework enables generation of racing tracks in multiple photo-realistic environments.
We used our framework to host a simulation based drone racing competition at NeurIPS 2019.
arXiv Detail & Related papers (2020-03-12T08:06:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.