Vision-based control for landing an aerial vehicle on a marine vessel
- URL: http://arxiv.org/abs/2404.11336v1
- Date: Wed, 17 Apr 2024 12:53:57 GMT
- Authors: Haohua Dong,
- Abstract summary: This work addresses the landing problem of an aerial vehicle, exemplified by a simple quadrotor, on a moving platform using image-based visual servo control.
The image features on the textured target plane are exploited to derive a vision-based control law.
The proposed control law guarantees convergence without estimating the unknown distance between the target and the moving platform.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work addresses the landing problem of an aerial vehicle, exemplified by a simple quadrotor, on a moving platform using image-based visual servo control. First, the mathematical model of the quadrotor aircraft is introduced, followed by the design of the inner-loop control. At the second stage, the image features on the textured target plane are exploited to derive a vision-based control law. The image of the spherical centroid of a set of landmarks present in the landing target is used as a position measurement, whereas the translational optical flow is used as velocity measurement. The kinematics of the vision-based system is expressed in terms of the observable features, and the proposed control law guarantees convergence without estimating the unknown distance between the vision system and the target, which is also guaranteed to remain strictly positive, avoiding undesired collisions. The performance of the proposed control law is evaluated in MATLAB and 3-D simulation software Gazebo. Simulation results for a quadrotor UAV are provided for different velocity profiles of the moving target, showcasing the robustness of the proposed controller.
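The abstract's core idea, a position-like feature from the spherical centroid of the landmark bearings and a velocity-like feature from translational optical flow, can be sketched as a simple feedback law. This is a minimal illustration of the general IBVS structure, not the paper's actual controller; the gains and function names are invented for the example.

```python
import numpy as np

def spherical_centroid(bearings):
    """Average of the unit bearing vectors to the visible landmarks.

    `bearings` is an (n, 3) array of direction vectors from the camera
    to each landmark, expressed in the camera frame. The centroid serves
    as a position measurement that needs no depth estimate.
    """
    units = bearings / np.linalg.norm(bearings, axis=1, keepdims=True)
    return units.mean(axis=0)

def ibvs_velocity_command(q, q_star, flow, k_p=1.0, k_v=0.5):
    """PD-style velocity command computed from image features only.

    q      : measured spherical centroid (position-like feature)
    q_star : desired centroid at touchdown (e.g. target centered below)
    flow   : translational optical flow (velocity-like feature)
    k_p, k_v are illustrative gains, not values from the paper.
    """
    return -k_p * (q - q_star) - k_v * flow
```

When the centroid matches its reference and the optical flow vanishes, the commanded velocity is zero, which is the qualitative behavior one expects at touchdown.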
Related papers
- Initialization of Monocular Visual Navigation for Autonomous Agents Using Modified Structure from Small Motion [13.69678622755871]
We propose a standalone monocular visual Simultaneous Localization and Mapping (vSLAM) pipeline for autonomous space robots.
Our method, a state-of-the-art factor graph optimization pipeline, extends Structure from Small Motion to robustly initialize a monocular agent in spacecraft inspection trajectories.
We validate our approach on realistic, simulated satellite inspection image sequences with a tumbling spacecraft and demonstrate the method's effectiveness.
arXiv Detail & Related papers (2024-09-24T21:33:14Z) - Angle Robustness Unmanned Aerial Vehicle Navigation in GNSS-Denied Scenarios [66.05091704671503]
We present a novel angle navigation paradigm to deal with flight deviation in point-to-point navigation tasks.
We also propose a model that includes the Adaptive Feature Enhance Module, Cross-knowledge Attention-guided Module and Robust Task-oriented Head Module.
arXiv Detail & Related papers (2024-02-04T08:41:20Z) - Vision-Based Autonomous Navigation for Unmanned Surface Vessel in Extreme Marine Conditions [2.8983738640808645]
This paper presents an autonomous vision-based navigation framework for tracking target objects in extreme marine conditions.
The proposed framework has been thoroughly tested in simulation under extremely reduced visibility due to sandstorms and fog.
The results are compared with state-of-the-art de-hazing methods across the benchmarked MBZIRC simulation dataset.
arXiv Detail & Related papers (2023-08-08T14:25:13Z) - Monocular BEV Perception of Road Scenes via Front-to-Top View Projection [57.19891435386843]
We present a novel framework that reconstructs a local map formed by road layout and vehicle occupancy in the bird's-eye view.
Our model runs at 25 FPS on a single GPU, which is efficient and applicable for real-time panorama HD map reconstruction.
arXiv Detail & Related papers (2022-11-15T13:52:41Z) - Lateral Ego-Vehicle Control without Supervision using Point Clouds [50.40632021583213]
Existing vision-based supervised approaches to lateral vehicle control can directly map RGB images to the appropriate steering commands.
This paper proposes a framework for training a more robust and scalable model for lateral vehicle control.
Online experiments show that the performance of our method is superior to that of the supervised model.
arXiv Detail & Related papers (2022-03-20T21:57:32Z) - Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z) - MorphEyes: Variable Baseline Stereo For Quadrotor Navigation [13.830987813403018]
We present a framework for quadrotor navigation based on a stereo camera system whose baseline can be adapted on-the-fly.
We show that our variable baseline system is more accurate and robust in all three scenarios.
arXiv Detail & Related papers (2020-11-05T20:04:35Z) - Perceiving Traffic from Aerial Images [86.994032967469]
We propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images.
We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone 2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
arXiv Detail & Related papers (2020-09-16T11:37:43Z) - Distributed Variable-Baseline Stereo SLAM from two UAVs [17.513645771137178]
In this article, we employ two UAVs equipped with one monocular camera and one IMU each, to exploit their view overlap and relative distance measurements.
In order to control the UAV agents autonomously, we propose a decentralized collaborative estimation scheme.
We demonstrate the effectiveness of the approach in high-altitude flights of up to 160 m, going significantly beyond the capabilities of state-of-the-art VIO methods.
arXiv Detail & Related papers (2020-09-10T12:16:10Z) - Transition control of a tail-sitter UAV using recurrent neural networks [80.91076033926224]
The control strategy is based on attitude and velocity stabilization.
The RNN is used for the estimation of highly nonlinear aerodynamic terms.
Results show convergence of linear velocities and the pitch angle during the transition maneuver.
arXiv Detail & Related papers (2020-06-29T21:33:30Z) - Control Design of Autonomous Drone Using Deep Learning Based Image Understanding Techniques [1.0953917735844645]
This paper presents a new framework that uses images as controller inputs for autonomous flight, accounting for the noisy indoor environment and uncertainties.
A new Proportional-Integral-Derivative-Accelerated (PIDA) control with a derivative filter is proposed to improve drone/quadcopter flight stability within a noisy environment.
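A Proportional-Integral-Derivative-Accelerated (PIDA) controller extends PID with a second-derivative (acceleration) term, and the derivative filter suppresses noise amplification in the derivative path. A minimal discrete-time sketch, with illustrative gains and filter constant not taken from the paper, might look like:

```python
class PIDA:
    """Discrete PIDA controller with a first-order low-pass filter on the
    derivative path. All gains and the filter time constant `tau` are
    placeholders for illustration."""

    def __init__(self, kp, ki, kd, ka, tau, dt):
        self.kp, self.ki, self.kd, self.ka = kp, ki, kd, ka
        self.tau, self.dt = tau, dt
        self.integral = 0.0
        self.prev_err = 0.0
        self.d_filt = 0.0       # filtered first derivative of the error
        self.prev_d_filt = 0.0  # previous filtered derivative (for the accel term)

    def update(self, error):
        self.integral += error * self.dt
        raw_d = (error - self.prev_err) / self.dt
        # First-order low-pass filter on the raw derivative to reject noise.
        alpha = self.dt / (self.tau + self.dt)
        self.prev_d_filt = self.d_filt
        self.d_filt += alpha * (raw_d - self.d_filt)
        # Acceleration term: finite difference of the filtered derivative.
        accel = (self.d_filt - self.prev_d_filt) / self.dt
        self.prev_err = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * self.d_filt + self.ka * accel)
```

The filtered derivative trades a little phase lag for much lower sensitivity to measurement noise, which is the usual motivation for adding such a filter in noisy indoor flight.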
arXiv Detail & Related papers (2020-04-27T15:50:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.