Vision-Based Autonomous Navigation for Unmanned Surface Vessel in
Extreme Marine Conditions
- URL: http://arxiv.org/abs/2308.04283v1
- Date: Tue, 8 Aug 2023 14:25:13 GMT
- Title: Vision-Based Autonomous Navigation for Unmanned Surface Vessel in
Extreme Marine Conditions
- Authors: Muhayyuddin Ahmed, Ahsan Baidar Bakht, Taimur Hassan, Waseem Akram,
Ahmed Humais, Lakmal Seneviratne, Shaoming He, Defu Lin, and Irfan Hussain
- Abstract summary: This paper presents an autonomous vision-based navigation framework for tracking target objects in extreme marine conditions.
The proposed framework has been thoroughly tested in simulation under extremely reduced visibility due to sandstorms and fog.
The results are compared with state-of-the-art de-hazing methods across the benchmarked MBZIRC simulation dataset.
- Score: 2.8983738640808645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual perception is an important component of autonomous navigation for
unmanned surface vessels (USVs), particularly for tasks related to
autonomous inspection and tracking. These tasks rely on vision-based
techniques to identify the navigation target. Reduced visibility under
extreme weather conditions in marine environments makes it difficult for
vision-based approaches to work properly. To overcome these issues, this paper
presents an autonomous vision-based navigation framework for tracking target
objects in extreme marine conditions. The proposed framework consists of an
integrated perception pipeline that uses a generative adversarial network (GAN)
to remove noise and highlight the object features before passing them to the
object detector (i.e., YOLOv5). The detected visual features are then used by
the USV to track the target. The proposed framework has been thoroughly tested
in simulation under extremely reduced visibility due to sandstorms and fog. The
results are compared with state-of-the-art de-hazing methods across the
benchmarked MBZIRC simulation dataset, on which the proposed scheme has
outperformed the existing methods across various metrics.
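The abstract describes a two-stage perception pipeline (GAN-based image restoration followed by YOLOv5 detection) whose output drives the USV's tracking behavior. The following is a minimal sketch of that structure, not the paper's implementation: every class and function name here (`dehaze`, `detect`, `heading_command`, `track_step`) is an illustrative placeholder, and the restoration and detection stages are stubbed out.

```python
# Sketch of the described perception-to-tracking pipeline:
# raw frame -> GAN de-hazing -> object detection -> steering command.
# All names are illustrative placeholders, not the paper's code.

from dataclasses import dataclass


@dataclass
class Detection:
    """A single bounding box in normalized image coordinates."""
    cx: float          # box center x, 0.0 (left) .. 1.0 (right)
    cy: float          # box center y, 0.0 (top) .. 1.0 (bottom)
    confidence: float  # detector confidence in [0, 1]


def dehaze(frame):
    """Placeholder for the GAN restoration stage.

    In the paper this is a generative adversarial network that removes
    sandstorm/fog noise; here it simply passes the frame through.
    """
    return frame


def detect(frame):
    """Placeholder for the YOLOv5 detection stage.

    Returns detections of the target object; stubbed with a fixed box.
    """
    return [Detection(cx=0.7, cy=0.5, confidence=0.9)]


def heading_command(detections, gain=1.0, min_conf=0.5):
    """Turn-rate command steering the vessel toward the best detection.

    Positive means "turn right"; returns 0.0 when nothing is detected
    with sufficient confidence.
    """
    confident = [d for d in detections if d.confidence >= min_conf]
    if not confident:
        return 0.0
    best = max(confident, key=lambda d: d.confidence)
    # Horizontal offset of the target from image center, in [-0.5, 0.5].
    error = best.cx - 0.5
    return gain * error


def track_step(raw_frame):
    """One pipeline step: de-haze, detect, then compute a steering command."""
    restored = dehaze(raw_frame)
    detections = detect(restored)
    return heading_command(detections)
```

In a real system the stubs would wrap the trained GAN generator and a YOLOv5 model, and the proportional steering rule would be replaced by the vessel's actual guidance controller; the point of the sketch is only the staged data flow.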
Related papers
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Angle Robustness Unmanned Aerial Vehicle Navigation in GNSS-Denied Scenarios [66.05091704671503]
We present a novel angle navigation paradigm to deal with flight deviation in point-to-point navigation tasks.
We also propose a model that includes the Adaptive Feature Enhance Module, Cross-knowledge Attention-guided Module and Robust Task-oriented Head Module.
arXiv Detail & Related papers (2024-02-04T08:41:20Z)
- Deep Learning-Based Object Detection in Maritime Unmanned Aerial Vehicle Imagery: Review and Experimental Comparisons [10.75221614844458]
We first briefly summarize four challenges for object detection on maritime UAVs, i.e. object feature diversity, device limitation, maritime environment variability, and dataset scarcity.
Next, we review the UAV aerial image/video datasets and propose a maritime UAV aerial dataset named MS2ship for ship detection.
arXiv Detail & Related papers (2023-11-14T07:20:38Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust affect the performance of any mobile robotic platform due to their reliance on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- AVOIDDS: Aircraft Vision-based Intruder Detection Dataset and Simulator [37.579437595742995]
We introduce AVOIDDS, a realistic object detection benchmark for the vision-based aircraft detect-and-avoid problem.
We provide a labeled dataset consisting of 72,000 photorealistic images of intruder aircraft with various lighting conditions.
We also provide an interface that evaluates trained models on slices of this dataset to identify changes in performance with respect to changing environmental conditions.
arXiv Detail & Related papers (2023-06-19T23:58:07Z)
- Performance Study of YOLOv5 and Faster R-CNN for Autonomous Navigation around Non-Cooperative Targets [0.0]
This paper discusses how the combination of cameras and machine learning algorithms can achieve the relative navigation task.
The performance of two deep learning-based object detection algorithms, Faster Region-based Convolutional Neural Networks (R-CNN) and You Only Look Once (YOLOv5) is tested.
The paper discusses the path to implementing the feature recognition algorithms and towards integrating them into the spacecraft Guidance Navigation and Control system.
arXiv Detail & Related papers (2023-01-22T04:53:38Z)
- AVisT: A Benchmark for Visual Object Tracking in Adverse Visibility [125.77396380698639]
AVisT is a benchmark for visual tracking in diverse scenarios with adverse visibility.
AVisT comprises 120 challenging sequences with 80k annotated frames, spanning 18 diverse scenarios.
We benchmark 17 popular and recent trackers on AVisT with detailed analysis of their tracking performance across attributes.
arXiv Detail & Related papers (2022-08-14T17:49:37Z)
- Safe Vessel Navigation Visually Aided by Autonomous Unmanned Aerial Vehicles in Congested Harbors and Waterways [9.270928705464193]
This work is the first attempt to detect and estimate distances to unknown objects from long-range visual data captured with conventional RGB cameras and auxiliary absolute positioning systems (e.g., GPS).
The simulation results illustrate the accuracy and efficacy of the proposed method for visually aided navigation of vessels assisted by UAV.
arXiv Detail & Related papers (2021-08-09T08:15:17Z)
- Perceiving Traffic from Aerial Images [86.994032967469]
We propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images.
We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone 2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
arXiv Detail & Related papers (2020-09-16T11:37:43Z)
- Object Goal Navigation using Goal-Oriented Semantic Exploration [98.14078233526476]
This work studies the problem of object goal navigation which involves navigating to an instance of the given object category in unseen environments.
We propose a modular system called 'Goal-Oriented Semantic Exploration', which builds an episodic semantic map and uses it to explore the environment efficiently.
arXiv Detail & Related papers (2020-07-01T17:52:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.