Virtual Underwater Datasets for Autonomous Inspections
- URL: http://arxiv.org/abs/2209.06013v2
- Date: Wed, 14 Sep 2022 11:50:54 GMT
- Title: Virtual Underwater Datasets for Autonomous Inspections
- Authors: Ioannis Polymenis, Maryam Haroutunian, Rose Norman, David Trodden
- Abstract summary: This study builds a bespoke dataset from photographs of items captured in a laboratory environment.
Generative Adversarial Networks (GANs) were utilised to translate the laboratory object dataset into the underwater domain.
The resulting images closely resembled the real underwater environment when compared with real-world underwater ship hull images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Underwater Vehicles have become more sophisticated, driven by the offshore
sector and the scientific community's rapid advancements in underwater
operations. Notably, many underwater tasks, including the assessment of subsea
infrastructure, are performed with the assistance of Autonomous Underwater
Vehicles (AUVs). There have been recent breakthroughs in Artificial
Intelligence (AI) and, notably, in Deep Learning (DL) models, which have seen
widespread usage in a variety of fields, including aerial unmanned
vehicles and autonomous car navigation. However, they are
not as prevalent in underwater applications due to the difficulty of obtaining
underwater datasets for a specific application. In this sense, the current
study utilises recent advancements in the area of DL to construct a bespoke
dataset generated from photographs of items captured in a laboratory
environment. Generative Adversarial Networks (GANs) were utilised to translate
the laboratory object dataset into the underwater domain by combining the
collected images with photographs containing the underwater environment. The
findings demonstrated the feasibility of creating such a dataset, since the
resulting images closely resembled the real underwater environment when
compared with real-world underwater ship hull images. Therefore, artificial
datasets of the underwater environment can overcome the difficulties arising
from limited access to real-world underwater images and can be used to enhance
underwater operations through underwater object image classification and
detection.
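The paper does not reproduce its training objective here, but the GAN-based domain translation it describes rests on the standard adversarial objective: a discriminator learns to separate real underwater images from translated laboratory images, while the generator learns to fool it. A minimal sketch of that objective, with illustrative function names of our own choosing (not the authors' implementation):

```python
import math

def bce(pred, target):
    # Binary cross-entropy for a single discriminator probability in (0, 1).
    eps = 1e-12
    return -(target * math.log(pred + eps) + (1 - target) * math.log(1 - pred + eps))

def discriminator_loss(d_real, d_fake):
    # Discriminator: score real underwater images as 1, translated lab images as 0.
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake):
    # Generator: push the discriminator to score translated images as real.
    return bce(d_fake, 1.0)
```

In practice these losses are applied per batch to the outputs of convolutional networks; unpaired translation frameworks additionally add cycle-consistency or similar terms so that object content survives the domain shift.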
Related papers
- Diving into Underwater: Segment Anything Model Guided Underwater Salient Instance Segmentation and A Large-scale Dataset [60.14089302022989]
Underwater vision tasks often suffer from low segmentation accuracy due to complex underwater conditions.
We construct the first large-scale underwater salient instance segmentation dataset (USIS10K).
We propose an Underwater Salient Instance architecture based on Segment Anything Model (USIS-SAM) specifically for the underwater domain.
arXiv Detail & Related papers (2024-06-10T06:17:33Z)
- Physics-Inspired Synthesized Underwater Image Dataset [9.959844922120528]
PHISWID is a dataset tailored for enhancing underwater image processing through physics-inspired image synthesis.
Our results reveal that even a basic U-Net architecture, when trained with PHISWID, substantially outperforms existing methods in underwater image enhancement.
We intend to release PHISWID publicly, contributing a significant resource to the advancement of underwater imaging technology.
arXiv Detail & Related papers (2024-04-05T10:23:10Z)
- Atlantis: Enabling Underwater Depth Estimation with Stable Diffusion [30.122666238416716]
We propose a novel pipeline for generating underwater images using accurate terrestrial depth data.
This approach facilitates the training of supervised models for underwater depth estimation.
We introduce a unique Depth2Underwater ControlNet, trained on specially prepared Underwater, Depth, Text data triplets.
arXiv Detail & Related papers (2023-12-19T08:56:33Z)
- An Efficient Detection and Control System for Underwater Docking using Machine Learning and Realistic Simulation: A Comprehensive Approach [5.039813366558306]
This work compares different deep-learning architectures to perform underwater docking detection and classification.
A Generative Adversarial Network (GAN) is used to do image-to-image translation, converting the Gazebo simulation image into an underwater-looking image.
Results show a 20% improvement in high-turbidity scenarios, regardless of the underwater currents.
arXiv Detail & Related papers (2023-11-02T18:10:20Z)
- Improving Underwater Visual Tracking With a Large Scale Dataset and Image Enhancement [70.2429155741593]
This paper presents a new dataset and a general tracker enhancement method for Underwater Visual Object Tracking (UVOT).
It poses distinct challenges; the underwater environment exhibits non-uniform lighting conditions, low visibility, lack of sharpness, low contrast, camouflage, and reflections from suspended particles.
We propose a novel underwater image enhancement algorithm designed specifically to boost tracking quality.
The method yields a significant performance improvement of up to 5.0% AUC for state-of-the-art (SOTA) visual trackers.
arXiv Detail & Related papers (2023-08-30T07:41:26Z)
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and raw underwater images have evident feature distribution gaps.
Our method, despite higher speed and fewer parameters, still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z)
- Towards Generating Large Synthetic Phytoplankton Datasets for Efficient Monitoring of Harmful Algal Blooms [77.25251419910205]
Harmful algal blooms (HABs) cause significant fish deaths in aquaculture farms.
Currently, the standard method to enumerate harmful algae and other phytoplankton is to manually observe and count them under a microscope.
We employ Generative Adversarial Networks (GANs) to generate synthetic images.
arXiv Detail & Related papers (2022-08-03T20:15:55Z)
- A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth [61.90504318229845]
This paper introduces the first paired real image benchmark dataset with hazy and haze-free images, and in-situ haze density measurements.
This dataset was produced in a controlled environment with professional smoke-generating machines that covered the entire scene.
A subset of this dataset has been used for the Object Detection in Haze Track of CVPR UG2 2022 challenge.
arXiv Detail & Related papers (2022-06-13T19:14:06Z)
- Underwater Light Field Retention: Neural Rendering for Underwater Imaging [6.22867695581195]
Underwater Image Rendering aims to generate a true-to-life underwater image from a given clean one.
We propose a neural rendering method for underwater imaging, dubbed UWNR (Underwater Neural Rendering).
arXiv Detail & Related papers (2022-03-21T14:22:05Z)
- Underwater Image Restoration via Contrastive Learning and a Real-world Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z)
- Deep Sea Robotic Imaging Simulator [6.2122699483618]
The largest portion of the ocean - the deep sea - still remains mostly unexplored.
Deep sea images are very different from those taken in shallow waters, and this area has not received much attention from the community.
This paper presents a physical model-based image simulation solution, which uses an in-air texture and depth information as inputs.
arXiv Detail & Related papers (2020-06-27T16:18:32Z)
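Physical model-based simulators of the kind the last entry describes typically build on the standard simplified underwater image formation model, in which the direct signal decays exponentially with distance while backscatter fills in the remainder. A hedged single-channel sketch (the simulator's actual model is more detailed; parameter names here are illustrative):

```python
import math

def underwater_pixel(j, depth, beta, backscatter):
    """Simplified underwater image formation for one colour channel.

    j           : in-air (clean) pixel intensity in [0, 1]
    depth       : camera-to-scene distance in metres
    beta        : attenuation coefficient of the water for this channel
    backscatter : veiling-light intensity of the water body in [0, 1]
    """
    transmission = math.exp(-beta * depth)
    # Direct signal decays with distance; backscatter dominates at range.
    return j * transmission + backscatter * (1.0 - transmission)
```

At zero distance the model returns the in-air intensity unchanged, and at large distances every pixel converges to the backscatter value, which is why distant underwater scenes wash out to a uniform veiling colour.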
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.