Underwater inspection and intervention dataset
- URL: http://arxiv.org/abs/2107.13628v1
- Date: Wed, 28 Jul 2021 20:29:14 GMT
- Title: Underwater inspection and intervention dataset
- Authors: Tomasz Luczynski, Jonatan Scharff Willners, Elizabeth Vargas, Joshua
Roe, Shida Xu, Yu Cao, Yvan Petillot and Sen Wang
- Abstract summary: This paper presents a novel dataset for the development of visual navigation and simultaneous localisation and mapping (SLAM) algorithms.
It differs from existing datasets as it contains ground truth for the vehicle's position captured by an underwater motion tracking system.
- Score: 10.761773141001626
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper presents a novel dataset for the development of visual navigation
and simultaneous localisation and mapping (SLAM) algorithms as well as for
underwater intervention tasks. It differs from existing datasets as it contains
ground truth for the vehicle's position captured by an underwater motion
tracking system. The dataset contains distortion-free and rectified stereo
images along with the calibration parameters of the stereo camera setup.
Furthermore, the experiments were performed and recorded in a controlled
environment, where current and waves could be generated allowing the dataset to
cover a wide range of conditions - from calm water to waves and currents of
significant strength.
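Since the dataset provides distortion-free, rectified stereo pairs together with the calibration parameters, metric depth can be recovered directly from disparity via the standard pinhole relation Z = f · B / d. The sketch below illustrates this; the focal length and baseline values are placeholders for illustration only (the dataset ships its own calibration file), and the function name is hypothetical.

```python
import numpy as np

# Hypothetical calibration values; replace with the parameters distributed
# with the dataset. These numbers are illustrative only.
FOCAL_LENGTH_PX = 800.0   # focal length of the rectified cameras, in pixels
BASELINE_M = 0.12         # distance between the two camera centres, in metres

def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Convert a disparity map (pixels) to metric depth (metres).

    For rectified, distortion-free stereo pairs, depth follows the
    standard pinhole relation Z = f * B / d.
    """
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0            # zero disparity carries no depth information
    depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]
    return depth

# A feature with 48 px of disparity sits at 800 * 0.12 / 48 = 2.0 m.
print(depth_from_disparity(np.array([48.0]))[0])  # → 2.0
```

Because the images are already rectified, no per-pair undistortion or epipolar alignment step is needed before matching.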
Related papers
- Dataset of polarimetric images of mechanically generated water surface waves coupled with surface elevation records by wave gauges linear array [2.3599126081503177]
Existing techniques are often cumbersome and generally suffer from limited wave/frequency response.
To address these challenges, a novel method was developed using a polarization filter mounted on the main camera sensor and machine learning algorithms for data processing.
The method was trained and evaluated on an in-house supervised dataset.
arXiv Detail & Related papers (2024-10-30T09:35:27Z)
- Attenuation-Aware Weighted Optical Flow with Medium Transmission Map for Learning-based Visual Odometry in Underwater terrain [0.03749861135832072]
This paper addresses the challenge of improving learning-based monocular visual odometry (VO) in underwater environments.
The novel wflow-TartanVO is introduced, enhancing the accuracy of VO systems for autonomous underwater vehicles (AUVs).
Evaluation on different real-world underwater datasets demonstrates that wflow-TartanVO outperforms baseline VO methods.
arXiv Detail & Related papers (2024-07-18T05:00:15Z)
- Improving Underwater Visual Tracking With a Large Scale Dataset and Image Enhancement [70.2429155741593]
This paper presents a new dataset and a general tracker-enhancement method for Underwater Visual Object Tracking (UVOT).
It poses distinct challenges; the underwater environment exhibits non-uniform lighting conditions, low visibility, lack of sharpness, low contrast, camouflage, and reflections from suspended particles.
We propose a novel underwater image enhancement algorithm designed specifically to boost tracking quality.
The method yields a significant performance improvement of up to 5.0% AUC for state-of-the-art (SOTA) visual trackers.
arXiv Detail & Related papers (2023-08-30T07:41:26Z)
- On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z)
- WaterScenes: A Multi-Task 4D Radar-Camera Fusion Dataset and Benchmarks for Autonomous Driving on Water Surfaces [12.755813310009179]
WaterScenes is the first multi-task 4D radar-camera fusion dataset for autonomous driving on water surfaces.
Our Unmanned Surface Vehicle (USV) provides all-weather capability for perceiving object-related information.
arXiv Detail & Related papers (2023-07-13T01:05:12Z)
- Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration [55.41644538483948]
This paper introduces a multimodal dataset from the harsh and unstructured underground environment with aerosol particles.
It contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format.
The focus of this paper is not only to capture both temporal and spatial data diversities but also to present the impact of harsh conditions on captured data.
arXiv Detail & Related papers (2023-04-27T20:21:18Z)
- FLSea: Underwater Visual-Inertial and Stereo-Vision Forward-Looking Datasets [8.830479021890575]
We have collected underwater forward-looking stereo-vision and visual-inertial image sets in the Mediterranean and Red Sea.
These datasets are critical for the development of several underwater applications, including obstacle avoidance, visual odometry, 3D tracking, Simultaneous Localization and Mapping (SLAM), and depth estimation.
arXiv Detail & Related papers (2023-02-24T17:39:53Z)
- Learning-based estimation of in-situ wind speed from underwater acoustics [58.293528982012255]
We introduce a deep learning approach for the retrieval of wind speed time series from underwater acoustics.
Our approach bridges data assimilation and learning-based frameworks to benefit both from prior physical knowledge and computational efficiency.
arXiv Detail & Related papers (2022-08-18T15:27:40Z)
- A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth [61.90504318229845]
This paper introduces the first paired real image benchmark dataset with hazy and haze-free images, and in-situ haze density measurements.
This dataset was produced in a controlled environment with professional smoke generating machines that covered the entire scene.
A subset of this dataset has been used for the Object Detection in Haze track of the CVPR UG2 2022 challenge.
arXiv Detail & Related papers (2022-06-13T19:14:06Z)
- Dimensions of Motion: Learning to Predict a Subspace of Optical Flow from a Single Image [50.9686256513627]
We introduce the problem of predicting, from a single video frame, a low-dimensional subspace of optical flow which includes the actual instantaneous optical flow.
We show how several natural scene assumptions allow us to identify an appropriate flow subspace via a set of basis flow fields parameterized by disparity.
This provides a new approach to learning these tasks in an unsupervised fashion using monocular input video without requiring camera intrinsics or poses.
arXiv Detail & Related papers (2021-12-02T18:52:54Z)
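The last entry's idea of a flow subspace "parameterized by disparity" can be illustrated with a minimal sketch: under pure camera translation, the image flow at each pixel is proportional to disparity, so three disparity-scaled basis fields (one per translation axis) span the translational flow. All shapes, the focal length, and the function name below are illustrative assumptions, not taken from the paper, and the rotational (depth-independent) flow components are omitted for brevity.

```python
import numpy as np

# Illustrative image size and focal length (pixels); not from the paper.
H, W, F = 4, 4, 1.0

def translational_basis(disparity: np.ndarray) -> np.ndarray:
    """Return three disparity-scaled basis flow fields, shape (3, H, W, 2).

    Translational image flow is inversely proportional to depth, i.e.
    proportional to disparity, so per-pixel disparity alone parameterises
    the translational flow subspace.
    """
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    xs -= (W - 1) / 2.0   # image coordinates relative to the principal point
    ys -= (H - 1) / 2.0
    basis = np.stack([
        np.stack([-F * np.ones((H, W)), np.zeros((H, W))], axis=-1),  # t_x
        np.stack([np.zeros((H, W)), -F * np.ones((H, W))], axis=-1),  # t_y
        np.stack([xs, ys], axis=-1),                                   # t_z
    ])
    return basis * disparity[None, :, :, None]

# Any translational flow is a linear combination of the three basis fields;
# the coefficients play the role of the (unknown) camera velocity.
coeffs = np.array([0.1, 0.0, 0.05])          # hypothetical camera velocity
disparity = np.ones((H, W))                  # hypothetical disparity map
flow = np.tensordot(coeffs, translational_basis(disparity), axes=1)
print(flow.shape)  # (4, 4, 2)
```

Predicting the low-dimensional basis instead of the full flow field is what lets the paper's approach train from monocular video without camera intrinsics or poses.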
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.