FLSea: Underwater Visual-Inertial and Stereo-Vision Forward-Looking
Datasets
- URL: http://arxiv.org/abs/2302.12772v1
- Date: Fri, 24 Feb 2023 17:39:53 GMT
- Authors: Yelena Randall and Tali Treibitz
- Abstract summary: We have collected underwater forward-looking stereo-vision and visual-inertial image sets in the Mediterranean and Red Sea.
These datasets are critical for the development of several underwater applications, including obstacle avoidance, visual odometry, 3D tracking, Simultaneous Localization and Mapping (SLAM), and depth estimation.
- Score: 8.830479021890575
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Visibility underwater is challenging, and degrades as the distance between
the subject and camera increases, making vision tasks in the forward-looking
direction more difficult. We have collected underwater forward-looking
stereo-vision and visual-inertial image sets in the Mediterranean and Red Sea.
To our knowledge, no other public underwater datasets acquired with this
camera-sensor orientation have been published with ground truth. These
datasets are critical for the development of several
underwater applications, including obstacle avoidance, visual odometry, 3D
tracking, Simultaneous Localization and Mapping (SLAM) and depth estimation.
The stereo datasets include synchronized stereo images in dynamic underwater
environments with objects of known size. The visual-inertial datasets contain
monocular images and IMU measurements aligned with millisecond-resolution
timestamps, along with objects of known size placed in the scene. Both
sensor configurations allow for scale estimation, with the calibrated baseline
in the stereo setup and the IMU in the visual-inertial setup. Ground truth
depth maps were created offline for both dataset types using photogrammetry.
The ground truth is validated with multiple known measurements placed
throughout the imaged environment. There are 5 stereo and 8 visual-inertial
datasets in total, each containing thousands of images, with a range of
different underwater visibility and ambient light conditions, natural and
man-made structures and dynamic camera motions. The forward-looking orientation
of the camera makes these datasets unique and ideal for testing underwater
obstacle-avoidance algorithms and for navigation close to the seafloor in
dynamic environments. With our datasets, we hope to encourage the advancement
of autonomous functionality for underwater vehicles in dynamic and/or shallow
water environments.
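The scale estimation the abstract describes for the stereo configuration follows from the standard pinhole relation between depth, focal length, calibrated baseline, and disparity. A minimal sketch (not code from the dataset release; the function name and values are illustrative):

```python
import numpy as np

def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from a calibrated stereo pair: Z = f * B / d.

    disparity_px : disparity map in pixels (array-like)
    focal_px     : focal length in pixels
    baseline_m   : calibrated stereo baseline in meters
    """
    disparity_px = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0  # zero or negative disparity carries no depth
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Example: f = 800 px, B = 0.12 m, disparity of 48 px -> 2.0 m
print(stereo_depth([[48.0]], 800.0, 0.12))  # [[2.]]
```

Because the baseline is metrically calibrated, the recovered depth is in absolute units, which is what enables metric-scale evaluation against the known-size objects placed in the scenes.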
Related papers
- Amirkabir campus dataset: Real-world challenges and scenarios of Visual
Inertial Odometry (VIO) for visually impaired people [3.7998592843098336]
We introduce the Amirkabir campus dataset (AUT-VI) to address these challenges and improve navigation systems.
AUT-VI is a novel and highly challenging dataset with 126 diverse sequences in 17 different locations.
In support of ongoing development efforts, we have released the Android application for data capture to the public.
arXiv Detail & Related papers (2024-01-07T23:13:51Z)
- Improving Underwater Visual Tracking With a Large Scale Dataset and Image Enhancement [70.2429155741593]
This paper presents a new dataset and a general tracker enhancement method for Underwater Visual Object Tracking (UVOT).
UVOT poses distinct challenges: the underwater environment exhibits non-uniform lighting, low visibility, lack of sharpness, low contrast, camouflage, and reflections from suspended particles.
We propose a novel underwater image enhancement algorithm designed specifically to boost tracking quality.
The method yields a significant performance improvement of up to 5.0% AUC for state-of-the-art (SOTA) visual trackers.
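The paper's enhancement algorithm itself is not reproduced here, but the general idea of pre-processing underwater frames before tracking can be sketched with a generic color-cast correction: a gray-world white balance followed by a percentile contrast stretch. All names and thresholds below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def enhance_underwater(img):
    """Generic underwater color correction (NOT the paper's algorithm):
    gray-world white balance followed by a 1st/99th percentile stretch.

    img : H x W x 3 float array with values in [0, 1]
    """
    img = np.asarray(img, dtype=np.float64)
    # Gray-world: scale each channel so its mean matches the global mean,
    # compensating the strong blue/green cast of underwater images.
    means = img.reshape(-1, 3).mean(axis=0)
    balanced = img * (means.mean() / np.maximum(means, 1e-6))
    # Percentile stretch to restore contrast lost to backscatter.
    lo, hi = np.percentile(balanced, [1, 99])
    stretched = (balanced - lo) / max(hi - lo, 1e-6)
    return np.clip(stretched, 0.0, 1.0)
```

A tracker would consume `enhance_underwater(frame)` in place of the raw frame; the reported AUC gains come from the paper's purpose-built algorithm, not this baseline.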
arXiv Detail & Related papers (2023-08-30T07:41:26Z)
- WaterScenes: A Multi-Task 4D Radar-Camera Fusion Dataset and Benchmarks for Autonomous Driving on Water Surfaces [12.755813310009179]
WaterScenes is the first multi-task 4D radar-camera fusion dataset for autonomous driving on water surfaces.
Our Unmanned Surface Vehicle (USV) provides all-weather solutions for perceiving object-related information.
arXiv Detail & Related papers (2023-07-13T01:05:12Z)
- Design, Implementation and Evaluation of an External Pose-Tracking System for Underwater Cameras [0.0]
This paper presents the conception, calibration and implementation of an external reference system for determining the underwater camera pose in real-time.
The approach, based on an HTC Vive tracking system in air, calculates the underwater camera pose by fusing the poses of two controllers tracked above the water surface of a tank.
arXiv Detail & Related papers (2023-05-07T09:15:47Z)
- Beyond Visual Field of View: Perceiving 3D Environment with Echoes and Vision [51.385731364529306]
This paper focuses on perceiving and navigating 3D environments using echoes and RGB images.
In particular, we perform depth estimation by fusing RGB images with echoes received from multiple orientations.
We show that the echoes provide holistic and inexpensive information about 3D structure, complementing the RGB images.
arXiv Detail & Related papers (2022-07-03T22:31:47Z)
- SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation [101.55622133406446]
We propose SurroundDepth, a method that incorporates information from multiple surrounding views to predict depth maps across cameras.
Specifically, we employ a joint network to process all the surrounding views and propose a cross-view transformer to effectively fuse the information from multiple views.
In experiments, our method achieves state-of-the-art performance on challenging multi-camera depth estimation datasets.
arXiv Detail & Related papers (2022-04-07T17:58:47Z)
- Self-Supervised Depth Completion for Active Stereo [55.79929735390945]
Active stereo systems are widely used in the robotics industry due to their low cost and high quality depth maps.
These depth sensors suffer from stereo artefacts and do not provide dense depth estimates.
We present the first self-supervised depth completion method for active stereo systems that predicts accurate dense depth maps.
arXiv Detail & Related papers (2021-10-07T07:33:52Z)
- DnD: Dense Depth Estimation in Crowded Dynamic Indoor Scenes [68.38952377590499]
We present a novel approach for estimating depth from a monocular camera as it moves through complex indoor environments.
Our approach predicts absolute scale depth maps over the entire scene consisting of a static background and multiple moving people.
arXiv Detail & Related papers (2021-08-12T09:12:39Z)
- Underwater inspection and intervention dataset [10.761773141001626]
This paper presents a novel dataset for the development of visual navigation and simultaneous localisation and mapping (SLAM) algorithms.
It differs from existing datasets as it contains ground truth for the vehicle's position captured by an underwater motion tracking system.
arXiv Detail & Related papers (2021-07-28T20:29:14Z)
- Deep Sea Robotic Imaging Simulator [6.2122699483618]
The largest portion of the ocean - the deep sea - still remains mostly unexplored.
Deep sea images differ greatly from those taken in shallow waters, and this area has received little attention from the community.
This paper presents a physical model-based image simulation solution, which uses an in-air texture and depth information as inputs.
arXiv Detail & Related papers (2020-06-27T16:18:32Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.