Image-Based Relocalization and Alignment for Long-Term Monitoring of Dynamic Underwater Environments
- URL: http://arxiv.org/abs/2503.04096v1
- Date: Thu, 06 Mar 2025 05:13:19 GMT
- Title: Image-Based Relocalization and Alignment for Long-Term Monitoring of Dynamic Underwater Environments
- Authors: Beverley Gorry, Tobias Fischer, Michael Milford, Alejandro Fontan
- Abstract summary: We propose an integrated pipeline that combines Visual Place Recognition (VPR), feature matching, and image segmentation on video-derived images. This method enables robust identification of revisited areas, estimation of rigid transformations, and downstream analysis of ecosystem changes.
- Score: 57.59857784298534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective monitoring of underwater ecosystems is crucial for tracking environmental changes, guiding conservation efforts, and ensuring long-term ecosystem health. However, automating underwater ecosystem management with robotic platforms remains challenging due to the complexities of underwater imagery, which pose significant difficulties for traditional visual localization methods. We propose an integrated pipeline that combines Visual Place Recognition (VPR), feature matching, and image segmentation on video-derived images. This method enables robust identification of revisited areas, estimation of rigid transformations, and downstream analysis of ecosystem changes. Furthermore, we introduce the SQUIDLE+ VPR Benchmark, the first large-scale underwater VPR benchmark designed to leverage an extensive collection of unstructured data from multiple robotic platforms, spanning time intervals from days to years. The dataset encompasses diverse trajectories, arbitrary overlap, and varied seafloor types captured under varying environmental conditions, including differences in depth, lighting, and turbidity. Our code is available at: https://github.com/bev-gorry/underloc
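The rigid-transformation step of the pipeline above can be illustrated with a short sketch. This is a hedged illustration, not the authors' implementation: it assumes point correspondences have already been produced by the feature-matching stage, and recovers the 2D rotation and translation with the standard Kabsch/Procrustes least-squares method.

```python
# Minimal sketch (not the authors' code): recover the rigid transform
# (rotation R, translation t) that aligns keypoints from one seafloor
# revisit onto another, given matched point correspondences.
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)           # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Noise-free check: points rotated by 30 degrees and shifted by (3, -2)
# should be recovered exactly.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 1.5]])
R, t = estimate_rigid_transform(pts, pts @ R_true.T + np.array([3.0, -2.0]))
```

In practice the correspondences coming out of a feature matcher contain outliers, so a robust wrapper (e.g. RANSAC over this solver) would be used rather than a single least-squares fit.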
Related papers
- Learning Underwater Active Perception in Simulation [51.205673783866146]
Turbidity can jeopardise the whole mission as it may prevent correct visual documentation of the inspected structures.
Previous works have introduced methods to adapt to turbidity and backscattering.
We propose a simple yet efficient approach to enable high-quality image acquisition of assets in a broad range of water conditions.
arXiv Detail & Related papers (2025-04-23T06:48:38Z)
- Real-time Seafloor Segmentation and Mapping [0.0]
Posidonia oceanica is a seagrass species whose meadows are highly dependent on rocks for their survival and conservation.
Deep learning-based semantic segmentation and visual automated monitoring systems have shown promise in a variety of applications.
This paper introduces a framework that combines machine learning and computer vision techniques to enable an autonomous underwater vehicle (AUV) to inspect the boundaries of Posidonia oceanica meadows autonomously.
arXiv Detail & Related papers (2025-04-14T22:49:08Z)
- Inland Waterway Object Detection in Multi-environment: Dataset and Approach [12.00732943849236]
This paper introduces the Multi-environment Inland Waterway Vessel dataset (MEIWVD)
MEIWVD comprises 32,478 high-quality images from diverse scenarios, including sunny, rainy, foggy, and artificial lighting conditions.
This paper proposes a scene-guided image enhancement module to improve water surface images based on environmental conditions adaptively.
arXiv Detail & Related papers (2025-04-07T08:45:00Z)
- AquaticCLIP: A Vision-Language Foundation Model for Underwater Scene Analysis [40.27548815196493]
We introduce AquaticCLIP, a novel contrastive language-image pre-training model tailored for aquatic scene understanding.
AquaticCLIP presents a new unsupervised learning framework that aligns images and texts in aquatic environments.
Our model sets a new benchmark for vision-language applications in underwater environments.
arXiv Detail & Related papers (2025-02-03T19:56:16Z)
- UW-SDF: Exploiting Hybrid Geometric Priors for Neural SDF Reconstruction from Underwater Multi-view Monocular Images [63.32490897641344]
We propose a framework for reconstructing target objects from multi-view underwater images based on neural SDF.
We introduce hybrid geometric priors to optimize the reconstruction process, markedly enhancing the quality and efficiency of neural SDF reconstruction.
arXiv Detail & Related papers (2024-10-10T16:33:56Z)
- On Vision Transformers for Classification Tasks in Side-Scan Sonar Imagery [0.0]
Side-scan sonar (SSS) imagery presents unique challenges in the classification of man-made objects on the seafloor.
This paper rigorously compares the performance of ViT models alongside commonly used CNN architectures for binary classification tasks in SSS imagery.
ViT-based models exhibit superior classification performance across F1-score, precision, recall, and accuracy metrics.
arXiv Detail & Related papers (2024-09-18T14:36:50Z)
- ODYSSEE: Oyster Detection Yielded by Sensor Systems on Edge Electronics [14.935296890629795]
Oysters are a vital keystone species in coastal ecosystems, providing significant economic, environmental, and cultural benefits.
Current monitoring strategies often rely on destructive methods.
We propose a novel pipeline using stable diffusion to augment a collected real dataset with realistic synthetic data.
arXiv Detail & Related papers (2024-09-11T04:31:09Z)
- Diving into Underwater: Segment Anything Model Guided Underwater Salient Instance Segmentation and A Large-scale Dataset [60.14089302022989]
Underwater vision tasks often suffer from low segmentation accuracy due to the complex underwater circumstances.
We construct the first large-scale underwater salient instance segmentation dataset (USIS10K).
We propose an Underwater Salient Instance architecture based on Segment Anything Model (USIS-SAM) specifically for the underwater domain.
arXiv Detail & Related papers (2024-06-10T06:17:33Z)
- Automatic Coral Detection with YOLO: A Deep Learning Approach for Efficient and Accurate Coral Reef Monitoring [0.0]
Coral reefs are vital ecosystems that are under increasing threat due to local human impacts and climate change.
In this paper, we present an automatic coral detection system utilizing the You Only Look Once deep learning model.
arXiv Detail & Related papers (2024-04-03T08:00:46Z)
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that heavily degraded regions of detector-friendly underwater images (DFUI) and raw underwater images exhibit evident feature-distribution gaps.
Our method still performs better than transformer-based detectors while running faster and using fewer parameters.
arXiv Detail & Related papers (2023-08-24T12:32:46Z)
- DeepAqua: Self-Supervised Semantic Segmentation of Wetland Surface Water Extent with SAR Images using Knowledge Distillation [44.99833362998488]
We present DeepAqua, a self-supervised deep learning model that eliminates the need for manual annotations during the training phase.
We exploit cases where optical- and radar-based water masks coincide, enabling the detection of both open and vegetated water surfaces.
Experimental results show that DeepAqua outperforms other unsupervised methods by improving accuracy by 7%, Intersection Over Union by 27%, and F1 score by 14%.
arXiv Detail & Related papers (2023-05-02T18:06:21Z)
- FLSea: Underwater Visual-Inertial and Stereo-Vision Forward-Looking Datasets [8.830479021890575]
We have collected underwater forward-looking stereo-vision and visual-inertial image sets in the Mediterranean and Red Sea.
These datasets are critical for the development of several underwater applications, including obstacle avoidance, visual odometry, 3D tracking, Simultaneous Localization and Mapping (SLAM), and depth estimation.
arXiv Detail & Related papers (2023-02-24T17:39:53Z)
- OmniSLAM: Omnidirectional Localization and Dense Mapping for Wide-baseline Multi-camera Systems [88.41004332322788]
We present an omnidirectional localization and dense mapping system for a wide-baseline multiview stereo setup with ultra-wide field-of-view (FOV) fisheye cameras.
For more practical and accurate reconstruction, we first introduce improved and lightweight deep neural networks for omnidirectional depth estimation.
We integrate our omnidirectional depth estimates into the visual odometry (VO) and add a loop closing module for global consistency.
arXiv Detail & Related papers (2020-03-18T05:52:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.