BenthicNet: A global compilation of seafloor images for deep learning applications
- URL: http://arxiv.org/abs/2405.05241v2
- Date: Thu, 11 Jul 2024 16:24:52 GMT
- Title: BenthicNet: A global compilation of seafloor images for deep learning applications
- Authors: Scott C. Lowe, Benjamin Misiuk, Isaac Xu, Shakhboz Abdulazizov, Amit R. Baroi, Alex C. Bastos, Merlin Best, Vicki Ferrini, Ariell Friedman, Deborah Hart, Ove Hoegh-Guldberg, Daniel Ierodiaconou, Julia Mackin-McLaughlin, Kathryn Markey, Pedro S. Menandro, Jacquomo Monk, Shreya Nemani, John O'Brien, Elizabeth Oh, Luba Y. Reshitnyk, Katleen Robert, Chris M. Roelfsema, Jessica A. Sameoto, Alexandre C. G. Schimel, Jordan A. Thomson, Brittany R. Wilson, Melisa C. Wong, Craig J. Brown, Thomas Trappenberg
- Abstract summary: BenthicNet is a global compilation of seafloor imagery.
An initial set of over 11.4 million images was collected and curated to represent a diversity of seafloor environments.
A large deep learning model was trained on this compilation and preliminary results suggest it has utility for automating large and small-scale image analysis tasks.
- Score: 25.466405216505166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advances in underwater imaging enable the collection of extensive seafloor image datasets that are necessary for monitoring important benthic ecosystems. The ability to collect seafloor imagery has outpaced our capacity to analyze it, hindering expedient mobilization of this crucial environmental information. Recent machine learning approaches provide opportunities to increase the efficiency with which seafloor image datasets are analyzed, yet large and consistent datasets necessary to support development of such approaches are scarce. Here we present BenthicNet: a global compilation of seafloor imagery designed to support the training and evaluation of large-scale image recognition models. An initial set of over 11.4 million images was collected and curated to represent a diversity of seafloor environments using a representative subset of 1.3 million images. These are accompanied by 2.6 million annotations translated to the CATAMI scheme, which span 190,000 of the images. A large deep learning model was trained on this compilation and preliminary results suggest it has utility for automating large and small-scale image analysis tasks. The compilation and model are made openly available for use by the scientific community at https://doi.org/10.20383/103.0614.
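As a rough illustration of how a compilation like this can feed a training pipeline, the sketch below fine-tunes a generic ImageNet-pretrained encoder on a folder of benthic images grouped by CATAMI-style labels. The directory layout ("benthic_images/<label>/"), backbone choice, and hyperparameters are illustrative assumptions, not the authors' released model or training recipe.

```python
# Minimal fine-tuning sketch (PyTorch). Assumes images are arranged as
#   benthic_images/<catami_label>/<image>.jpg
# which is a hypothetical layout, not the released BenthicNet format.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

def main():
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Standard augmentation/normalization for an ImageNet-pretrained backbone.
    train_tf = transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    dataset = datasets.ImageFolder("benthic_images", transform=train_tf)
    loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)

    # Stand-in backbone: the BenthicNet release distributes its own pretrained
    # weights, which would replace this ImageNet initialization.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))
    model.to(device)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):  # small epoch count for illustration only
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: last batch loss {loss.item():.4f}")

if __name__ == "__main__":
    main()
```

In practice, the encoder weights distributed with the compilation would replace the ImageNet initialization, and the hierarchical, multi-label CATAMI annotations would call for a multi-label or hierarchy-aware head rather than a flat single-label classifier.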
Related papers
- Real-time Seafloor Segmentation and Mapping [0.0]
Posidonia oceanica is a species of seagrass whose meadows are highly dependent on rocks for their survival and conservation.
Deep learning-based semantic segmentation and visual automated monitoring systems have shown promise in a variety of applications.
This paper introduces a framework that combines machine learning and computer vision techniques to enable an autonomous underwater vehicle (AUV) to inspect the boundaries of Posidonia oceanica meadows autonomously.
arXiv Detail & Related papers (2025-04-14T22:49:08Z) - Image-Based Relocalization and Alignment for Long-Term Monitoring of Dynamic Underwater Environments [57.59857784298534]
We propose an integrated pipeline that combines Visual Place Recognition (VPR), feature matching, and image segmentation on video-derived images.
This method enables robust identification of revisited areas, estimation of rigid transformations, and downstream analysis of ecosystem changes.
arXiv Detail & Related papers (2025-03-06T05:13:19Z) - From underwater to aerial: a novel multi-scale knowledge distillation approach for coral reef monitoring [1.0644791181419937]
This study presents a novel multi-scale approach to coral reef monitoring, integrating fine-scale underwater imagery with medium-scale aerial imagery.
A transformer-based deep-learning model is trained on underwater images to detect the presence of 31 classes covering various coral morphotypes, associated fauna, and habitats.
The results show that the multi-scale methodology successfully extends fine-scale classification to larger reef areas, achieving a high degree of accuracy in predicting coral morphotypes and associated habitats.
arXiv Detail & Related papers (2025-02-25T06:12:33Z) - Back Home: A Machine Learning Approach to Seashell Classification and Ecosystem Restoration [49.1574468325115]
In Costa Rica, an average of 5 tons of seashells are extracted from ecosystems annually. Confiscated seashells cannot be returned to their ecosystems because their origin cannot be identified.
We developed a convolutional neural network (CNN) specifically for seashell identification.
We built a dataset from scratch, consisting of approximately 19,000 images from the Pacific and Caribbean coasts.
The model has been integrated into a user-friendly application, which has classified over 36,000 seashells to date, delivering real-time results within 3 seconds per image.
arXiv Detail & Related papers (2025-01-08T23:07:10Z) - SeafloorAI: A Large-scale Vision-Language Dataset for Seafloor Geological Survey [11.642711706384212]
We introduce SeafloorAI, the first extensive AI-ready dataset for seafloor mapping across 5 geological layers.
The dataset consists of 62 geo-distributed data surveys spanning 17,300 square kilometers, with 696K sonar images, 827K annotated segmentation masks, and 696K detailed language descriptions.
arXiv Detail & Related papers (2024-10-31T19:37:47Z) - UW-SDF: Exploiting Hybrid Geometric Priors for Neural SDF Reconstruction from Underwater Multi-view Monocular Images [63.32490897641344]
We propose a framework for reconstructing target objects from multi-view underwater images based on neural SDF.
We introduce hybrid geometric priors to optimize the reconstruction process, markedly enhancing the quality and efficiency of neural SDF reconstruction.
arXiv Detail & Related papers (2024-10-10T16:33:56Z) - Diving into Underwater: Segment Anything Model Guided Underwater Salient Instance Segmentation and A Large-scale Dataset [60.14089302022989]
Underwater vision tasks often suffer from low segmentation accuracy due to complex underwater conditions.
We construct the first large-scale underwater salient instance segmentation dataset (USIS10K).
We propose an Underwater Salient Instance Segmentation architecture based on the Segment Anything Model (USIS-SAM) specifically for the underwater domain.
arXiv Detail & Related papers (2024-06-10T06:17:33Z) - Scalable Semantic 3D Mapping of Coral Reefs with Deep Learning [4.8902950939676675]
This paper presents a new paradigm for mapping underwater environments from ego-motion video.
We show high-precision 3D semantic mapping at unprecedented scale with significantly reduced labor costs.
Our approach significantly scales up coral reef monitoring by taking a leap towards fully automatic analysis of video transects.
arXiv Detail & Related papers (2023-09-22T11:35:10Z) - Delving Deeper into Data Scaling in Masked Image Modeling [145.36501330782357]
We conduct an empirical study on the scaling capability of masked image modeling (MIM) methods for visual recognition.
Specifically, we utilize the web-collected Coyo-700M dataset.
Our goal is to investigate how the performance changes on downstream tasks when scaling with different sizes of data and models.
arXiv Detail & Related papers (2023-05-24T15:33:46Z) - Guided deep learning by subaperture decomposition: ocean patterns from SAR imagery [36.922471841100176]
Sentinel-1 SAR wave mode vignettes have made it possible to capture many important oceanic and atmospheric phenomena since 2014.
In this study, we propose to apply subaperture decomposition as a preprocessing stage for SAR deep learning models.
arXiv Detail & Related papers (2022-04-09T09:49:05Z) - Highly Accurate Dichotomous Image Segmentation [139.79513044546]
A new task called dichotomous image segmentation (DIS) aims to segment objects from natural images with very high accuracy.
We collect the first large-scale dataset, DIS5K, which contains 5,470 high-resolution (e.g., 2K, 4K or larger) images.
We also introduce a simple intermediate supervision baseline (IS-Net) using both feature-level and mask-level guidance for DIS model training.
arXiv Detail & Related papers (2022-03-06T20:09:19Z) - FathomNet: A global underwater image training set for enabling artificial intelligence in the ocean [0.0]
Ocean-going platforms are integrating high-resolution camera feeds for observation and navigation, producing a deluge of visual data.
Recent advances in machine learning enable fast, sophisticated analysis of visual data, but have had limited success in the oceanographic world.
We will demonstrate how machine learning models trained on FathomNet data can be applied across different institutional video data.
arXiv Detail & Related papers (2021-09-29T18:08:42Z) - Object Detection in Aerial Images: A Large-Scale Benchmark and Challenges [124.48654341780431]
We present a large-scale dataset of Object deTection in Aerial images (DOTA) and comprehensive baselines for ODAI.
The proposed DOTA dataset contains 1,793,658 object instances across 18 categories, annotated with oriented bounding boxes, collected from 11,268 aerial images.
We build baselines covering 10 state-of-the-art algorithms with over 70 configurations, where the speed and accuracy performances of each model have been evaluated.
arXiv Detail & Related papers (2021-02-24T11:20:55Z) - Deep learning for lithological classification of carbonate rock micro-CT images [52.77024349608834]
This work presents an application of deep learning techniques to identify patterns in Brazilian pre-salt carbonate rock microtomographic images.
Four convolutional neural network models are proposed.
In terms of accuracy, Model 2, trained on resized images, achieved the best results, reaching an average of 75.54% for the first evaluation approach and an average of 81.33% for the second.
arXiv Detail & Related papers (2020-07-30T19:14:00Z) - FathomNet: An underwater image training database for ocean exploration and discovery [0.0]
FathomNet is a novel baseline image training set optimized to accelerate development of modern, intelligent, and automated analysis of underwater imagery.
To date, there are more than 80,000 images and 106,000 localizations for 233 different classes, including midwater and benthic organisms.
While we obtain promising prediction results on this new dataset, our findings indicate that a larger dataset is ultimately needed for ocean exploration.
arXiv Detail & Related papers (2020-06-30T21:23:06Z) - Semantic Segmentation of Underwater Imagery: Dataset and Benchmark [13.456412091502527]
We present the first large-scale dataset for semantic analysis of Underwater IMagery (SUIM).
It contains over 1500 images with pixel annotations for eight object categories: fish (vertebrates), reefs (invertebrates), aquatic plants, wrecks/ruins, human divers, robots, and sea-floor.
We also present a benchmark evaluation of state-of-the-art semantic segmentation approaches based on standard performance metrics.
arXiv Detail & Related papers (2020-04-02T19:53:14Z)
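Several of the segmentation papers above, including the SUIM benchmark, evaluate models with standard performance metrics. As a point of reference, here is a minimal sketch of per-class intersection-over-union (IoU) and mean IoU computed from integer label masks; the eight-class toy example and array shapes are assumptions for illustration only.

```python
# Minimal per-class IoU / mIoU sketch (NumPy). Assumes predictions and ground
# truth are integer label masks of the same shape, with classes 0..n_classes-1.
import numpy as np

def iou_per_class(pred: np.ndarray, target: np.ndarray, n_classes: int) -> np.ndarray:
    ious = np.full(n_classes, np.nan)
    for c in range(n_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union > 0:  # leave NaN for classes absent from both masks
            ious[c] = np.logical_and(pred_c, target_c).sum() / union
    return ious

def mean_iou(pred: np.ndarray, target: np.ndarray, n_classes: int) -> float:
    return float(np.nanmean(iou_per_class(pred, target, n_classes)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy eight-class example (SUIM uses eight object categories).
    gt = rng.integers(0, 8, size=(256, 256))
    pred = gt.copy()
    pred[rng.random(gt.shape) < 0.1] = rng.integers(0, 8)  # corrupt ~10% of pixels
    print("mIoU:", mean_iou(pred, gt, n_classes=8))
```

Benchmark implementations typically accumulate intersection and union counts over the whole test set before taking the ratio, rather than averaging per-image IoU values as this toy example does.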
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality or accuracy of this information and is not responsible for any consequences arising from its use.