SafeSea: Synthetic Data Generation for Adverse & Low Probability
  Maritime Conditions
- URL: http://arxiv.org/abs/2311.14764v1
- Date: Fri, 24 Nov 2023 01:10:12 GMT
- Title: SafeSea: Synthetic Data Generation for Adverse & Low Probability
  Maritime Conditions
- Authors: Martin Tran, Jordan Shipard, Hermawan Mulyono, Arnold Wiliem, Clinton
  Fookes
- Abstract summary: We introduce SafeSea, a stepping stone towards transforming actual sea images into scenes with various Sea State backgrounds while retaining maritime objects.
This approach reduces the time and effort required to create synthetic datasets for training maritime object detection models.
- Score: 24.312671086207228
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract:   High-quality training data is essential for enhancing the robustness of
object detection models. Within the maritime domain, obtaining a diverse real
image dataset is particularly challenging due to the difficulty of capturing
sea images containing maritime objects, especially in stormy
conditions. These challenges arise due to resource limitations, in addition to
the unpredictable appearance of maritime objects. Nevertheless, acquiring data
from stormy conditions is essential for training effective maritime detection
models, particularly for search and rescue, where real-world conditions can be
unpredictable. In this work, we introduce SafeSea, a stepping stone towards
transforming actual sea images into scenes with various Sea State backgrounds
while retaining the maritime objects they contain. Compared to existing generative methods such as
Stable Diffusion Inpainting~\cite{stableDiffusion}, this approach reduces the
time and effort required to create synthetic datasets for training maritime
object detection models. The proposed method uses two automated filters so that
only generated images meeting the criteria are kept. In particular, the first
filter classifies the sea condition of a generated image according to its Sea
State level, and the second checks whether the objects from the input image are
still preserved. This
method enabled the creation of the SafeSea dataset, offering diverse weather
condition backgrounds to supplement the training of maritime models. Lastly, we
observed that a maritime object detection model faced challenges in detecting
objects in stormy sea backgrounds, emphasizing the impact of weather conditions
on detection accuracy. The code and dataset are available at
https://github.com/martin-3240/SafeSea.
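To make the two-filter idea in the abstract concrete, the following is a minimal Python sketch of how such a screening step could be wired together. The function names (classify_sea_state, detect_objects), the IoU-based preservation check, and the threshold value are illustrative assumptions rather than the authors' actual implementation, which is available in the linked repository.

# Hypothetical sketch of SafeSea-style filtering: keep a generated image only if
# (1) its background matches the requested Sea State level, and
# (2) the maritime objects from the source image are still detectable.
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def passes_filters(generated_img,
                   source_boxes: List[Box],
                   target_sea_state: int,
                   classify_sea_state: Callable,
                   detect_objects: Callable,
                   iou_thresh: float = 0.5) -> bool:
    """Return True only if the generated image passes both filters.

    classify_sea_state(img) -> int   : predicted Sea State level (assumed model)
    detect_objects(img) -> List[Box] : detected maritime object boxes (assumed model)
    """
    # Filter 1: the synthesized background must match the requested Sea State.
    if classify_sea_state(generated_img) != target_sea_state:
        return False
    # Filter 2: every object from the input image must still be found nearby.
    detected = detect_objects(generated_img)
    return all(
        any(iou(src, det) >= iou_thresh for det in detected)
        for src in source_boxes
    )

In practice the two checks would be backed by a trained Sea State classifier and a maritime object detector; images rejected by either filter are simply discarded or regenerated.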
 
      
        Related papers
        - Learning Underwater Active Perception in Simulation [51.205673783866146]
 Turbidity can jeopardise the whole mission as it may prevent correct visual documentation of the inspected structures.
Previous works have introduced methods to adapt to turbidity and backscattering.
We propose a simple yet efficient approach to enable high-quality image acquisition of assets in a broad range of water conditions.
 arXiv  Detail & Related papers  (2025-04-23T06:48:38Z)
- IBURD: Image Blending for Underwater Robotic Detection [17.217395753087157]
 IBURD generates both images of underwater debris and their pixel-level annotations.
IBURD is able to robustly blend transparent objects into arbitrary backgrounds.
 arXiv  Detail & Related papers  (2025-02-24T22:56:49Z)
- Domain Adaptation from Generated Multi-Weather Images for Unsupervised   Maritime Object Classification [34.59086771834456]
 We construct a dataset named AIMO with diverse weather conditions and balanced object categories.
We propose a novel domain adaptation approach that leverages AIMO (source domain) to address the problem of limited labeled data.
 Experimental results show that the proposed method significantly improves the classification accuracy.
 arXiv  Detail & Related papers  (2025-01-26T12:27:54Z)
- FAFA: Frequency-Aware Flow-Aided Self-Supervision for Underwater Object   Pose Estimation [65.01601309903971]
 We introduce FAFA, a Frequency-Aware Flow-Aided self-supervised framework for 6D pose estimation of unmanned underwater vehicles (UUVs).
Our framework relies solely on the 3D model and RGB images, alleviating the need for any real pose annotations or other-modality data like depths.
We evaluate the effectiveness of FAFA on common underwater object pose benchmarks and showcase significant performance improvements compared to state-of-the-art methods.
 arXiv  Detail & Related papers  (2024-09-25T03:54:01Z)
- Introducing VaDA: Novel Image Segmentation Model for Maritime Object   Segmentation Using New Dataset [3.468621550644668]
 The maritime shipping industry is undergoing rapid evolution driven by advancements in computer vision and artificial intelligence (AI).
Object recognition in maritime environments faces challenges such as light reflection, interference, intense lighting, and various weather conditions.
Existing AI recognition models and datasets have limited suitability for composing autonomous navigation systems.
 arXiv  Detail & Related papers  (2024-07-12T05:48:53Z)
- A Computer Vision Approach to Estimate the Localized Sea State [45.498315114762484]
 This research focuses on utilizing sea images in operational envelopes captured by a single stationary camera mounted on the ship bridge.
The collected images are used to train a deep learning model to automatically recognize the state of the sea based on the Beaufort scale.
 arXiv  Detail & Related papers  (2024-07-04T09:07:25Z)
- A deep learning approach for marine snow synthesis and removal [55.86191108738564]
 This paper proposes a novel method to reduce the marine snow interference using deep learning techniques.
We first synthesize realistic marine snow samples by training a Generative Adversarial Network (GAN) model.
We then train a U-Net model to perform marine snow removal as an image to image translation task.
 arXiv  Detail & Related papers  (2023-11-27T07:19:41Z)
- Camouflaged Image Synthesis Is All You Need to Boost Camouflaged
  Detection [65.8867003376637]
 We propose a framework for synthesizing camouflage data to enhance the detection of camouflaged objects in natural scenes.
Our approach employs a generative model to produce realistic camouflage images, which can be used to train existing object detection models.
Our framework outperforms the current state-of-the-art method on three datasets.
 arXiv  Detail & Related papers  (2023-08-13T06:55:05Z)
- Large-scale Detection of Marine Debris in Coastal Areas with Sentinel-2 [3.6842260407632903]
 Efforts to quantify marine pollution are often conducted with sparse and expensive beach surveys.
Satellite data of coastal areas is readily available and can be leveraged to detect aggregations of marine debris containing plastic litter.
We present a detector for marine debris built on a deep segmentation model that outputs a probability for marine debris at the pixel level.
 arXiv  Detail & Related papers  (2023-07-05T17:38:48Z)
- KOLOMVERSE: Korea open large-scale image dataset for object detection in   the maritime universe [0.5732204366512352]
 We present KOLOMVERSE, an open large-scale image dataset for object detection in the maritime domain by KRISO.
We collected 5,845 hours of video data captured from 21 territorial waters of South Korea.
The dataset has images of 3840×2160 pixels and, to our knowledge, it is by far the largest publicly available dataset for object detection in the maritime domain.
 arXiv  Detail & Related papers  (2022-06-20T16:45:12Z)
- Dual Branch Neural Network for Sea Fog Detection in Geostationary Ocean
  Color Imager [10.518441342599422]
 This paper develops a sea fog dataset (SFDD) and a dual branch sea fog detection network (DB-SFNet).
We investigate all the observed sea fog events in the Yellow Sea and the Bohai Sea from 2010 to 2020.
DB-SFNet is superior in detection performance and stability, particularly in the mixed cloud and fog areas.
 arXiv  Detail & Related papers  (2022-05-04T14:01:38Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of
  Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
 Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
 arXiv  Detail & Related papers  (2021-07-14T21:10:47Z)
- Generating Physically-Consistent Satellite Imagery for Climate   Visualizations [53.61991820941501]
 We train a generative adversarial network to create synthetic satellite imagery of future flooding and reforestation events.
A pure deep learning-based model can generate flood visualizations but hallucinates floods at locations that were not susceptible to flooding.
We publish our code and dataset for segmentation guided image-to-image translation in Earth observation.
 arXiv  Detail & Related papers  (2021-04-10T15:00:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     