Physics-Inspired Synthesized Underwater Image Dataset
- URL: http://arxiv.org/abs/2404.03998v1
- Date: Fri, 5 Apr 2024 10:23:10 GMT
- Title: Physics-Inspired Synthesized Underwater Image Dataset
- Authors: Reina Kaneko, Hiroshi Higashi, Yuichi Tanaka
- Abstract summary: PHISWID is a dataset tailored for enhancing underwater image processing through physics-inspired image synthesis.
Our results reveal that even a basic U-Net architecture, when trained with PHISWID, substantially outperforms existing methods in underwater image enhancement.
We intend to release PHISWID publicly, contributing a significant resource to the advancement of underwater imaging technology.
- Score: 9.959844922120528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces the physics-inspired synthesized underwater image dataset (PHISWID), a dataset tailored for enhancing underwater image processing through physics-inspired image synthesis. Deep learning approaches to underwater image enhancement typically demand extensive datasets, yet acquiring paired clean and degraded underwater images poses significant challenges. While several underwater image datasets have been proposed using physics-based synthesis, a publicly accessible collection has been lacking. Additionally, most underwater image synthesis approaches do not intend to reproduce atmospheric scenes, resulting in incomplete enhancement. PHISWID addresses this gap by offering a set of paired ground-truth (atmospheric) and synthetically degraded underwater images, showcasing not only color degradation but also the often-neglected effects of marine snow, a composite of organic matter and sand particles that considerably impairs underwater image clarity. The dataset applies these degradations to atmospheric RGB-D images, enhancing its realism and applicability. PHISWID is particularly valuable for training deep neural networks in a supervised learning setting and for objectively assessing image quality in benchmark analyses. Our results reveal that even a basic U-Net architecture, when trained with PHISWID, substantially outperforms existing methods in underwater image enhancement. We intend to release PHISWID publicly, contributing a significant resource to the advancement of underwater imaging technology.
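For intuition, here is a minimal sketch of the kind of physics-inspired degradation the abstract describes: it applies the standard wavelength-dependent attenuation-plus-backscatter model to an atmospheric RGB-D image and overlays a crude marine-snow pattern. The coefficients and the snow model are illustrative assumptions, not PHISWID's actual synthesis pipeline.

```python
import numpy as np

def synthesize_underwater(rgb, depth, beta=(0.35, 0.12, 0.05),
                          veiling=(0.05, 0.30, 0.45), snow_density=1e-4,
                          rng=None):
    """Degrade an atmospheric RGB-D image with the standard underwater
    image formation model I_c = J_c * exp(-beta_c d) + B_c (1 - exp(-beta_c d)),
    plus a toy marine-snow overlay. All coefficients are assumed values.

    rgb   : float array in [0, 1], shape (H, W, 3)
    depth : per-pixel distance in metres, shape (H, W)
    """
    rng = np.random.default_rng() if rng is None else rng
    t = np.exp(-np.asarray(beta) * depth[..., None])       # transmission map
    degraded = rgb * t + np.asarray(veiling) * (1.0 - t)   # attenuation + backscatter

    # Marine snow: sparse bright specks, a crude stand-in for the
    # organic-matter/sand particles the dataset models more carefully.
    h, w, _ = rgb.shape
    n = int(snow_density * h * w)
    ys, xs = rng.integers(0, h, n), rng.integers(0, w, n)
    degraded[ys, xs] = np.clip(degraded[ys, xs] + 0.8, 0.0, 1.0)
    return np.clip(degraded, 0.0, 1.0)
```

Even this toy version reproduces the characteristic loss of the red channel with distance; the dataset's contribution is doing this at scale, with marine snow modeled properly and paired atmospheric ground truth.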
Related papers
- AerialMegaDepth: Learning Aerial-Ground Reconstruction and View Synthesis [57.249817395828174]
We propose a scalable framework combining pseudo-synthetic renderings from 3D city-wide meshes with real, ground-level crowd-sourced images.
The pseudo-synthetic data simulates a wide range of aerial viewpoints, while the real, crowd-sourced images help improve visual fidelity for ground-level images.
Using this hybrid dataset, we fine-tune several state-of-the-art algorithms and achieve significant improvements on real-world, zero-shot aerial-ground tasks.
arXiv Detail & Related papers (2025-04-17T17:57:05Z) - DPF-Net: Physical Imaging Model Embedded Data-Driven Underwater Image Enhancement [2.1953477234116705]
This research presents a two-stage underwater image enhancement network called the Data-Driven and Physical Parameters Fusion Network (DPF-Net).
It harnesses the robustness of physical imaging models alongside the generality and efficiency of data-driven methods.
Our proposed DPF-Net demonstrates superior performance compared to other benchmark methods across multiple test sets.
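As a hedged illustration of fusing a physical imaging model with a data-driven branch (a generic two-branch sketch, not the authors' DPF-Net architecture):

```python
import torch
import torch.nn as nn

class PhysicsDataFusion(nn.Module):
    """Generic sketch: a physics branch estimates per-channel transmission and
    veiling light for an analytic inversion of I = J*t + A*(1-t); a data-driven
    branch refines the result with a learned residual. Layer sizes are arbitrary."""
    def __init__(self, ch=32):
        super().__init__()
        self.phys = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, 4, 3, padding=1))  # 3 t-maps + 1 veiling
        self.data = nn.Sequential(nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, x):
        p = self.phys(x)
        t = torch.sigmoid(p[:, :3]).clamp(min=1e-3)     # per-channel transmission
        a = torch.sigmoid(p[:, 3:4])                    # veiling light
        j = (x - a * (1.0 - t)) / t                     # physics-based inversion
        return j + self.data(torch.cat([x, j], dim=1))  # data-driven residual
```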
arXiv Detail & Related papers (2025-03-16T11:53:18Z) - Underwater Image Enhancement using Generative Adversarial Networks: A Survey [1.2582887633807602]
Generative Adversarial Networks (GANs) have emerged as a powerful tool for enhancing underwater photos.
GANs have been applied to real-world applications, including marine biology and ecosystem monitoring, coral reef health assessment, underwater archaeology, and autonomous underwater vehicle (AUV) navigation.
This paper explores all major approaches to underwater image enhancement, from physical and physics-free models to CNN-based models and state-of-the-art GAN-based methods.
arXiv Detail & Related papers (2025-01-10T06:41:19Z) - UW-SDF: Exploiting Hybrid Geometric Priors for Neural SDF Reconstruction from Underwater Multi-view Monocular Images [63.32490897641344]
We propose a framework for reconstructing target objects from multi-view underwater images based on neural SDF.
We introduce hybrid geometric priors to optimize the reconstruction process, markedly enhancing the quality and efficiency of neural SDF reconstruction.
arXiv Detail & Related papers (2024-10-10T16:33:56Z) - Enhancing Underwater Imaging with 4-D Light Fields: Dataset and Method [77.80712860663886]
4-D light fields (LFs) can enhance underwater imaging, which is plagued by light absorption, scattering, and other challenges.
We propose a progressive framework for underwater 4-D LF image enhancement and depth estimation.
We construct the first 4-D LF-based underwater image dataset for quantitative evaluation and supervised training of learning-based methods.
arXiv Detail & Related papers (2024-08-30T15:06:45Z) - Physics Informed and Data Driven Simulation of Underwater Images via Residual Learning [5.095097384893417]
In general, underwater images suffer from color distortion and low contrast, because light is attenuated and backscattered as it propagates through water.
An existing simple degradation model (similar to atmospheric image "hazing" effects) is not sufficient to properly represent underwater image degradation.
We propose a deep learning-based architecture to automatically simulate the underwater effects.
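For reference, the simple model the summary alludes to is the atmospheric hazing equation; the underwater variant differs mainly in its strongly wavelength-dependent attenuation, one reason the single-coefficient haze form falls short:

```latex
% Atmospheric hazing model, single attenuation coefficient \beta:
I(x) = J(x)\, t(x) + A\,\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}
% Underwater variant, per-channel attenuation, c \in \{R, G, B\}:
I_c(x) = J_c(x)\, e^{-\beta_c d(x)} + B_c\,\bigl(1 - e^{-\beta_c d(x)}\bigr)
```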
arXiv Detail & Related papers (2024-02-07T21:53:28Z) - Atlantis: Enabling Underwater Depth Estimation with Stable Diffusion [30.122666238416716]
We propose a novel pipeline for generating underwater images using accurate terrestrial depth data.
This approach facilitates the training of supervised models for underwater depth estimation.
We introduce a unique Depth2Underwater ControlNet, trained on specially prepared Underwater, Depth, Text data triplets.
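The paper's Depth2Underwater ControlNet is its own trained model; as a hedged illustration of the depth-conditioning mechanism it builds on, the public depth ControlNet in `diffusers` can be driven the same way (the checkpoints and prompt below are stand-ins, not the paper's weights):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# Public depth ControlNet as a stand-in for the paper's Depth2Underwater weights.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

depth_map = Image.open("terrestrial_depth.png")  # hypothetical depth image
out = pipe("a murky underwater scene, marine snow, green water",
           image=depth_map, num_inference_steps=30).images[0]
out.save("synthetic_underwater.png")
```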
arXiv Detail & Related papers (2023-12-19T08:56:33Z) - DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
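A generic sketch of refreshing pseudo-labels from the network's own predictions follows; the EMA blend is my assumption, standing in for DGNet's more specific dynamic-gradient formulation:

```python
import torch

@torch.no_grad()
def update_pseudo_labels(pseudo, pred, momentum=0.9):
    """Blend current predictions into the pseudo-labels so the training
    target tracks the model. Illustrative stand-in, not DGNet's exact rule."""
    return momentum * pseudo + (1.0 - momentum) * pred.detach()

# Inside a training step (sketch):
# pred = model(degraded)
# loss = criterion(pred, pseudo)   # supervised by current pseudo-labels
# loss.backward(); optimizer.step()
# pseudo = update_pseudo_labels(pseudo, pred)
```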
arXiv Detail & Related papers (2023-12-12T06:07:21Z) - Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and raw underwater images exhibit evident feature distribution gaps.
Our method, with higher speed and fewer parameters, still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z) - PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with
Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing images is a common concern, and the task of underwater image enhancement (UIE) has emerged to address it.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z) - MetaUE: Model-based Meta-learning for Underwater Image Enhancement [25.174894007563374]
This paper proposes a model-based deep learning method for restoring clean images under various underwater scenarios.
The meta-learning strategy is used to obtain a pre-trained model on the synthetic underwater dataset.
The model is then fine-tuned on real underwater datasets to obtain a reliable underwater image enhancement model, called MetaUE.
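A minimal Reptile-style sketch of the pretrain-then-finetune pattern the summary describes appears below; MetaUE's actual meta-learning objective may differ:

```python
import copy
import torch

def reptile_pretrain(model, tasks, inner_steps=5, inner_lr=1e-3, meta_lr=0.1):
    """Reptile-style meta-pretraining over synthetic underwater 'tasks'
    (e.g., different water types). Illustrative stand-in for MetaUE's
    strategy, not the paper's exact algorithm."""
    for batch_fn in tasks:                  # each task yields (input, target) batches
        fast = copy.deepcopy(model)
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            x, y = batch_fn()
            opt.zero_grad()
            torch.nn.functional.mse_loss(fast(x), y).backward()
            opt.step()
        # Move meta-parameters toward the task-adapted weights.
        with torch.no_grad():
            for p, q in zip(model.parameters(), fast.parameters()):
                p += meta_lr * (q - p)
    return model
```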
arXiv Detail & Related papers (2023-03-12T02:38:50Z) - WaterNeRF: Neural Radiance Fields for Underwater Scenes [6.161668246821327]
We advance the state of the art in neural radiance fields (NeRFs) to enable physics-informed dense depth estimation and color correction.
Our proposed method, WaterNeRF, estimates parameters of a physics-based model for underwater image formation.
We can produce novel views of degraded as well as corrected underwater images, along with dense depth of the scene.
arXiv Detail & Related papers (2022-09-27T00:53:26Z) - Underwater Image Restoration via Contrastive Learning and a Real-world Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on an unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
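A compact InfoNCE sketch of "maximize the mutual information between raw and restored images" follows; the paper's patch-wise formulation may differ:

```python
import torch
import torch.nn.functional as F

def info_nce(raw_feats, restored_feats, temperature=0.07):
    """InfoNCE lower bound on mutual information between embeddings of
    corresponding raw/restored images: matching pairs are positives,
    all other pairs in the batch are negatives. Illustrative only."""
    z1 = F.normalize(raw_feats, dim=1)       # (B, D)
    z2 = F.normalize(restored_feats, dim=1)  # (B, D)
    logits = z1 @ z2.t() / temperature       # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```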
arXiv Detail & Related papers (2021-06-20T16:06:26Z) - Generating Physically-Consistent Satellite Imagery for Climate Visualizations [53.61991820941501]
We train a generative adversarial network to create synthetic satellite imagery of future flooding and reforestation events.
A pure deep learning-based model can generate flood visualizations but hallucinates floods at locations that are not susceptible to flooding.
We publish our code and dataset for segmentation guided image-to-image translation in Earth observation.
arXiv Detail & Related papers (2021-04-10T15:00:15Z) - Domain Adaptive Adversarial Learning Based on Physics Model Feedback for Underwater Image Enhancement [10.143025577499039]
We propose a new robust adversarial learning framework for enhancing underwater images that combines physics-model-based feedback control with a domain adaptation mechanism.
A new method is proposed for simulating an underwater-like training dataset from RGB-D data using an underwater image formation model.
Final enhanced results on synthetic and real underwater images demonstrate the superiority of the proposed method.
arXiv Detail & Related papers (2020-02-20T07:50:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.