A Generative Data Framework with Authentic Supervision for Underwater Image Restoration and Enhancement
- URL: http://arxiv.org/abs/2511.14521v1
- Date: Tue, 18 Nov 2025 14:20:17 GMT
- Title: A Generative Data Framework with Authentic Supervision for Underwater Image Restoration and Enhancement
- Authors: Yufeng Tian, Yifan Chen, Zhe Sun, Libang Chen, Mingyu Dou, Jijun Lu, Ye Zheng, Xuelong Li
- Abstract summary: We develop a generative data framework based on unpaired image-to-image translation. The framework constructs synthetic datasets with precise ground-truth labels. Experiments show that models trained on our synthetic data achieve comparable or superior color restoration and generalization performance to those trained on existing benchmarks.
- Score: 51.382274157144714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underwater image restoration and enhancement are crucial for correcting color distortion and restoring image details, thereby establishing a fundamental basis for subsequent underwater visual tasks. However, current deep learning methodologies in this area are frequently constrained by the scarcity of high-quality paired datasets. Since it is difficult to obtain pristine reference labels in underwater scenes, existing benchmarks often rely on manually selected results from enhancement algorithms, providing debatable reference images that lack globally consistent color and authentic supervision. This limits the model's capabilities in color restoration, image enhancement, and generalization. To overcome this limitation, we propose using in-air natural images as unambiguous reference targets and translating them into underwater-degraded versions, thereby constructing synthetic datasets that provide authentic supervision signals for model learning. Specifically, we establish a generative data framework based on unpaired image-to-image translation, producing a large-scale dataset that covers 6 representative underwater degradation types. The framework constructs synthetic datasets with precise ground-truth labels, which facilitate the learning of an accurate mapping from degraded underwater images to their pristine scene appearances. Extensive quantitative and qualitative experiments across 6 representative network architectures and 3 independent test sets show that models trained on our synthetic data achieve comparable or superior color restoration and generalization performance to those trained on existing benchmarks. This research provides a reliable and scalable data-driven solution for underwater image restoration and enhancement. The generated dataset is publicly available at: https://github.com/yftian2025/SynUIEDatasets.git.
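The paper's actual framework relies on unpaired GAN-based image-to-image translation, which is too large to reproduce here. As a rough intuition for what "translating in-air natural images into underwater-degraded versions" means, the sketch below applies a simplified Beer-Lambert attenuation model with veiling light. The `degrade_underwater` helper, the depth value, and all coefficients are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def degrade_underwater(img, depth=5.0,
                       atten=(0.35, 0.07, 0.04),
                       backlight=(0.05, 0.35, 0.45)):
    """Apply a simplified Beer-Lambert underwater degradation to an in-air RGB image.

    img:       float array in [0, 1], shape (H, W, 3).
    depth:     assumed scene distance in metres (hypothetical value).
    atten:     per-channel attenuation coefficients (R, G, B); red is absorbed fastest.
    backlight: veiling-light colour mixed in as transmission drops.
    """
    t = np.exp(-np.asarray(atten) * depth)           # per-channel transmission
    degraded = img * t + np.asarray(backlight) * (1.0 - t)
    return np.clip(degraded, 0.0, 1.0)

# A flat grey in-air patch takes on the characteristic blue-green cast.
air = np.full((4, 4, 3), 0.8)
wet = degrade_underwater(air)
```

Because the original in-air image serves as the ground-truth label, any model trained on such pairs learns the mapping from `wet` back to `air`; the paper replaces this hand-written physics with learned translation covering 6 degradation types.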
Related papers
- DACA-Net: A Degradation-Aware Conditional Diffusion Network for Underwater Image Enhancement [16.719513778795367]
Underwater images typically suffer from severe colour distortions, low visibility, and reduced structural clarity due to complex optical effects such as scattering and absorption. Existing enhancement methods often struggle to adaptively handle diverse degradation conditions and fail to leverage underwater-specific physical priors effectively. We propose a degradation-aware conditional diffusion model to enhance underwater images adaptively and robustly.
arXiv Detail & Related papers (2025-07-30T09:16:07Z) - AerialMegaDepth: Learning Aerial-Ground Reconstruction and View Synthesis [57.249817395828174]
We propose a scalable framework combining pseudo-synthetic renderings from 3D city-wide meshes with real, ground-level crowd-sourced images. The pseudo-synthetic data simulates a wide range of aerial viewpoints, while the real, crowd-sourced images help improve visual fidelity for ground-level images. Using this hybrid dataset, we fine-tune several state-of-the-art algorithms and achieve significant improvements on real-world, zero-shot aerial-ground tasks.
arXiv Detail & Related papers (2025-04-17T17:57:05Z) - Bridging Knowledge Gap Between Image Inpainting and Large-Area Visible Watermark Removal [57.84348166457113]
We introduce a novel feature adapting framework that leverages the representation capacity of a pre-trained image inpainting model. Our approach bridges the knowledge gap between image inpainting and watermark removal by fusing information of the residual background content beneath watermarks into the inpainting backbone model. To relieve the dependence on high-quality watermark masks, we introduce a new training paradigm that utilizes coarse watermark masks to guide the inference process.
arXiv Detail & Related papers (2025-04-07T02:37:14Z) - UniUIR: Considering Underwater Image Restoration as An All-in-One Learner [62.65503609562905]
We propose a Universal Underwater Image Restoration method, termed UniUIR. To decouple degradation-specific issues and explore the inter-correlations among various degradations in the UIR task, we designed the Mamba Mixture-of-Experts module. This module extracts degradation prior information in both spatial and frequency domains, and adaptively selects the most appropriate task-specific prompts.
arXiv Detail & Related papers (2025-01-22T16:10:42Z) - Underwater Image Restoration Through a Prior Guided Hybrid Sense Approach and Extensive Benchmark Analysis [37.544713547176855]
The framework operates on multiple scales, employing the proposed Detail Restorer module to restore low-level detailed features. We construct a benchmark using paired training data from three real-world underwater datasets. We tested 14 traditional and retrained 23 existing deep learning underwater image restoration methods on this benchmark, obtaining metric results for each approach.
arXiv Detail & Related papers (2025-01-06T01:06:37Z) - PHISWID: Physics-Inspired Underwater Image Dataset Synthesized from RGB-D Images [9.117162374919715]
This paper introduces the physics-inspired synthesized underwater image dataset (PHISWID), a dataset tailored for enhancing underwater image processing through physics-inspired image synthesis. Our dataset contributes to the development of underwater image processing.
arXiv Detail & Related papers (2024-04-05T10:23:10Z) - PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing underwater images has become a common concern, and the task of underwater image enhancement (UIE) has emerged to meet this need.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z) - MetaUE: Model-based Meta-learning for Underwater Image Enhancement [25.174894007563374]
This paper proposes a model-based deep learning method for restoring clean images under various underwater scenarios.
The meta-learning strategy is used to obtain a pre-trained model on the synthetic underwater dataset.
The model is then fine-tuned on real underwater datasets to obtain a reliable underwater image enhancement model, called MetaUE.
arXiv Detail & Related papers (2023-03-12T02:38:50Z) - Adaptive deep learning framework for robust unsupervised underwater image enhancement [3.0516727053033392]
One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data. We propose a novel unsupervised underwater image enhancement framework that employs a conditional variational autoencoder (cVAE) to train a deep learning model. We show that our proposed framework yields competitive performance compared to other state-of-the-art approaches in quantitative as well as qualitative metrics.
arXiv Detail & Related papers (2022-12-18T01:07:20Z) - Underwater Image Restoration via Contrastive Learning and a Real-world Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.