A Generative Approach for Detection-driven Underwater Image Enhancement
- URL: http://arxiv.org/abs/2012.05990v1
- Date: Thu, 10 Dec 2020 21:33:12 GMT
- Title: A Generative Approach for Detection-driven Underwater Image Enhancement
- Authors: Chelsey Edge, Md Jahidul Islam, Christopher Morse, Junaed Sattar
- Abstract summary: We present a model that integrates generative adversarial network (GAN)-based image enhancement with a diver detection task.
Our proposed approach restructures the GAN objective function to include information from a pre-trained diver detector.
We train our network on a large dataset of scuba divers, using a state-of-the-art diver detector, and demonstrate its utility on images collected from oceanic explorations.
- Score: 19.957923413999673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce a generative model for image enhancement
specifically for improving diver detection in the underwater domain. In
particular, we present a model that integrates generative adversarial network
(GAN)-based image enhancement with the diver detection task. Our proposed
approach restructures the GAN objective function to include information from a
pre-trained diver detector, with the goal of generating images that enhance
the accuracy of the detector in adverse visual conditions. By incorporating the
detector output into both the generator and discriminator networks, our model
is able to focus on enhancing images beyond aesthetic qualities and
specifically to improve robotic detection of scuba divers. We train our network
on a large dataset of scuba divers, using a state-of-the-art diver detector,
and demonstrate its utility on images collected from oceanic explorations of
human-robot teams. Experimental evaluations demonstrate that our approach
significantly improves diver detection performance over raw, unenhanced images,
and even outperforms detection performance on the output of state-of-the-art
underwater image enhancement algorithms. Finally, we demonstrate the inference
performance of our network on embedded devices to highlight the feasibility of
operating on board mobile robotic platforms.
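The core idea of the paper, restructuring the generator objective so that a frozen, pre-trained detector's confidence on the enhanced image contributes to the loss, can be sketched as a scalar toy example. This is an illustrative assumption, not the paper's exact formulation: the function name, the weights `lambda_det` and `lambda_l1`, and the log-loss forms of each term are all hypothetical.

```python
import numpy as np

def detection_driven_generator_loss(disc_score, det_conf, l1_err,
                                    lambda_det=0.5, lambda_l1=0.25):
    """Toy scalar version of a detector-aware GAN generator objective.

    disc_score: discriminator's probability that the enhanced image is real
    det_conf:   pre-trained diver detector's confidence on the enhanced image
    l1_err:     mean absolute error between enhanced and reference images
    """
    adv = -np.log(disc_score + 1e-8)   # standard adversarial term
    det = -np.log(det_conf + 1e-8)     # rewards images the detector handles well
    return adv + lambda_det * det + lambda_l1 * l1_err

# Enhanced images that raise the detector's confidence yield a lower loss,
# steering the generator beyond purely aesthetic enhancement.
low_conf = detection_driven_generator_loss(0.8, 0.2, 0.1)
high_conf = detection_driven_generator_loss(0.8, 0.9, 0.1)
```

Since the paper feeds the detector output into both the generator and the discriminator, an analogous confidence term could be added to the discriminator's loss as well.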
Related papers
- DA-HFNet: Progressive Fine-Grained Forgery Image Detection and Localization Based on Dual Attention [12.36906630199689]
We construct a DA-HFNet forged-image dataset of text- and image-guided forgeries produced by GAN and diffusion models.
Our goal is to utilize a hierarchical progressive network to capture forged artifacts at different scales for detection and localization.
arXiv Detail & Related papers (2024-06-03T16:13:33Z)
- Research on Detection of Floating Objects in River and Lake Based on AI Intelligent Image Recognition [12.315852697312195]
This study focuses on the detection of floating objects in river and lake environments, exploring an innovative approach based on deep learning.
The proposed system has demonstrated its ability to significantly enhance the accuracy and efficiency of debris detection, thus offering a new technological avenue for water quality monitoring in rivers and lakes.
arXiv Detail & Related papers (2024-04-10T10:13:37Z)
- DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z)
- Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks, enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method produces visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
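A minimal sketch of how an attack in such adversarial training might be generated, using a generic single-step FGSM perturbation. The step size `eps` and the choice of FGSM are assumptions for illustration; the summary above does not specify CARNet's actual attack types.

```python
import numpy as np

def fgsm_perturb(image, loss_grad, eps=0.03):
    """One-step sign attack: nudge each pixel along the sign of some loss
    gradient (visual- or perception-driven), then clip to the valid range."""
    return np.clip(image + eps * np.sign(loss_grad), 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((4, 4))             # stand-in for an underwater image patch
grad = rng.standard_normal((4, 4))   # stand-in for a backpropagated loss gradient
adv = fgsm_perturb(img, grad)
```

Training the enhancement network on such perturbed inputs alongside clean ones is the standard way to make it resilient to the attacks it will see at test time.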
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and raw underwater images have evident feature-distribution gaps.
Our method, with higher speed and fewer parameters, still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z)
- WaterFlow: Heuristic Normalizing Flow for Underwater Image Enhancement and Beyond [52.27796682972484]
Existing underwater image enhancement methods mainly focus on image quality improvement, ignoring their effect on practical downstream tasks.
We propose a normalizing flow for detection-driven underwater image enhancement, dubbed WaterFlow.
For differentiability and interpretability, we incorporate the prior into the data-driven mapping procedure.
arXiv Detail & Related papers (2023-08-02T04:17:35Z)
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing underwater images has become a common concern, and the task of underwater image enhancement (UIE) has emerged in response.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
- GAMMA: Generative Augmentation for Attentive Marine Debris Detection [0.0]
We propose an efficient generative augmentation approach to address the scarcity of underwater debris data for visual detection.
We use CycleGAN as a data augmentation technique to convert openly available, abundant images of terrestrial plastic into underwater-style images.
We also propose a novel architecture for underwater debris detection using an attention mechanism.
arXiv Detail & Related papers (2022-12-07T16:30:51Z)
- Semantic-aware Texture-Structure Feature Collaboration for Underwater Image Enhancement [58.075720488942125]
Underwater image enhancement has become an active topic as a key technology in marine engineering and aquatic robotics.
We develop an efficient and compact enhancement network in collaboration with a high-level semantic-aware pretrained model.
We also apply the proposed algorithm to the underwater salient object detection task to reveal the favorable semantic-aware ability for high-level vision tasks.
arXiv Detail & Related papers (2022-11-19T07:50:34Z)
- Perceptual underwater image enhancement with deep learning and physical priors [35.37760003463292]
We propose two perceptual enhancement models, each of which uses a deep enhancement model with a detection perceptor.
Due to the lack of training data, a hybrid underwater image synthesis model, which fuses physical priors and data-driven cues, is proposed to synthesize training data.
Experimental results show the superiority of our proposed method over several state-of-the-art methods on both real-world and synthetic underwater datasets.
arXiv Detail & Related papers (2020-08-21T22:11:34Z)
- A Benchmark dataset for both underwater image enhancement and underwater object detection [34.25890702670983]
We provide a large-scale underwater object detection dataset with both bounding box annotations and high quality reference images.
The OUC dataset provides a platform to comprehensively study the influence of underwater image enhancement algorithms on the underwater object detection task.
arXiv Detail & Related papers (2020-06-29T03:12:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.