A Gated Cross-domain Collaborative Network for Underwater Object
Detection
- URL: http://arxiv.org/abs/2306.14141v1
- Date: Sun, 25 Jun 2023 06:28:28 GMT
- Title: A Gated Cross-domain Collaborative Network for Underwater Object
Detection
- Authors: Linhui Dai, Hong Liu, Pinhao Song, Mengyuan Liu
- Abstract summary: Underwater object detection plays a significant role in aquaculture and marine environmental protection.
Several underwater image enhancement (UIE) methods have been proposed to improve the quality of underwater images.
We propose a Gated Cross-domain Collaborative Network (GCC-Net) to address the challenges of poor visibility and low contrast in underwater environments.
- Score: 14.715181402435066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underwater object detection (UOD) plays a significant role in aquaculture and
marine environmental protection. Considering the challenges posed by low
contrast and low-light conditions in underwater environments, several
underwater image enhancement (UIE) methods have been proposed to improve the
quality of underwater images. However, only using the enhanced images does not
improve the performance of UOD, since it may unavoidably remove or alter
critical patterns and details of underwater objects. In contrast, we believe
that exploring the complementary information from the two domains is beneficial
for UOD. The raw image preserves the natural characteristics of the scene and
texture information of the objects, while the enhanced image improves the
visibility of underwater objects. Based on this perspective, we propose a Gated
Cross-domain Collaborative Network (GCC-Net) to address the challenges of poor
visibility and low contrast in underwater environments, which comprises three
dedicated components. Firstly, a real-time UIE method is employed to generate
enhanced images, which can improve the visibility of objects in low-contrast
areas. Secondly, a cross-domain feature interaction module is introduced to
facilitate the interaction and mine complementary information between raw and
enhanced image features. Thirdly, to prevent contamination from unreliable
generated results, a gated feature fusion module is proposed to adaptively
control the fusion ratio of cross-domain information. Our method presents a new
UOD paradigm from the perspective of cross-domain information interaction and
fusion. Experimental results demonstrate that the proposed GCC-Net achieves
state-of-the-art performance on four underwater datasets.
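To make the three-component design above more concrete, the sketch below shows one possible way the cross-domain feature interaction and gated fusion steps could be wired together in PyTorch. It is a minimal illustration under assumed layer choices and feature shapes, not the authors' GCC-Net implementation; the module names `CrossDomainInteraction` and `GatedFeatureFusion` and their internals are placeholders.

```python
# Illustrative sketch only: a minimal re-imagining of combining raw-image and
# enhanced-image features with a learned gate. Layer choices and shapes are
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class CrossDomainInteraction(nn.Module):
    """Mixes raw and enhanced features so each domain can borrow cues from the other."""

    def __init__(self, channels: int):
        super().__init__()
        # A shared projection over the concatenated domains stands in for
        # whatever interaction mechanism the paper actually uses.
        self.mix = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat_raw: torch.Tensor, feat_enh: torch.Tensor) -> torch.Tensor:
        return self.mix(torch.cat([feat_raw, feat_enh], dim=1))


class GatedFeatureFusion(nn.Module):
    """Learns a per-pixel ratio deciding how much enhanced-domain information to admit."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=1),
            nn.Sigmoid(),  # gate value in (0, 1)
        )

    def forward(self, feat_raw: torch.Tensor, feat_enh: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([feat_raw, feat_enh], dim=1))
        # g -> 1 trusts the enhanced domain, g -> 0 falls back to the raw domain.
        return g * feat_enh + (1.0 - g) * feat_raw


if __name__ == "__main__":
    raw_feat = torch.randn(2, 256, 32, 32)   # backbone features of the raw image
    enh_feat = torch.randn(2, 256, 32, 32)   # backbone features of the UIE-enhanced image
    interacted = CrossDomainInteraction(256)(raw_feat, enh_feat)
    fused = GatedFeatureFusion(256)(raw_feat, interacted)
    print(fused.shape)  # torch.Size([2, 256, 32, 32])
```

The design point the gate captures is the one stated in the abstract: when the enhanced image is unreliable, its contribution can be down-weighted instead of being fused unconditionally.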
Related papers
- Separated Attention: An Improved Cycle GAN Based Under Water Image Enhancement Method [0.0]
We utilize the cycle-consistent learning technique of the state-of-the-art Cycle GAN model with a modified loss function.
We trained the Cycle GAN model with the modified loss functions on the benchmarked Enhancing Underwater Visual Perception dataset.
The enhanced images yield better results than conventional models and further benefit underwater navigation, pose estimation, saliency prediction, object detection, and tracking.
arXiv Detail & Related papers (2024-04-11T11:12:06Z) - DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image
Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z) - Toward Sufficient Spatial-Frequency Interaction for Gradient-aware
Underwater Image Enhancement [5.553172974022233]
We develop a novel underwater image enhancement (UIE) framework based on spatial-frequency interaction and gradient maps.
Experimental results on two real-world underwater image datasets show that our approach can successfully enhance underwater images.
arXiv Detail & Related papers (2023-09-08T02:58:17Z) - Dual Adversarial Resilience for Collaborating Robust Underwater Image
Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks, enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z) - Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly images (DFUI) and underwater images exhibit evident feature distribution gaps.
With higher speed and fewer parameters, our method still outperforms transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z) - PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with
Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing images has become a common concern, and the task of underwater image enhancement (UIE) has emerged to meet this need.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z) - Unpaired Overwater Image Defogging Using Prior Map Guided CycleGAN [60.257791714663725]
We propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging images of overwater scenes.
The proposed method outperforms the state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
arXiv Detail & Related papers (2022-12-23T03:00:28Z) - Domain Adaptation for Underwater Image Enhancement via Content and Style
Separation [7.077978580799124]
Underwater images suffer from color cast, low contrast, and haze due to light absorption, refraction, and scattering.
Recent learning-based methods demonstrate astonishing performance on underwater image enhancement.
We propose a domain adaptation framework for underwater image enhancement via content and style separation.
arXiv Detail & Related papers (2022-02-17T09:30:29Z) - Domain Adaptation for Underwater Image Enhancement [51.71570701102219]
We propose a novel Two-phase Underwater Domain Adaptation network (TUDA) to minimize the inter-domain and intra-domain gap.
In the first phase, a new dual-alignment network is designed, including a translation part for enhancing realism of input images, followed by an enhancement part.
In the second phase, we perform an easy-hard classification of real data according to the assessed quality of enhanced images, where a rank-based underwater quality assessment method is embedded.
arXiv Detail & Related papers (2021-08-22T06:38:19Z) - Domain Adaptive Adversarial Learning Based on Physics Model Feedback for
Underwater Image Enhancement [10.143025577499039]
We propose a new robust adversarial learning framework for enhancing underwater images, built on physics-model-based feedback control and a domain adaptation mechanism.
A new method is proposed for simulating an underwater-like training dataset from RGB-D data using an underwater image formation model.
Final enhanced results on synthetic and real underwater images demonstrate the superiority of the proposed method.
arXiv Detail & Related papers (2020-02-20T07:50:00Z)