APGNet: Adaptive Prior-Guided Network for Underwater Camouflaged Object Detection
- URL: http://arxiv.org/abs/2510.12056v1
- Date: Tue, 14 Oct 2025 01:51:44 GMT
- Title: APGNet: Adaptive Prior-Guided Network for Underwater Camouflaged Object Detection
- Authors: Xinxin Huang, Han Sun, Junmin Cai, Ningzhong Liu, Huiyu Zhou
- Abstract summary: We propose an Adaptive Prior-Guided Network (APGNet) to detect camouflaged objects in underwater environments. APGNet integrates a Siamese architecture with a novel prior-guided mechanism to enhance robustness and detection accuracy. Our proposed APGNet outperforms 15 state-of-the-art methods under widely used evaluation metrics.
- Score: 22.097955383220143
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting camouflaged objects in underwater environments is crucial for marine ecological research and resource exploration. However, existing methods face two key challenges: underwater image degradation, including low contrast and color distortion, and the natural camouflage of marine organisms. Traditional image enhancement techniques struggle to restore critical features in degraded images, while camouflaged object detection (COD) methods developed for terrestrial scenes often fail to adapt to underwater environments due to the lack of consideration for underwater optical characteristics. To address these issues, we propose APGNet, an Adaptive Prior-Guided Network, which integrates a Siamese architecture with a novel prior-guided mechanism to enhance robustness and detection accuracy. First, we employ the Multi-Scale Retinex with Color Restoration (MSRCR) algorithm for data augmentation, generating illumination-invariant images to mitigate degradation effects. Second, we design an Extended Receptive Field (ERF) module combined with a Multi-Scale Progressive Decoder (MPD) to capture multi-scale contextual information and refine feature representations. Furthermore, we propose an adaptive prior-guided mechanism that hierarchically fuses position and boundary priors by embedding spatial attention in high-level features for coarse localization and using deformable convolution to refine contours in low-level features. Extensive experimental results on two public MAS datasets demonstrate that our proposed APGNet outperforms 15 state-of-the-art methods under widely used evaluation metrics.
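The MSRCR augmentation step the abstract describes is a classical algorithm, so it can be sketched independently of the paper. The following is an illustrative NumPy reimplementation, not the authors' code; the scale, `alpha`, and `beta` values are common defaults from the Retinex literature, not settings reported in the paper.

```python
import numpy as np

def gaussian_blur(channel, sigma):
    """Separable Gaussian blur of a 2-D array via 1-D convolutions."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    conv = lambda v: np.convolve(np.pad(v, radius, mode="edge"), k, mode="valid")
    rows = np.apply_along_axis(conv, 1, channel)   # blur along rows
    return np.apply_along_axis(conv, 0, rows)      # then along columns

def msrcr(img, sigmas=(15, 80, 250), alpha=125.0, beta=46.0):
    """Multi-Scale Retinex with Color Restoration on an HxWx3 uint8 image."""
    img = img.astype(np.float64) + 1.0             # avoid log(0)
    # multi-scale retinex: mean of log(I) - log(Gaussian(I)) over the scales
    msr = np.zeros_like(img)
    for s in sigmas:
        for c in range(3):
            msr[..., c] += np.log(img[..., c]) - np.log(gaussian_blur(img[..., c], s))
    msr /= len(sigmas)
    # color restoration factor: weight each channel by its share of total intensity
    crf = beta * (np.log(alpha * img) - np.log(img.sum(axis=2, keepdims=True)))
    out = msr * crf
    # linear stretch to the displayable range
    out = (out - out.min()) / (out.max() - out.min() + 1e-8) * 255.0
    return out.astype(np.uint8)
```

Because the log-ratio cancels slowly varying illumination, the output is approximately illumination-invariant, which is what makes it useful as the augmentation described above.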
Related papers
- High-Resolution Underwater Camouflaged Object Detection: GBU-UCOD Dataset and Topology-Aware and Frequency-Decoupled Networks [32.76569239634241]
We propose a novel framework that integrates topology-aware modeling with frequency-decoupled perception. DeepTopo-Net achieves state-of-the-art performance, particularly in preserving morphological integrity of complex underwater patterns.
arXiv Detail & Related papers (2026-02-03T14:41:27Z) - DACA-Net: A Degradation-Aware Conditional Diffusion Network for Underwater Image Enhancement [16.719513778795367]
Underwater images typically suffer from severe colour distortions, low visibility, and reduced structural clarity due to complex optical effects such as scattering and absorption. Existing enhancement methods often struggle to adaptively handle diverse degradation conditions and fail to leverage underwater-specific physical priors effectively. We propose a degradation-aware conditional diffusion model to enhance underwater images adaptively and robustly.
arXiv Detail & Related papers (2025-07-30T09:16:07Z) - RUSplatting: Robust 3D Gaussian Splatting for Sparse-View Underwater Scene Reconstruction [9.070464075411472]
This paper presents an enhanced Gaussian Splatting-based framework that improves both the visual quality and accuracy of deep underwater rendering. We propose decoupled learning for RGB channels, guided by the physics of underwater attenuation, to enable more accurate colour restoration. We release a newly collected dataset, Submerged3D, captured specifically in deep-sea environments.
arXiv Detail & Related papers (2025-05-21T16:42:15Z) - Advanced Underwater Image Quality Enhancement via Hybrid Super-Resolution Convolutional Neural Networks and Multi-Scale Retinex-Based Defogging Techniques [0.0]
The research conducts extensive experiments on real-world underwater datasets to further illustrate the efficacy of the suggested approach.
In real-time underwater applications like marine exploration, underwater robotics, and autonomous underwater vehicles, the combination of deep learning and conventional image processing techniques offers a computationally efficient framework with superior results.
arXiv Detail & Related papers (2024-10-18T08:40:26Z) - UW-SDF: Exploiting Hybrid Geometric Priors for Neural SDF Reconstruction from Underwater Multi-view Monocular Images [63.32490897641344]
We propose a framework for reconstructing target objects from multi-view underwater images based on neural SDF.
We introduce hybrid geometric priors to optimize the reconstruction process, markedly enhancing the quality and efficiency of neural SDF reconstruction.
arXiv Detail & Related papers (2024-10-10T16:33:56Z) - DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image
Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and ordinary underwater images have evident feature distribution gaps.
Our method, with higher speed and fewer parameters, still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z) - A Gated Cross-domain Collaborative Network for Underwater Object Detection [14.715181402435066]
Underwater object detection plays a significant role in aquaculture and marine environmental protection.
Several underwater image enhancement (UIE) methods have been proposed to improve the quality of underwater images.
We propose a Gated Cross-domain Collaborative Network (GCC-Net) to address the challenges of poor visibility and low contrast in underwater environments.
arXiv Detail & Related papers (2023-06-25T06:28:28Z) - PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasant underwater images has become a common concern, and the task of underwater image enhancement (UIE) has emerged to meet this need.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z) - Unpaired Overwater Image Defogging Using Prior Map Guided CycleGAN [60.257791714663725]
We propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging of images with overwater scenes.
The proposed method outperforms the state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
arXiv Detail & Related papers (2022-12-23T03:00:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.