Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception
- URL: http://arxiv.org/abs/2309.01102v1
- Date: Sun, 3 Sep 2023 06:52:05 GMT
- Title: Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception
- Authors: Zengxi Zhang, Zhiying Jiang, Zeru Shi, Jinyuan Liu, Risheng Liu
- Abstract summary: In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
- Score: 54.672052775549
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the uneven scattering and absorption of different light wavelengths in
aquatic environments, underwater images suffer from low visibility and pronounced
color deviations. With the advancement of autonomous underwater vehicles,
extensive research has been conducted on learning-based underwater enhancement
algorithms. These works can generate visually pleasing enhanced images and
mitigate the adverse effects of degraded images on subsequent perception tasks.
However, learning-based methods are inherently fragile to adversarial attacks,
which can cause significant disruption in their results. In this work,
we introduce a collaborative adversarial resilience network, dubbed CARNet, for
underwater image enhancement and subsequent detection tasks. Concretely, we
first introduce an invertible network with strong perturbation-perceptual
abilities to isolate attacks from underwater images, preventing interference
with image enhancement and perceptual tasks. Furthermore, we propose a
synchronized attack training strategy with both visual-driven and
perception-driven attacks, enabling the network to discern and remove various
types of attacks. Additionally, we incorporate an attack pattern discriminator
to heighten the robustness of the network against different attacks. Extensive
experiments demonstrate that the proposed method outputs visually appealing
enhanced images and achieves, on average, 6.71% higher detection mAP than
state-of-the-art methods.
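To make the synchronized attack training strategy concrete, below is a minimal PyTorch sketch of how visual-driven and perception-driven PGD attacks could be generated against an enhancement-plus-detection pipeline. The model handles, loss choices, and hyperparameters are illustrative assumptions, not CARNet's published configuration.

```python
import torch
import torch.nn.functional as F

def pgd_attack(image, loss_fn, eps=8/255, alpha=2/255, steps=10):
    """L_inf PGD: find a perturbation delta that maximizes loss_fn(image + delta)."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(image + delta)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()                  # gradient ascent step
            delta.clamp_(-eps, eps)                             # stay in the eps-ball
            delta.copy_((image + delta).clamp(0, 1) - image)    # keep pixels valid
        delta.grad.zero_()
    return delta.detach()

def synchronized_attacks(enhancer, detector, raw, clean_ref, targets, det_loss):
    # (In practice the model parameters would be frozen while crafting attacks.)
    # Visual-driven attack: push the enhanced output away from the clean reference.
    visual = pgd_attack(raw, lambda x: F.l1_loss(enhancer(x), clean_ref))
    # Perception-driven attack: degrade detection on the enhanced image instead.
    perceptual = pgd_attack(raw, lambda x: det_loss(detector(enhancer(x)), targets))
    return visual, perceptual
```

In CARNet the attacked inputs would then pass through the invertible separation network before enhancement; this sketch only covers attack generation.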
Related papers
- Unrevealed Threats: A Comprehensive Study of the Adversarial Robustness of Underwater Image Enhancement Models [16.272142350752805]
We make a first attempt to conduct adversarial attacks on five well-designed UWIE models on three common underwater image benchmark datasets.
Results show that the five models exhibit varying degrees of vulnerability to adversarial attacks, and that well-designed small perturbations on degraded images can prevent UWIE models from producing enhanced results.
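As a rough illustration of how such vulnerability could be quantified, the sketch below measures the PSNR gap between a UWIE model's outputs on clean and perturbed inputs. The metric choice and the comparison against the model's own clean output are assumptions made for illustration.

```python
import torch

def psnr(a, b, max_val=1.0):
    mse = torch.mean((a - b) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

@torch.no_grad()
def robustness_gap(uwie_model, raw, delta):
    """Lower PSNR between clean and attacked outputs means higher vulnerability."""
    clean_out = uwie_model(raw)
    attacked_out = uwie_model((raw + delta).clamp(0, 1))
    return psnr(clean_out, attacked_out)
```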
arXiv Detail & Related papers (2024-09-10T11:02:35Z)
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing images has become a common concern, and the task of underwater image enhancement (UIE) has emerged in response.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
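A hedged sketch of a dual-discriminator adversarial objective in the spirit of PUGAN: one discriminator judges global style realism while the other judges local content patches. The least-squares loss form and the style/content split are assumptions, not PUGAN's exact design.

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()  # least-squares GAN objective

def discriminator_loss(d_style, d_content, real, fake):
    loss = 0.0
    for d in (d_style, d_content):
        real_pred, fake_pred = d(real), d(fake.detach())
        loss = loss + mse(real_pred, torch.ones_like(real_pred)) \
                    + mse(fake_pred, torch.zeros_like(fake_pred))
    return loss

def generator_adv_loss(d_style, d_content, fake):
    loss = 0.0
    for d in (d_style, d_content):
        pred = d(fake)
        loss = loss + mse(pred, torch.ones_like(pred))  # fool both discriminators
    return loss
```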
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
- On Adversarial Robustness of Deep Image Deblurring [15.66170693813815]
This paper introduces adversarial attacks against deep learning-based image deblurring methods.
We demonstrate that imperceptible distortion can significantly degrade the performance of state-of-the-art deblurring networks.
arXiv Detail & Related papers (2022-10-05T18:31:33Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
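The "adversarially-aware convolution" can be pictured as a dynamic convolution whose effective kernel is mixed per input, so clean and adversarial images can route through different kernels and their gradients stay disentangled. The sketch below is a generic CondConv-style stand-in under that assumption, not RobustDet's exact layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdvAwareConv(nn.Module):
    """Mixes a bank of kernels with input-conditioned coefficients (assumed design)."""
    def __init__(self, in_ch, out_ch, k=3, n_kernels=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_kernels, out_ch, in_ch, k, k) * 0.02)
        self.router = nn.Sequential(          # predicts per-sample mixing weights
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, n_kernels), nn.Softmax(dim=-1))
        self.k = k

    def forward(self, x):
        coeff = self.router(x)                                    # (B, n_kernels)
        w = torch.einsum('bn,noikj->boikj', coeff, self.weight)   # per-sample kernel
        b, c, h, wd = x.shape
        # Grouped-conv trick: apply each sample's own kernel in one conv call.
        out = F.conv2d(x.reshape(1, b * c, h, wd),
                       w.reshape(-1, c, self.k, self.k),
                       padding=self.k // 2, groups=b)
        return out.reshape(b, -1, h, wd)
```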
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks [104.8737334237993]
We present comprehensive investigations into the vulnerability of deep image-to-image models to adversarial attacks.
For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints.
We show that, unlike in image classification tasks, the performance degradation on image-to-image tasks can vary substantially depending on various factors.
arXiv Detail & Related papers (2021-04-30T14:20:33Z)
- Single Underwater Image Restoration by Contrastive Learning [23.500647404118638]
This paper elaborates on a novel method that achieves state-of-the-art results for underwater image restoration based on the unsupervised image-to-image translation framework.
We design our method by leveraging contrastive learning and generative adversarial networks to maximize mutual information between raw and restored images.
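The mutual-information objective can be approximated with an InfoNCE loss over corresponding patches of the raw and restored images, as in CUT-style contrastive translation; the feature normalization and temperature below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_restored, feat_raw, tau=0.07):
    """feat_*: (N, D) L2-normalized features of N spatially corresponding patches."""
    logits = feat_restored @ feat_raw.t() / tau   # (N, N) similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)
    # Each restored patch should match its own raw patch (diagonal positives)
    # and repel every other patch (off-diagonal negatives).
    return F.cross_entropy(logits, labels)
```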
arXiv Detail & Related papers (2021-03-17T14:47:03Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find a correspondence between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the perspective of prediction confidence.
We propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
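A rough sketch of the two-stream idea: the image stream consumes raw pixels while the gradient stream consumes the input-gradient of the target classifier's confidence. The stream backbones, their feature dimension, and the fusion head here are hypothetical.

```python
import torch
import torch.nn as nn

class TwoStreamDetector(nn.Module):
    def __init__(self, classifier, img_stream, grad_stream, feat_dim):
        super().__init__()
        self.classifier = classifier          # target model under inspection
        self.img_stream = img_stream          # CNN over raw pixels -> (B, feat_dim)
        self.grad_stream = grad_stream        # CNN over gradient maps -> (B, feat_dim)
        self.head = nn.Linear(2 * feat_dim, 2)  # clean vs. adversarial

    def confidence_grad(self, x):
        """Pixel-wise sensitivity of the classifier's top-class confidence."""
        x = x.clone().requires_grad_(True)
        conf = self.classifier(x).softmax(-1).max(-1).values.sum()
        return torch.autograd.grad(conf, x)[0]

    def forward(self, x):
        g = self.confidence_grad(x)
        feats = torch.cat([self.img_stream(x), self.grad_stream(g)], dim=-1)
        return self.head(feats)
```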
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Perception Improvement for Free: Exploring Imperceptible Black-box Adversarial Attacks on Image Classification [27.23874129994179]
White-box adversarial attacks can fool neural networks with small perturbations, especially on large images.
Keeping successful adversarial perturbations imperceptible is especially challenging for transfer-based black-box adversarial attacks.
We propose structure-aware adversarial attacks by generating adversarial images based on psychological perceptual models.
arXiv Detail & Related papers (2020-10-30T07:17:12Z)
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes [51.31334977346847]
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly.
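As a minimal sketch of the bit-plane view: an image in [0, 1] can be quantized to keep only its higher bit planes, and a consistency term can pull the representations of the full and quantized images together. The number of retained planes and the MSE form are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def keep_high_bits(x, bits=4):
    """Zero out the (8 - bits) lowest bit planes of an 8-bit image scaled to [0, 1]."""
    q = (x * 255).to(torch.uint8)
    mask = (0xFF << (8 - bits)) & 0xFF   # e.g. bits=4 -> 0xF0
    return (q & mask).float() / 255.0

def bitplane_consistency(features, x, bits=4):
    # Representations of the full image and its coarse (high-bit-plane) version
    # are encouraged to agree, in the spirit of the consistency described above.
    return F.mse_loss(features(x), features(keep_high_bits(x, bits)))
```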
arXiv Detail & Related papers (2020-04-01T09:31:10Z)