On Adversarial Robustness of Deep Image Deblurring
- URL: http://arxiv.org/abs/2210.02502v1
- Date: Wed, 5 Oct 2022 18:31:33 GMT
- Title: On Adversarial Robustness of Deep Image Deblurring
- Authors: Kanchana Vaishnavi Gandikota, Paramanand Chandramouli, Michael Moeller
- Abstract summary: This paper introduces adversarial attacks against deep learning-based image deblurring methods.
We demonstrate that imperceptible distortion can significantly degrade the performance of state-of-the-art deblurring networks.
- Score: 15.66170693813815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent approaches employ deep learning-based solutions for the recovery of a
sharp image from its blurry observation. This paper introduces adversarial
attacks against deep learning-based image deblurring methods and evaluates the
robustness of these neural networks to untargeted and targeted attacks. We
demonstrate that imperceptible distortion can significantly degrade the
performance of state-of-the-art deblurring networks, even producing drastically
different content in the output, indicating a strong need for adversarially
robust training not only in classification but also in image recovery.
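The abstract does not detail the attack construction; a common way to realize such an untargeted attack on a restoration network is projected gradient descent (PGD) on an image-space loss. The sketch below is a minimal PyTorch illustration, not the paper's exact setup: `model`, the input shape, and the budget `eps` are all assumptions.

```python
# Minimal PGD-style untargeted attack on a deblurring network (PyTorch).
# Assumptions for illustration: `model` is any pretrained deblurring network
# mapping a blurry image to a sharp estimate, and `blurry` is a [1, 3, H, W]
# tensor with values in [0, 1]. Not the paper's exact attack configuration.
import torch
import torch.nn.functional as F

def untargeted_attack(model, blurry, eps=2/255, alpha=0.5/255, steps=20):
    """Search for an L_inf-bounded perturbation that maximally degrades
    the deblurred output relative to the clean restoration."""
    model.eval()
    with torch.no_grad():
        reference = model(blurry)  # restoration of the unperturbed input
    delta = torch.empty_like(blurry).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        output = model((blurry + delta).clamp(0, 1))
        loss = F.mse_loss(output, reference)  # distance we want to *increase*
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent step
            delta.clamp_(-eps, eps)             # project back into the eps-ball
            delta.grad.zero_()
    return (blurry + delta).clamp(0, 1).detach()
```

A targeted variant would instead minimize `F.mse_loss(output, target)` for a chosen target image, steering the network toward producing drastically different content, as described above.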
Related papers
- Revisiting Min-Max Optimization Problem in Adversarial Training [0.0]
Real-world computer vision applications put the security of deep neural networks at risk.
Recent works demonstrate that convolutional neural networks are susceptible to adversarial examples.
We propose a new method to build robust deep neural networks against adversarial attacks (the standard min-max training objective behind such methods is sketched after this list).
arXiv Detail & Related papers (2024-08-20T22:31:19Z)
- Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks, enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- Training on Foveated Images Improves Robustness to Adversarial Attacks [26.472800216546233]
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial attacks.
RBlur is an image transform that simulates the loss in fidelity of peripheral vision by blurring the image and reducing its color saturation.
DNNs trained on images transformed by RBlur are substantially more robust to adversarial attacks, as well as other, non-adversarial corruptions, achieving up to 25% higher accuracy on perturbed data (a simplified version of the transform is sketched after this list).
arXiv Detail & Related papers (2023-08-01T21:40:30Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- Deep Bayesian Image Set Classification: A Defence Approach against Adversarial Attacks [32.48820298978333]
Deep neural networks (DNNs) can be fooled with high confidence by an adversary.
In practice, the vulnerability of deep learning systems to carefully perturbed images, known as adversarial examples, poses a dire security threat in physical-world applications.
We propose a robust deep Bayesian image set classification as a defence framework against a broad range of adversarial attacks.
arXiv Detail & Related papers (2021-08-23T14:52:44Z)
- Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks [104.8737334237993]
We present comprehensive investigations into the vulnerability of deep image-to-image models to adversarial attacks.
For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints.
We show that, unlike in image classification tasks, the performance degradation on image-to-image tasks can differ greatly depending on various factors.
arXiv Detail & Related papers (2021-04-30T14:20:33Z)
- Face Anti-Spoofing Via Disentangled Representation Learning [90.90512800361742]
Face anti-spoofing is crucial to the security of face recognition systems.
We propose a novel perspective of face anti-spoofing that disentangles the liveness features and content features from images.
arXiv Detail & Related papers (2020-08-19T03:54:23Z)
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes [51.31334977346847]
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly (the bit-plane decomposition is sketched after this list).
arXiv Detail & Related papers (2020-04-01T09:31:10Z)
- Single image reflection removal via learning with multi-image constraints [50.54095311597466]
We propose a novel learning-based solution that combines the advantages of the aforementioned approaches and overcomes their drawbacks.
Our algorithm works by training a deep neural network to optimize the target with joint constraints enforced across multiple input images.
Our algorithm runs in real time and achieves state-of-the-art reflection-removal performance on real images.
arXiv Detail & Related papers (2019-12-08T06:10:49Z)
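For the min-max entry above, the summary does not give the paper's specific formulation; the standard saddle-point objective that adversarial training revisits, written here as an assumption about the general setup, is:

```latex
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \left[ \max_{\|\delta\|_{\infty} \le \varepsilon}
         \mathcal{L}\bigl(f_{\theta}(x+\delta),\, y\bigr) \right]
```

The inner maximization finds the worst-case perturbation within an epsilon-ball around each input; the outer minimization fits the network parameters to those worst-case inputs.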
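For the foveated-images entry, a heavily simplified stand-in for the RBlur transform is sketched below: it applies a uniform Gaussian blur and desaturation, whereas the actual RBlur varies both effects with distance from a fixation point to mimic peripheral vision. The function name and parameters are illustrative only.

```python
# Simplified RBlur-like transform: uniform Gaussian blur plus reduced color
# saturation. The real RBlur scales both effects with eccentricity (distance
# from a fixation point); this sketch keeps them constant for brevity.
from PIL import Image, ImageEnhance, ImageFilter

def rblur_like(img: Image.Image, blur_radius: float = 2.0,
               saturation: float = 0.5) -> Image.Image:
    blurred = img.filter(ImageFilter.GaussianBlur(radius=blur_radius))
    # enhance(0.0) would be grayscale, enhance(1.0) the original colors
    return ImageEnhance.Color(blurred).enhance(saturation)
```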
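For the bit-planes entry, the decomposition itself is standard: an 8-bit pixel splits into eight binary planes, and keeping only the high planes yields the coarse, heavily quantized view the network is trained to rely on. A NumPy sketch follows; the feature-consistency loss is the paper's contribution and is only indicated in the trailing comment.

```python
import numpy as np

def bit_planes(img_u8: np.ndarray) -> list:
    """Split an 8-bit image into its 8 bit planes (k = 7 is most significant)."""
    return [(img_u8 >> k) & 1 for k in range(8)]

def keep_high_planes(img_u8: np.ndarray, n_high: int = 4) -> np.ndarray:
    """Coarse view of the image: zero out the (8 - n_high) lowest bit planes."""
    mask = np.uint8((0xFF << (8 - n_high)) & 0xFF)
    return img_u8 & mask

# Training idea from the entry above, at pseudocode level: encourage the
# network's features for `img` and for `keep_high_planes(img)` to agree, so
# predictions rest on coarse structure rather than low-bit detail that an
# imperceptible perturbation can flip.
```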
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.