Towards Adversarially Robust Deep Image Denoising
- URL: http://arxiv.org/abs/2201.04397v2
- Date: Thu, 13 Jan 2022 06:00:04 GMT
- Title: Towards Adversarially Robust Deep Image Denoising
- Authors: Hanshu Yan, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, Vincent Y. F. Tan
- Abstract summary: This work systematically investigates the adversarial robustness of deep image denoisers (DIDs).
We propose a novel adversarial attack, namely the Observation-based Zero-mean Attack (ObsAtk), to craft adversarial zero-mean perturbations on given noisy images.
To robustify DIDs, we propose hybrid adversarial training (HAT), which jointly trains DIDs with adversarial and non-adversarial noisy data.
- Score: 199.2458715635285
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work systematically investigates the adversarial robustness of deep
image denoisers (DIDs), i.e., how well DIDs can recover the ground truth from
noisy observations degraded by adversarial perturbations. Firstly, to evaluate
DIDs' robustness, we propose a novel adversarial attack, namely the
Observation-based Zero-mean Attack (ObsAtk), to craft adversarial zero-mean
perturbations on given noisy images. We find that existing DIDs are vulnerable
to the adversarial noise generated by ObsAtk. Secondly, to robustify DIDs, we
propose an adversarial training strategy, hybrid adversarial training (HAT),
that jointly trains DIDs with adversarial and non-adversarial noisy data to
ensure that the reconstruction quality is high and the denoisers are locally
smooth around non-adversarial data. The resultant DIDs can effectively remove
various types of synthetic and adversarial noise. We also uncover that the
robustness of DIDs benefits their generalization capability on unseen
real-world noise. Indeed, HAT-trained DIDs can recover high-quality clean
images from real-world noise even without training on real noisy data.
Extensive experiments on benchmark datasets, including Set68, PolyU, and SIDD,
corroborate the effectiveness of ObsAtk and HAT.
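ObsAtk can be pictured as projected gradient ascent on the denoiser's reconstruction error, with an additional zero-mean projection applied to the perturbation. The following PyTorch sketch is a minimal illustration only: the hyperparameters and the use of an l_inf budget are assumptions (the paper enforces zero mean and bounds the perturbation's norm; details differ), not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def obs_atk(denoiser, y_noisy, x_clean, eps=8 / 255, alpha=2 / 255, steps=10):
    """Sketch of a zero-mean PGD-style attack on a denoiser: find a small,
    zero-mean perturbation of the noisy observation that maximizes the
    reconstruction error against the ground truth."""
    delta = torch.zeros_like(y_noisy, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(denoiser(y_noisy + delta), x_clean)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()               # ascend the loss
            delta -= delta.mean(dim=(-2, -1), keepdim=True)  # zero-mean projection
            delta.clamp_(-eps, eps)                          # perturbation budget
        delta.grad.zero_()
    return (y_noisy + delta).detach()
```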
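HAT then mixes the reconstruction loss on ordinary noisy data with the loss on adversarially attacked data. Again a hedged sketch: the Gaussian noise model, the mixing weight `lam`, and the reuse of the `obs_atk` sketch above are assumptions rather than the paper's exact recipe.

```python
def hat_step(denoiser, optimizer, x_clean, noise_std=25 / 255, lam=0.5):
    """One hybrid adversarial training step (sketch): jointly fit ordinary
    noisy data and adversarial noisy data so the denoiser stays accurate
    and locally smooth around non-adversarial inputs."""
    y_noisy = x_clean + noise_std * torch.randn_like(x_clean)
    y_adv = obs_atk(denoiser, y_noisy, x_clean)  # attack from the sketch above
    loss = (1 - lam) * F.mse_loss(denoiser(y_noisy), x_clean) \
           + lam * F.mse_loss(denoiser(y_adv), x_clean)
    optimizer.zero_grad()  # also clears grads accumulated during the attack
    loss.backward()
    optimizer.step()
    return loss.item()
```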
Related papers
- Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise [31.586389548657205]
Unlearnable examples are proposed to significantly degrade the generalization performance of models by adding a kind of imperceptible noise to the data.
We introduce stable error-minimizing noise (SEM), which trains the defensive noise against random perturbation instead of the time-consuming adversarial perturbation.
SEM achieves a new state-of-the-art performance on CIFAR-10, CIFAR-100, and ImageNet Subset.
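As a rough illustration of that idea, crafting error-minimizing noise against random rather than adversarial perturbations might look like the sketch below; the cross-entropy objective and all hyperparameters are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def sem_defensive_noise(model, x, y, eps=8 / 255, alpha=1 / 255, sigma=4 / 255, steps=20):
    """Sketch of stable error-minimizing noise: learn a small delta that keeps
    the training loss low even under random perturbations of the input,
    avoiding the cost of an inner adversarial optimization."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        noise = sigma * torch.randn_like(x)      # random, not adversarial
        loss = F.cross_entropy(model(x + delta + noise), y)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # minimize the loss
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return delta.detach()
```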
arXiv Detail & Related papers (2023-11-22T01:43:57Z)
- Evaluating Similitude and Robustness of Deep Image Denoising Models via Adversarial Attack [60.40356882897116]
Deep neural networks (DNNs) have shown superior performance compared to traditional image denoising algorithms.
In this paper, we propose an adversarial attack method named denoising-PGD, which can successfully attack all current deep denoising models.
arXiv Detail & Related papers (2023-06-28T09:30:59Z)
- Exploring Efficient Asymmetric Blind-Spots for Self-Supervised Denoising in Real-World Scenarios [44.31657750561106]
Noise in real-world scenarios is often spatially correlated, which causes many self-supervised algorithms to perform poorly.
We propose Asymmetric Tunable Blind-Spot Network (AT-BSN), where the blind-spot size can be freely adjusted.
We show that our method achieves state-of-the-art performance and is superior to other self-supervised algorithms in terms of computational overhead and visual quality.
arXiv Detail & Related papers (2023-03-29T15:19:01Z)
- I2V: Towards Texture-Aware Self-Supervised Blind Denoising using Self-Residual Learning for Real-World Images [8.763680382529412]
Pixel-shuffle downsampling (PD) has been proposed to eliminate the spatial correlation of noise.
We propose self-residual learning without the PD process to maintain texture information.
The results of extensive experiments show that the proposed method outperforms state-of-the-art self-supervised blind denoising approaches.
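For context, the PD operation this entry refers to (and which I2V deliberately avoids) can be sketched in PyTorch as follows; `pd_refine` and its denoise-each-phase strategy are illustrative assumptions, not the I2V method.

```python
import torch
import torch.nn.functional as F

def pd_refine(denoiser, y_noisy, factor=2):
    """Pixel-shuffle downsampling (PD) sketch: split the image into
    factor**2 interleaved sub-images so that spatially correlated noise
    becomes closer to pixel-wise independent, denoise each sub-image,
    then reassemble the full-resolution result."""
    n, c, h, w = y_noisy.shape                 # H and W must be divisible by factor
    subs = F.pixel_unshuffle(y_noisy, factor)  # (N, C*f*f, H/f, W/f)
    subs = subs.view(n, c, factor**2, h // factor, w // factor)
    subs = subs.permute(0, 2, 1, 3, 4).reshape(n * factor**2, c, h // factor, w // factor)
    den = denoiser(subs)                       # denoise each phase independently
    den = den.view(n, factor**2, c, h // factor, w // factor).permute(0, 2, 1, 3, 4)
    den = den.reshape(n, c * factor**2, h // factor, w // factor)
    return F.pixel_shuffle(den, factor)        # back to (N, C, H, W)
```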
arXiv Detail & Related papers (2023-02-21T08:51:17Z)
- Confidence-based Reliable Learning under Dual Noises [46.45663546457154]
Deep neural networks (DNNs) have achieved remarkable success in a variety of computer vision tasks.
Yet, the data collected from the open world are unavoidably polluted by noise, which may significantly undermine the efficacy of the learned models.
Various attempts have been made to reliably train DNNs under data noise, but they separately account for either the noise existing in the labels or that existing in the images.
This work provides a first unified framework for reliable learning under joint (image, label) noise.
arXiv Detail & Related papers (2023-02-10T07:50:34Z)
- Beyond Pretrained Features: Noisy Image Modeling Provides Adversarial Defense [52.66971714830943]
Masked image modeling (MIM) has become a prevailing framework for self-supervised visual representation learning.
In this paper, we investigate how this powerful self-supervised learning paradigm can provide adversarial robustness to downstream classifiers.
We propose an adversarial defense method, referred to as De3, by exploiting the pretrained decoder for denoising.
arXiv Detail & Related papers (2023-02-02T12:37:24Z)
- Robust Deep Ensemble Method for Real-world Image Denoising [62.099271330458066]
We propose a simple yet effective Bayesian deep ensemble (BDE) method for real-world image denoising.
Our BDE achieves +0.28dB PSNR gain over the state-of-the-art denoising method.
Our BDE can be extended to other image restoration tasks, and achieves +0.30dB, +0.18dB and +0.12dB PSNR gains on benchmark datasets.
arXiv Detail & Related papers (2022-06-08T06:19:30Z)
- Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) in various algorithms and frameworks.
We propose a novel purification approach, referred to as guided diffusion model for purification (GDMP)
In comprehensive experiments across various datasets, the proposed GDMP is shown to reduce the perturbations introduced by adversarial attacks to a shallow range.
arXiv Detail & Related papers (2022-05-30T10:11:15Z)
- Adaptive noise imitation for image denoising [58.21456707617451]
We develop a new adaptive noise imitation (ADANI) algorithm that can synthesize noisy data from naturally noisy images.
To produce realistic noise, a noise generator takes unpaired noisy/clean images as input, where the noisy image is a guide for noise generation.
Coupling the noisy data output from ADANI with the corresponding ground-truth, a denoising CNN is then trained in a fully-supervised manner.
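A minimal sketch of the supervised stage described here, assuming a pretrained noise generator with the hypothetical signature `noise_gen(clean, noisy_guide)`; the generator's own adversarial training is omitted.

```python
import torch
import torch.nn.functional as F

def adani_denoiser_step(noise_gen, denoiser, optimizer, x_clean, y_real_noisy):
    """Sketch of ADANI's fully supervised stage: synthesize realistic noise
    for a clean image (guided by an unpaired real noisy image), then train
    the denoising CNN on the resulting (noisy, clean) pair."""
    with torch.no_grad():
        y_syn = x_clean + noise_gen(x_clean, y_real_noisy)  # imitated noise
    loss = F.mse_loss(denoiser(y_syn), x_clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```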
arXiv Detail & Related papers (2020-11-30T02:49:36Z)