Robust Real-World Image Super-Resolution against Adversarial Attacks
- URL: http://arxiv.org/abs/2208.00428v1
- Date: Sun, 31 Jul 2022 13:26:33 GMT
- Title: Robust Real-World Image Super-Resolution against Adversarial Attacks
- Authors: Jiutao Yue and Haofeng Li and Pengxu Wei and Guanbin Li and Liang Lin
- Abstract summary: Adversarial image samples with quasi-imperceptible noise can threaten deep learning SR models.
We propose a robust deep learning framework for real-world SR that randomly erases potential adversarial noises.
Our proposed method is less sensitive to adversarial attacks and produces more stable SR results than existing models and defenses.
- Score: 115.04009271192211
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep neural networks (DNNs) have achieved significant success in
real-world image super-resolution (SR). However, adversarial image samples with
quasi-imperceptible noise can threaten deep learning SR models. In this
paper, we propose a robust deep learning framework for real-world SR that
randomly erases potential adversarial noise in the frequency domain of input
images or features. The rationale is that, on the SR task, clean images or
features exhibit a different frequency-domain pattern from attacked ones.
Observing that existing adversarial attacks usually add high-frequency
noise to input images, we introduce a novel random frequency mask module that
stochastically blocks out high-frequency components possibly containing the harmful
perturbations. Since frequency masking may not only
destroy the adversarial perturbations but also degrade the sharp details in a
clean image, we further develop an adversarial sample classifier, based on the
frequency domain of images, to decide whether to apply the proposed mask module.
Building on these ideas, we devise a novel real-world image SR framework that
combines the proposed frequency mask modules and the proposed adversarial
classifier with an existing super-resolution backbone network. Experiments show
that our proposed method is less sensitive to adversarial attacks and
produces more stable SR results than existing models and defenses.
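The central mechanism above is the random frequency mask. Below is a minimal sketch of that idea in PyTorch: the image is moved into the frequency domain with a 2-D FFT, components outside a randomly drawn low-pass radius are zeroed, and the result is transformed back. The cutoff range, the radial mask shape, and the function name `random_frequency_mask` are illustrative assumptions, not the authors' implementation (which masks images or intermediate features and is gated by the frequency-domain adversarial classifier).

```python
# Minimal sketch of random high-frequency masking (not the paper's code).
# Assumption: a per-sample random radial cutoff in [0.25, 0.75] of the
# normalized frequency radius; the paper does not specify these values.
import torch


def random_frequency_mask(x: torch.Tensor, low: float = 0.25, high: float = 0.75) -> torch.Tensor:
    """Randomly erase high-frequency content of a batch of images (B, C, H, W)."""
    b, _, h, w = x.shape
    # 2-D FFT, shifted so the zero-frequency (DC) bin sits at the spectrum center.
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))

    # Normalized radial distance of every frequency bin from the spectrum center.
    fy = torch.linspace(-1.0, 1.0, h, device=x.device).view(h, 1)
    fx = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, w)
    radius = torch.sqrt(fy ** 2 + fx ** 2)  # (H, W)

    # One random cutoff per sample, so the masking is stochastic across the batch.
    cutoff = torch.empty(b, 1, 1, 1, device=x.device).uniform_(low, high)
    keep = (radius <= cutoff).to(spec.dtype)  # 1 inside the low-pass disk, 0 outside

    # Zero out high-frequency bins and return to the spatial domain.
    masked = torch.fft.ifft2(torch.fft.ifftshift(spec * keep, dim=(-2, -1)))
    return masked.real


if __name__ == "__main__":
    img = torch.rand(2, 3, 64, 64)
    filtered = random_frequency_mask(img)
    print(filtered.shape)  # torch.Size([2, 3, 64, 64])
```

In the full framework described in the abstract, such a masking step would only be applied when the adversarial sample classifier flags the input as attacked, so that sharp details in clean images are preserved.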
Related papers
- Low-Frequency Black-Box Backdoor Attack via Evolutionary Algorithm [12.711880028935315]
Convolutional neural networks (CNNs) have achieved success in computer vision tasks, but are vulnerable to backdoor attacks.
We propose a robust low-frequency black-box backdoor attack (LFBA), which minimally perturbs low-frequency components of the frequency spectrum.
Experiments on real-world datasets verify the effectiveness and robustness of LFBA against image processing operations and the state-of-the-art backdoor defenses.
arXiv Detail & Related papers (2024-02-23T23:36:36Z) - Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! [52.0855711767075]
EvoSeed is an evolutionary strategy-based algorithmic framework for generating photo-realistic natural adversarial samples.
We employ CMA-ES to optimize the search for an initial seed vector, which, when processed by the Conditional Diffusion Model, results in a natural adversarial sample that is misclassified by the model.
Experiments show that generated adversarial images are of high image quality, raising concerns about generating harmful content bypassing safety classifiers.
arXiv Detail & Related papers (2024-02-07T09:39:29Z) - Masked Frequency Modeling for Self-Supervised Visual Pre-Training [102.89756957704138]
We present Masked Frequency Modeling (MFM), a unified frequency-domain-based approach for self-supervised pre-training of visual models.
MFM first masks out a portion of frequency components of the input image and then predicts the missing frequencies on the frequency spectrum.
For the first time, MFM demonstrates that, for both ViT and CNN, a simple non-Siamese framework can learn meaningful representations without any of the following: (i) extra data, (ii) an extra model, (iii) a mask token.
arXiv Detail & Related papers (2022-06-15T17:58:30Z) - Exploring Frequency Adversarial Attacks for Face Forgery Detection [59.10415109589605]
We propose a frequency adversarial attack method against face forgery detectors.
Inspired by the idea of meta-learning, we also propose a hybrid adversarial attack that performs attacks in both the spatial and frequency domains.
arXiv Detail & Related papers (2022-03-29T15:34:13Z) - Detecting Adversaries, yet Faltering to Noise? Leveraging Conditional
Variational AutoEncoders for Adversary Detection in the Presence of Noisy
Images [0.7734726150561086]
Conditional Variational AutoEncoders (CVAE) are surprisingly good at detecting imperceptible image perturbations.
We show how CVAEs can be effectively used to detect adversarial attacks on image classification networks.
arXiv Detail & Related papers (2021-11-28T20:36:27Z) - Unsupervised Single Image Super-resolution Under Complex Noise [60.566471567837574]
This paper proposes a model-based unsupervised SISR method to deal with the general SISR task with unknown degradations.
The proposed method evidently surpasses the current state-of-the-art (SotA) method (by about 1 dB PSNR), not only with a lighter model (0.34M vs. 2.40M parameters) but also at faster speed.
arXiv Detail & Related papers (2021-07-02T11:55:40Z) - Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) that ensures the frequency domain consistency when applying Super-Resolution (SR) methods to the real scene.
We estimate degradation kernels from unsupervised images and generate the corresponding Low-Resolution (LR) images.
Based on the domain-consistent LR-HR pairs, we train easily implemented Convolutional Neural Network (CNN) SR models.
arXiv Detail & Related papers (2020-12-18T08:25:39Z) - Adversarial Robustness Across Representation Spaces [35.58913661509278]
Adversarial robustness corresponds to the susceptibility of deep neural networks to imperceptible perturbations made at test time.
In this work, we extend the setting to consider the problem of training deep neural networks that can be made simultaneously robust to perturbations applied in multiple natural representation spaces.
arXiv Detail & Related papers (2020-12-01T19:55:58Z) - TensorShield: Tensor-based Defense Against Adversarial Attacks on Images [7.080154188969453]
Recent studies have demonstrated that machine learning approaches like deep neural networks (DNNs) are easily fooled by adversarial attacks.
In this paper, we utilize tensor decomposition techniques as a preprocessing step to find a low-rank approximation of images, which can significantly discard high-frequency perturbations (see the sketch after this list).
arXiv Detail & Related papers (2020-02-18T00:39:49Z)
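As a companion to the TensorShield entry above, here is a simplified sketch of low-rank image approximation as a preprocessing defense. A per-channel truncated SVD stands in for the tensor decomposition used in that paper, and the rank value is an arbitrary assumption; the point is only to show how projecting onto the leading singular components discards much of the high-frequency (and hence adversarial) energy.

```python
# Simplified low-rank preprocessing sketch (not the TensorShield implementation).
# Assumption: per-channel truncated SVD with rank=30 as a stand-in for the
# tensor decomposition described in the paper.
import numpy as np


def low_rank_approx(image: np.ndarray, rank: int = 30) -> np.ndarray:
    """Project each channel of an (H, W, C) image onto its top-`rank` singular vectors."""
    out = np.empty_like(image, dtype=np.float64)
    for c in range(image.shape[2]):
        # Truncated SVD keeps the dominant, mostly low-frequency structure.
        u, s, vt = np.linalg.svd(image[..., c].astype(np.float64), full_matrices=False)
        out[..., c] = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
    return np.clip(out, 0.0, 255.0)


if __name__ == "__main__":
    noisy = np.random.rand(128, 128, 3) * 255.0
    cleaned = low_rank_approx(noisy, rank=20)
    print(cleaned.shape)  # (128, 128, 3)
```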