AdvFilter: Predictive Perturbation-aware Filtering against Adversarial
Attack via Multi-domain Learning
- URL: http://arxiv.org/abs/2107.06501v1
- Date: Wed, 14 Jul 2021 06:08:48 GMT
- Title: AdvFilter: Predictive Perturbation-aware Filtering against Adversarial
Attack via Multi-domain Learning
- Authors: Yihao Huang and Qing Guo and Felix Juefei-Xu and Lei Ma and Weikai
Miao and Yang Liu and Geguang Pu
- Abstract summary: We propose predictive perturbation-aware pixel-wise filtering, where dual-perturbation filtering and an uncertainty-aware fusion module are employed.
We show advantages in enhancing CNNs' robustness, with high generalization across different models and noise levels.
- Score: 17.95784884411471
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-level representation-guided pixel denoising and adversarial training are
independent solutions to enhance the robustness of CNNs against adversarial
attacks by pre-processing input data and re-training models, respectively. Most
recently, adversarial training techniques have been widely studied and improved
while pixel denoising-based methods have become less attractive. However, it
is still questionable whether there exists a more advanced pixel
denoising-based method and whether the two solutions can benefit each other
when combined. To this end, we first comprehensively investigate two
kinds of pixel denoising methods for adversarial robustness enhancement (i.e.,
existing additive-based and unexplored filtering-based methods) under the loss
functions of image-level and semantic-level restorations, respectively, showing
that pixel-wise filtering can obtain much higher image quality (e.g., higher
PSNR) as well as higher robustness (e.g., higher accuracy on adversarial
examples) than the existing pixel-wise additive-based method. However, we also
observe that the robustness results of the filtering-based method rely on the
perturbation amplitude of adversarial examples used for training. To address
this problem, we propose predictive perturbation-aware pixel-wise filtering,
where dual-perturbation filtering and an uncertainty-aware fusion module are
designed and employed to automatically perceive the perturbation amplitude
during the training and testing process. The proposed method is termed
AdvFilter. Moreover, we combine adversarial pixel denoising methods with three
adversarial training-based methods, hinting that jointly considering data and
models can achieve more robust CNNs. Experiments are conducted on the
NeurIPS-2017 DEV, SVHN, and CIFAR10 datasets and demonstrate advantages in
enhancing CNNs' robustness, along with high generalization across different
models and noise levels.
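The sketch below illustrates the idea described in the abstract: two pixel-wise filtering branches, each specialized for a different perturbation amplitude, fused by a predicted per-pixel weight map. This is a minimal illustrative interpretation, not the authors' code; the module names, layer widths, and exact architecture are assumptions.

```python
# Illustrative sketch of dual-perturbation filtering with uncertainty-aware
# fusion (names and architecture are assumptions, not the released AdvFilter).
import torch
import torch.nn as nn


class FilterBranch(nn.Module):
    """Small CNN that predicts a denoised image from an adversarial input."""
    def __init__(self, channels: int = 3, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual formulation: predict a per-pixel correction and add it back.
        return x + self.net(x)


class AdvFilterSketch(nn.Module):
    """Two filtering branches fused by a predicted per-pixel weight map."""
    def __init__(self, channels: int = 3):
        super().__init__()
        # One branch geared toward low-amplitude, one toward high-amplitude noise.
        self.small_amp_branch = FilterBranch(channels)
        self.large_amp_branch = FilterBranch(channels)
        # Fusion head estimates a weight in [0, 1] per pixel from the input,
        # acting as the "perceived perturbation amplitude".
        self.fusion = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x_adv: torch.Tensor) -> torch.Tensor:
        d_small = self.small_amp_branch(x_adv)
        d_large = self.large_amp_branch(x_adv)
        w = self.fusion(x_adv)  # (N, 1, H, W), broadcast over channels
        return w * d_large + (1.0 - w) * d_small


if __name__ == "__main__":
    model = AdvFilterSketch()
    x = torch.rand(1, 3, 224, 224)   # stand-in for an adversarial example
    print(model(x).shape)            # torch.Size([1, 3, 224, 224])
```

In a setup like this, the fused output would be fed to the (fixed) classifier, and the branches would be supervised with the image-level and semantic-level restoration losses mentioned in the abstract; those training details are omitted here.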
Related papers
- PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant
Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances the robustness, with gains of 15.3% mIOU, compared with advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z)
- Masked Image Training for Generalizable Deep Image Denoising [53.03126421917465]
We present a novel approach to enhance the generalization performance of denoising networks.
Our method involves masking random pixels of the input image and reconstructing the missing information during training.
Our approach exhibits better generalization ability than other deep learning models and is directly applicable to real-world scenarios.
arXiv Detail & Related papers (2023-03-23T09:33:44Z)
- Linear Combinations of Patches are Unreasonably Effective for Single-Image Denoising [5.893124686141782]
Deep neural networks have revolutionized image denoising, achieving significant accuracy improvements.
To alleviate the requirement to learn image priors externally, single-image methods perform denoising solely based on the analysis of the input noisy image.
This work investigates the effectiveness of linear combinations of patches for denoising under this constraint.
arXiv Detail & Related papers (2022-12-01T10:52:03Z)
- DCCF: Deep Comprehensible Color Filter Learning Framework for High-Resolution Image Harmonization [14.062386668676533]
We propose a novel Deep Comprehensible Color Filter (DCCF) learning framework for high-resolution image harmonization.
DCCF learns four human comprehensible neural filters (i.e. hue, saturation, value and attentive rendering filters) in an end-to-end manner.
It outperforms the state-of-the-art post-processing method on the iHarmony4 dataset at full resolution, achieving 7.63% and 1.69% relative improvements on MSE and PSNR, respectively.
arXiv Detail & Related papers (2022-07-11T11:42:10Z)
- Diffusion Models for Adversarial Purification [69.1882221038846]
Adversarial purification refers to a class of defense methods that remove adversarial perturbations using a generative model.
We propose DiffPure that uses diffusion models for adversarial purification.
Our method achieves state-of-the-art results, outperforming current adversarial training and adversarial purification methods.
arXiv Detail & Related papers (2022-05-16T06:03:00Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Beyond Joint Demosaicking and Denoising: An Image Processing Pipeline for a Pixel-bin Image Sensor [0.883717274344425]
Pixel binning is considered one of the most prominent solutions to tackle the hardware limitations of smartphone cameras.
In this paper, we tackle the challenges of joint demosaicing and denoising (JDD) on such an image sensor by introducing a novel learning-based method.
The proposed method is guided by a multi-term objective function, including two novel perceptual losses to produce visually plausible images.
arXiv Detail & Related papers (2021-04-19T15:41:28Z)
- Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and Video Denoising [104.59305271099967]
We present a pixel aggregation network and learn the pixel sampling and averaging strategies for image denoising.
We develop a pixel aggregation network for video denoising to sample pixels across the spatial-temporal space.
Our method is able to solve the misalignment issues caused by large motion in dynamic scenes.
arXiv Detail & Related papers (2021-01-26T13:00:46Z)
- Depth image denoising using nuclear norm and learning graph model [107.51199787840066]
Group-based image restoration methods are more effective at exploiting the similarity among patches.
For each patch, we find and group the most similar patches within a searching window.
The proposed method is superior to other current state-of-the-art denoising methods in both subjective and objective criteria.
arXiv Detail & Related papers (2020-08-09T15:12:16Z)
- Noise2Inpaint: Learning Referenceless Denoising by Inpainting Unrolling [2.578242050187029]
We introduce Noise2Inpaint (N2I), a training approach that recasts the denoising problem into a regularized image inpainting framework.
N2I performs successful denoising on real-world datasets, while better preserving details compared to its purely data-driven counterpart Noise2Self.
arXiv Detail & Related papers (2020-06-16T18:46:42Z)