Improving Robustness with Image Filtering
- URL: http://arxiv.org/abs/2112.11235v1
- Date: Tue, 21 Dec 2021 14:04:25 GMT
- Title: Improving Robustness with Image Filtering
- Authors: Matteo Terzi, Mattia Carletti, Gian Antonio Susto
- Abstract summary: This paper introduces a new image filtering scheme called Image-Graph Extractor (IGE) that extracts the fundamental nodes of an image and their connections through a graph structure.
By leveraging the IGE representation, we build a new defense method, Filtering As a Defense, that does not allow the attacker to entangle pixels to create malicious patterns.
We show that data augmentation with filtered images effectively improves the model's robustness to data corruption.
- Score: 3.169089186688223
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial robustness is one of the most challenging problems in Deep
Learning and Computer Vision research. All the state-of-the-art techniques
require a time-consuming procedure that creates cleverly perturbed images. Due
to its cost, many solutions have been proposed to avoid Adversarial Training.
However, all these attempts proved ineffective as the attacker manages to
exploit spurious correlations among pixels to trigger brittle features
implicitly learned by the model. This paper first introduces a new image
filtering scheme called Image-Graph Extractor (IGE) that extracts the
fundamental nodes of an image and their connections through a graph structure.
By leveraging the IGE representation, we build a new defense method, Filtering
As a Defense, that does not allow the attacker to entangle pixels to create
malicious patterns. Moreover, we show that data augmentation with filtered
images effectively improves the model's robustness to data corruption. We
validate our techniques on CIFAR-10, CIFAR-100, and ImageNet.
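The abstract does not spell out how IGE builds its graph or how the filtering is implemented, so the following is only a minimal sketch of the general filtering-as-preprocessing idea: image regions (a rough stand-in for IGE's "fundamental nodes") are extracted with SLIC superpixels from scikit-image, and each region is flattened to its mean colour, discarding the fine pixel-level structure an attacker would need to entangle. The function name `filter_image` and the hyperparameters `n_segments` and `compactness` are illustrative assumptions, not details from the paper.

```python
import numpy as np
from skimage.segmentation import slic  # SLIC is an assumed stand-in, not the paper's IGE


def filter_image(image, n_segments=150, compactness=10.0):
    """Piecewise-constant filtering sketch: segment the image into regions
    and replace every pixel of a region with the region's mean colour,
    removing fine pixel-level patterns an attacker could exploit."""
    image = np.asarray(image, dtype=np.float32)  # H x W x 3, values in [0, 1]
    labels = slic(image, n_segments=n_segments, compactness=compactness, start_label=0)
    filtered = np.empty_like(image)
    for lab in np.unique(labels):
        mask = labels == lab
        filtered[mask] = image[mask].mean(axis=0)  # mean colour of the region
    return filtered
```

Under this reading, the defense variant feeds `filter_image(x)` to the classifier instead of the raw input, while the augmentation variant mixes filtered copies of the training images into the training set.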
Related papers
- Zero-Shot Detection of AI-Generated Images [54.01282123570917]
We propose a zero-shot entropy-based detector (ZED) to detect AI-generated images.
Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images.
ZED achieves an average improvement of more than 3% over the SoTA in terms of accuracy.
arXiv Detail & Related papers (2024-09-24T08:46:13Z)
- Class-Conditioned Transformation for Enhanced Robust Image Classification [19.738635819545554]
We propose a novel test-time, threat-model-agnostic algorithm that enhances Adversarially Trained (AT) models.
Our method operates through COnditional image transformation and DIstance-based Prediction (CODIP).
The proposed method achieves state-of-the-art results demonstrated through extensive experiments on various models, AT methods, datasets, and attack types.
arXiv Detail & Related papers (2023-03-27T17:28:20Z)
- Masked Autoencoders are Robust Data Augmentors [90.34825840657774]
Regularization techniques like image augmentation are necessary for deep neural networks to generalize well.
We propose a novel perspective of augmentation to regularize the training process.
We show that utilizing such model-based nonlinear transformation as data augmentation can improve high-level recognition tasks.
arXiv Detail & Related papers (2022-06-10T02:41:48Z)
- Get Fooled for the Right Reason: Improving Adversarial Robustness through a Teacher-guided Curriculum Learning Approach [17.654350836042813]
Current SOTA adversarially robust models are mostly based on adversarial training (AT) and differ only in the regularizers applied at the inner or outer minimization steps.
We propose a non-iterative method that enforces the following ideas during training.
Our method achieves significant performance gains with a little extra effort (10-20%) over existing AT models.
arXiv Detail & Related papers (2021-10-30T17:47:14Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of Deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework that is based on minimizing a loss function that includes a "projected version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and parameterization of the latent image by a CNN.
arXiv Detail & Related papers (2021-02-04T08:52:46Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks [154.31827097264264]
Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms.
We propose Dual Manifold Adversarial Training (DMAT) where adversarial perturbations in both latent and image spaces are used in robustifying the model.
Our DMAT improves performance on normal images, and achieves comparable robustness to the standard adversarial training against Lp attacks.
arXiv Detail & Related papers (2020-09-05T06:00:28Z)
- Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels [37.726433732939114]
We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms.
We leverage input perturbations commonly used in computer vision tasks to regularize the value function.
Our approach can be combined with any model-free reinforcement learning algorithm, requiring only minor modifications.
arXiv Detail & Related papers (2020-04-28T16:48:16Z)
- Encoding Power Traces as Images for Efficient Side-Channel Analysis [0.0]
Side-Channel Attacks (SCAs) are a powerful method to attack implementations of cryptographic algorithms.
Deep Learning (DL) methods have been introduced to simplify SCAs and simultaneously lower the number of side-channel traces required for a successful attack.
We present a novel technique to interpret 1D traces as 2D images.
arXiv Detail & Related papers (2020-04-23T08:00:37Z)
- Applying Tensor Decomposition to image for Robustness against Adversarial Attack [3.347059384111439]
Deep learning models can easily be fooled by adding small perturbations to their inputs.
In this paper, we suggest using tensor decomposition to defend the model against adversarial examples; a minimal low-rank filtering sketch in this spirit is given after this list.
arXiv Detail & Related papers (2020-02-28T18:30:22Z)
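The last entry above does not state which tensor decomposition is used or how it is combined with the classifier, so the snippet below is only a hedged sketch of the underlying idea, using a per-channel truncated SVD as a simple low-rank stand-in; the function name `low_rank_filter` and the `rank` value are illustrative assumptions.

```python
import numpy as np


def low_rank_filter(image, rank=20):
    """Reconstruct each colour channel of an H x W x 3 image from its top-`rank`
    singular components. Low-energy components, where small adversarial
    perturbations tend to concentrate, are discarded before classification."""
    image = np.asarray(image, dtype=np.float32)
    out = np.empty_like(image)
    for c in range(image.shape[-1]):
        u, s, vt = np.linalg.svd(image[..., c], full_matrices=False)
        k = min(rank, s.size)
        out[..., c] = (u[:, :k] * s[:k]) @ vt[:k, :]  # rank-k reconstruction
    return np.clip(out, 0.0, 1.0)


# At test time the classifier would receive low_rank_filter(x) instead of the raw input x.
```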
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.