Amicable Aid: Perturbing Images to Improve Classification Performance
- URL: http://arxiv.org/abs/2112.04720v4
- Date: Thu, 14 Dec 2023 12:32:59 GMT
- Title: Amicable Aid: Perturbing Images to Improve Classification Performance
- Authors: Juyeop Kim, Jun-Ho Choi, Soobeom Jang, Jong-Seok Lee
- Abstract summary: Adversarial perturbation of images to attack deep image classification models poses serious security concerns in practice.
We show that by taking the opposite search direction of perturbation, an image can be modified to yield higher classification confidence.
We investigate the universal amicable aid, i.e., a fixed perturbation that can be applied to multiple images to improve their classification results.
- Score: 20.9291591835171
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While adversarial perturbation of images to attack deep image classification models poses serious security concerns in practice, this paper suggests a novel paradigm in which the concept of image perturbation can benefit classification performance, which we call amicable aid. We show that by taking the opposite search direction of perturbation, an image can be modified to yield higher classification confidence, and even a misclassified image can be made correctly classified. This can also be achieved with a large amount of perturbation, by which the image is made unrecognizable to human eyes. The mechanism of the amicable aid is explained from the viewpoint of the underlying natural image manifold. Furthermore, we investigate the universal amicable aid, i.e., a fixed perturbation that can be applied to multiple images to improve their classification results. While it is challenging to find such perturbations, we show that making the decision boundary as perpendicular to the image manifold as possible, via training with modified data, is effective for obtaining a model for which universal amicable perturbations are more easily found.
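The abstract describes the procedure only at a high level, but the core operation, stepping so as to decrease the classification loss rather than increase it, is straightforward to illustrate. Below is a minimal PGD-style sketch assuming a PyTorch classifier with inputs in [0, 1]; the function name, budget, and step settings are illustrative rather than the authors' exact configuration.

```python
import torch
import torch.nn.functional as F

def amicable_aid(model, image, label, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Perturb `image` within an L-inf ball of radius `epsilon` so that the
    model's loss on `label` goes DOWN -- the opposite search direction of a
    PGD attack, which would push the loss up."""
    model.eval()
    aided = image.clone().detach()
    for _ in range(steps):
        aided.requires_grad_(True)
        loss = F.cross_entropy(model(aided), label)
        grad = torch.autograd.grad(loss, aided)[0]
        # Descend the loss surface (an attack would ascend it).
        aided = aided.detach() - alpha * grad.sign()
        # Project back into the epsilon ball and the valid pixel range.
        aided = image + torch.clamp(aided - image, -epsilon, epsilon)
        aided = torch.clamp(aided, 0.0, 1.0)
    return aided.detach()
```

Raising epsilon far beyond typical attack budgets gives the regime described in the abstract where the image becomes unrecognizable yet confidently classified; averaging the descent direction over many images is one way to approximate the universal variant.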
Related papers
- Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks, enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
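The summary names the strategy but not its form; the sketch below shows one plausible shape of such a training step. `enhancer`, `detector`, and the two attack-crafting functions are hypothetical stand-ins, since CARNet's actual losses and architecture are not given here.

```python
import torch
import torch.nn.functional as F

def synchronized_attack_step(enhancer, detector, raw, target, opt,
                             visual_attack, perception_attack):
    """One hypothetical training step: craft a visual-driven attack (against
    the enhancement objective) and a perception-driven attack (against the
    detector), then train the enhancer to undo both."""
    adv_visual = visual_attack(enhancer, raw, target)        # degrades image quality
    adv_percep = perception_attack(enhancer, detector, raw)  # degrades detection
    opt.zero_grad()
    loss = 0.0
    for inp in (raw, adv_visual, adv_percep):
        # Enhanced output should match the reference regardless of the attack.
        loss = loss + F.l1_loss(enhancer(inp), target)
    loss.backward()
    opt.step()
    return loss.item()
```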
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- Exploring the Robustness of Human Parsers Towards Common Corruptions [99.89886010550836]
We construct three corruption robustness benchmarks, termed LIP-C, ATR-C, and Pascal-Person-Part-C, to assist us in evaluating the risk tolerance of human parsing models.
Inspired by the data augmentation strategy, we propose a novel heterogeneous augmentation-enhanced mechanism to bolster robustness under commonly corrupted conditions.
arXiv Detail & Related papers (2023-09-02T13:32:14Z)
- All-in-one Multi-degradation Image Restoration Network via Hierarchical Degradation Representation [47.00239809958627]
We propose a novel All-in-one Multi-degradation Image Restoration Network (AMIRNet).
AMIRNet learns a degradation representation for unknown degraded images by progressively constructing a tree structure through clustering.
This tree-structured representation explicitly reflects the consistency and discrepancy of various distortions, providing a specific clue for image restoration.
arXiv Detail & Related papers (2023-08-06T04:51:41Z)
- Exploiting Frequency Spectrum of Adversarial Images for General Robustness [3.480626767752489]
Adversarial training with an emphasis on phase components significantly improves model accuracy on clean, adversarial, and commonly corrupted images.
We propose a frequency-based data augmentation method, Adversarial Amplitude Swap, that swaps the amplitude spectrum between clean and adversarial images.
These images act as substitutes for adversarial images and can be implemented in various adversarial training setups.
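The swap itself is a simple Fourier-domain operation: each image keeps its own phase spectrum but takes the other's amplitude spectrum. A minimal PyTorch sketch follows; the function name and tensor layout are illustrative.

```python
import torch

def amplitude_swap(clean, adv):
    """Swap the amplitude spectra of a clean and an adversarial image while
    keeping each image's own phase spectrum. Inputs are (C, H, W) float
    tensors; names and layout are illustrative."""
    f_clean = torch.fft.fft2(clean)
    f_adv = torch.fft.fft2(adv)
    amp_clean, pha_clean = f_clean.abs(), f_clean.angle()
    amp_adv, pha_adv = f_adv.abs(), f_adv.angle()
    # Clean phase + adversarial amplitude, and vice versa.
    clean_phase_img = torch.fft.ifft2(torch.polar(amp_adv, pha_clean)).real
    adv_phase_img = torch.fft.ifft2(torch.polar(amp_clean, pha_adv)).real
    return clean_phase_img, adv_phase_img
```

The swapped images then stand in for adversarial examples during training, biasing the model toward phase information.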
arXiv Detail & Related papers (2023-05-15T08:36:32Z)
- Image Deblurring by Exploring In-depth Properties of Transformer [86.7039249037193]
We leverage deep features extracted from a pretrained vision transformer (ViT) to encourage recovered images to be sharp without sacrificing the performance measured by the quantitative metrics.
By comparing the transformer features between the recovered image and the target one, the pretrained transformer provides high-resolution blur-sensitive semantic information.
One approach regards the features as vectors and computes the discrepancy between representations extracted from the recovered and target images in Euclidean space.
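That Euclidean variant amounts to a perceptual loss on pretrained, frozen ViT features. The sketch below assumes a `timm` ViT as the extractor; the specific model and feature layer are assumptions, not the paper's exact setup.

```python
import timm
import torch

# Any pretrained ViT works as a stand-in feature extractor here; it expects
# (B, 3, 224, 224) inputs normalized per the model's config.
vit = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
vit.eval()
for p in vit.parameters():
    p.requires_grad_(False)

def vit_feature_loss(recovered, target):
    """Euclidean discrepancy between ViT features of the recovered image and
    the sharp target, used as an auxiliary deblurring loss."""
    feat_rec = vit.forward_features(recovered)  # (B, tokens, dim)
    feat_tgt = vit.forward_features(target)
    return torch.norm(feat_rec - feat_tgt, p=2, dim=-1).mean()
```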
arXiv Detail & Related papers (2023-03-24T14:14:25Z)
- ExCon: Explanation-driven Supervised Contrastive Learning for Image Classification [12.109442912963969]
We propose to leverage saliency-based explanation methods to create content-preserving masked augmentations for contrastive learning.
Our novel explanation-driven supervised contrastive learning (ExCon) methodology critically serves the dual goals of encouraging nearby image embeddings to have similar content and explanation.
We demonstrate that ExCon outperforms vanilla supervised contrastive learning in terms of classification, explanation quality, and adversarial robustness, as well as calibration of the model's probabilistic predictions under distributional shift.
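One way to realize a content-preserving masked augmentation is to keep only the most salient pixels of an image. The sketch below uses plain gradient saliency and a top-k threshold; both choices are illustrative stand-ins for the saliency-based explanation methods the paper leverages.

```python
import torch
import torch.nn.functional as F

def saliency_masked_view(model, image, label, keep_ratio=0.5):
    """Hypothetical content-preserving augmentation: gradient-based saliency
    selects the pixels most responsible for the prediction; the rest are
    zeroed out. `image` is (C, H, W); `label` is a scalar tensor."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
    grad = torch.autograd.grad(loss, image)[0]
    saliency = grad.abs().max(dim=0).values            # (H, W) saliency map
    k = int(keep_ratio * saliency.numel())
    thresh = saliency.flatten().topk(k).values.min()   # keep the top-k pixels
    mask = (saliency >= thresh).float()
    return image.detach() * mask                        # broadcast over channels
```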
arXiv Detail & Related papers (2021-11-28T23:15:26Z)
- Contrastive Counterfactual Visual Explanations With Overdetermination [7.8752926274677435]
CLEAR Image is based on the view that a satisfactory explanation should be contrastive, counterfactual and measurable.
CLEAR Image was successfully applied to a medical imaging case study where it outperformed methods such as Grad-CAM and LIME by an average of 27%.
arXiv Detail & Related papers (2021-06-28T10:24:17Z)
- Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images [79.34600869202373]
We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24%.
arXiv Detail & Related papers (2021-05-25T12:22:11Z)
- Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks [104.8737334237993]
We present comprehensive investigations into the vulnerability of deep image-to-image models to adversarial attacks.
For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints.
We show that unlike in image classification tasks, the performance degradation on image-to-image tasks can largely differ depending on various factors.
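Attacks on image-to-image models follow the familiar PGD recipe, except the objective lives in image space. A minimal sketch follows; maximizing the MSE between the outputs on perturbed and clean inputs is one of several possible objectives and is an assumption here.

```python
import torch
import torch.nn.functional as F

def pgd_attack_i2i(model, image, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """PGD against an image-to-image model: maximize the MSE between the
    output on the perturbed input and the output on the clean input."""
    with torch.no_grad():
        clean_out = model(image)
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.mse_loss(model(adv), clean_out)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()       # ascend: degrade the output
        adv = image + torch.clamp(adv - image, -epsilon, epsilon)
        adv = torch.clamp(adv, 0.0, 1.0)
    return adv.detach()
```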
arXiv Detail & Related papers (2021-04-30T14:20:33Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Unsupervised Deep Metric Learning with Transformed Attention Consistency and Contrastive Clustering Loss [28.17607283348278]
Existing approaches for unsupervised metric learning focus on exploring self-supervision information within the input image itself.
We observe that, when analyzing images, human eyes often compare images against each other instead of examining images individually.
We develop a new approach to unsupervised deep metric learning where the network is learned based on self-supervision information across images.
arXiv Detail & Related papers (2020-08-10T19:33:47Z)