PRIME: A Few Primitives Can Boost Robustness to Common Corruptions
- URL: http://arxiv.org/abs/2112.13547v1
- Date: Mon, 27 Dec 2021 07:17:51 GMT
- Title: PRIME: A Few Primitives Can Boost Robustness to Common Corruptions
- Authors: Apostolos Modas, Rahul Rade, Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
- Abstract summary: Deep networks have a hard time generalizing to many common corruptions of their data.
We propose PRIME, a general data augmentation scheme that consists of simple families of max-entropy image transformations.
We show that PRIME outperforms the prior art for corruption robustness, while its simplicity and plug-and-play nature enables it to be combined with other methods to further boost their robustness.
- Score: 60.119023683371736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite their impressive performance on image classification tasks, deep
networks have a hard time generalizing to many common corruptions of their
data. To fix this vulnerability, prior works have mostly focused on increasing
the complexity of their training pipelines, combining multiple methods, in the
name of diversity. However, in this work, we take a step back and follow a
principled approach to achieve robustness to common corruptions. We propose
PRIME, a general data augmentation scheme that consists of simple families of
max-entropy image transformations. We show that PRIME outperforms the prior art
for corruption robustness, while its simplicity and plug-and-play nature
enables it to be combined with other methods to further boost their robustness.
Furthermore, we analyze PRIME to shed light on the importance of the mixing
strategy on synthesizing corrupted images, and to reveal the
robustness-accuracy trade-offs arising in the context of common corruptions.
Finally, we show that the computational efficiency of our method allows it to
be easily used in both on-line and off-line data augmentation schemes.
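The scheme described in the abstract — applying simple transformation primitives and mixing the resulting views — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the color and spectral primitives below, and the Dirichlet/Beta mixing weights, are simplified stand-ins for PRIME's max-entropy transformation families.

```python
import numpy as np

def random_color_primitive(img, rng, strength=0.5):
    # Channel-wise random affine perturbation (a simplified stand-in
    # for PRIME's color transformation family).
    scale = 1.0 + strength * rng.uniform(-1, 1, size=(1, 1, 3))
    shift = strength * rng.uniform(-0.5, 0.5, size=(1, 1, 3))
    return np.clip(img * scale + shift, 0.0, 1.0)

def random_spectral_primitive(img, rng, strength=0.5):
    # Random per-channel filtering in the Fourier domain (a simplified
    # stand-in for PRIME's spectral transformation family).
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        spec = np.fft.fft2(img[..., c])
        mask = 1.0 + strength * rng.uniform(-1, 1, size=spec.shape)
        out[..., c] = np.real(np.fft.ifft2(spec * mask))
    return np.clip(out, 0.0, 1.0)

def prime_style_augment(img, rng, n_views=3):
    # Build several independently transformed views, mix them with
    # random convex weights, then blend with the clean image.
    primitives = [random_color_primitive, random_spectral_primitive]
    views = []
    for _ in range(n_views):
        view = img.copy()
        for prim in primitives:
            if rng.random() < 0.7:  # apply each primitive with prob. 0.7
                view = prim(view, rng)
        views.append(view)
    weights = rng.dirichlet(np.ones(n_views))
    mixed = sum(w * v for w, v in zip(weights, views))
    alpha = rng.beta(1.0, 1.0)  # convex blend with the original image
    return np.clip(alpha * img + (1 - alpha) * mixed, 0.0, 1.0)
```

Because every step is a cheap per-image operation, a sketch like this can run inside the data loader (on-line) or be used to precompute an augmented dataset (off-line), matching the efficiency claim above.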
Related papers
- Robust Classification by Coupling Data Mollification with Label Smoothing [25.66357344079206]
We propose a novel approach that couples data mollification, in the form of image noising and blurring, with label smoothing to align predicted label confidences with image degradation.
We demonstrate improved robustness and uncertainty on the corrupted image benchmarks of the CIFAR and TinyImageNet datasets.
arXiv Detail & Related papers (2024-06-03T16:21:29Z)
- Quantified Task Misalignment to Inform PEFT: An Exploration of Domain Generalization and Catastrophic Forgetting in CLIP [7.550566004119157]
We analyze the relation between task difficulty in the CLIP model and the performance of several simple parameter-efficient fine-tuning methods.
A method that trains only a subset of attention weights, which we call A-CLIP, yields a balance between domain generalization and catastrophic forgetting.
arXiv Detail & Related papers (2024-02-14T23:01:03Z)
- Enhancing Infrared Small Target Detection Robustness with Bi-Level Adversarial Framework [61.34862133870934]
We propose a bi-level adversarial framework to promote the robustness of detection in the presence of distinct corruptions.
Our scheme improves IoU by 21.96% across a wide array of corruptions and by 4.97% on the general benchmark.
arXiv Detail & Related papers (2023-09-03T06:35:07Z)
- PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances robustness, with gains of 15.3% mIoU over advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z)
- Building Robust Ensembles via Margin Boosting [98.56381714748096]
In adversarial robustness, a single model does not usually have enough power to defend against all possible adversarial attacks.
We develop an algorithm for learning an ensemble with maximum margin.
We show that our algorithm not only outperforms existing ensembling techniques, but also large models trained in an end-to-end fashion.
arXiv Detail & Related papers (2022-06-07T14:55:58Z)
- VITA: A Multi-Source Vicinal Transfer Augmentation Method for Out-of-Distribution Generalization [107.96139593283547]
We propose a multi-source vicinal transfer augmentation (VITA) method for generating diverse on-manifold samples.
The proposed VITA consists of two complementary parts: tangent transfer and integration of multi-source vicinal samples.
Our proposed VITA significantly outperforms the current state-of-the-art augmentation methods, demonstrated in extensive experiments on corruption benchmarks.
arXiv Detail & Related papers (2022-04-25T09:47:51Z)
- Dynamic Feature Regularized Loss for Weakly Supervised Semantic Segmentation [37.43674181562307]
We propose a new regularized loss which utilizes both shallow and deep features that are dynamically updated.
Our approach achieves new state-of-the-art performance, outperforming other approaches by a significant margin with a more than 6% mIoU increase.
arXiv Detail & Related papers (2021-08-03T05:11:00Z)
- A simple way to make neural networks robust against diverse image corruptions [29.225922892332342]
We show that a simple but properly tuned training with additive Gaussian and Speckle noise generalizes surprisingly well to unseen corruptions.
An adversarial training of the recognition model against uncorrelated worst-case noise leads to an additional increase in performance.
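The noise-based training this entry describes can be sketched as below. This is a minimal illustration under assumed conventions (images as float arrays in [0, 1]), not the paper's exact tuning; `sigma` and `p` are hypothetical parameters.

```python
import numpy as np

def gaussian_noise(img, rng, sigma=0.1):
    # Additive Gaussian noise: x + n, with n ~ N(0, sigma^2).
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def speckle_noise(img, rng, sigma=0.1):
    # Multiplicative (speckle) noise: x * (1 + n), with n ~ N(0, sigma^2).
    return np.clip(img * (1.0 + rng.normal(0.0, sigma, img.shape)), 0.0, 1.0)

def noisy_batch(batch, rng, p=0.5, sigma=0.1):
    # Corrupt each image with probability p, choosing one of the two
    # noise types uniformly at random; leave the rest clean.
    out = []
    for img in batch:
        if rng.random() < p:
            fn = gaussian_noise if rng.random() < 0.5 else speckle_noise
            img = fn(img, rng, sigma)
        out.append(img)
    return np.stack(out)
```

Applied inside a training loop, a transform like this exposes the model to both additive and multiplicative perturbations, which is the core of the "simple but properly tuned" recipe the entry summarizes.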
arXiv Detail & Related papers (2020-01-16T20:10:25Z)
- Corruption-robust exploration in episodic reinforcement learning [76.19192549843727]
We study multi-stage episodic reinforcement learning under adversarial corruptions in both the rewards and the transition probabilities of the underlying system.
Our framework yields efficient algorithms which attain near-optimal regret in the absence of corruptions.
Notably, our work provides the first sublinear regret guarantee that accommodates any deviation from purely i.i.d. transitions in the bandit-feedback model for episodic reinforcement learning.
arXiv Detail & Related papers (2019-11-20T03:49:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.