Enhancing Adversarial Robustness via Test-time Transformation Ensembling
- URL: http://arxiv.org/abs/2107.14110v1
- Date: Thu, 29 Jul 2021 15:32:35 GMT
- Title: Enhancing Adversarial Robustness via Test-time Transformation Ensembling
- Authors: Juan C. Pérez, Motasem Alfarra, Guillaume Jeanneret, Laura Rueda,
  Ali Thabet, Bernard Ghanem, Pablo Arbeláez
- Abstract summary: We show how equipping models with Test-time Transformation Ensembling (TTE) can work as a reliable defense against adversarial attacks.
We show that TTE consistently improves model robustness against a variety of powerful attacks without any need for re-training.
- Score: 51.51139269928358
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models are prone to being fooled by imperceptible perturbations
known as adversarial attacks. In this work, we study how equipping models with
Test-time Transformation Ensembling (TTE) can work as a reliable defense
against such attacks. While transforming the input data, both at train and test
times, is known to enhance model performance, its effects on adversarial
robustness have not been studied. Here, we present a comprehensive empirical
study of the impact of TTE, in the form of widely-used image transforms, on
adversarial robustness. We show that TTE consistently improves model robustness
against a variety of powerful attacks without any need for re-training, and
that this improvement comes at virtually no trade-off with accuracy on clean
samples. Finally, we show that the benefits of TTE transfer even to the
certified robustness domain, in which TTE provides sizable and consistent
improvements.
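For concreteness, here is a minimal sketch of TTE-style inference wrapped around a pretrained PyTorch classifier. The transform set (identity and horizontal flip) is illustrative; the paper's exact combination of transforms may differ.

```python
import torch

def tte_predict(model, x):
    """Average the model's softmax outputs over a fixed set of
    test-time transforms of the input batch."""
    transforms = [
        lambda t: t,                    # identity
        lambda t: torch.flip(t, [-1]),  # horizontal flip
    ]
    model.eval()
    with torch.no_grad():
        probs = [model(tf(x)).softmax(dim=-1) for tf in transforms]
    return torch.stack(probs).mean(dim=0)  # ensembled prediction
```

Because no weights change, this drops onto any pretrained model without re-training; when evaluating adaptive attacks, the `torch.no_grad()` guard would be removed so gradients flow through the full ensemble.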
Related papers
- The Effectiveness of Random Forgetting for Robust Generalization [21.163070161951868]
We introduce a novel learning paradigm called "Forget to Mitigate Overfitting" (FOMO).
FOMO alternates between the forgetting phase, which randomly forgets a subset of weights, and the relearning phase, which emphasizes learning generalizable features.
Our experiments show that FOMO alleviates robust overfitting by significantly reducing the gap between the best and last robust test accuracy.
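A hedged sketch of the forgetting phase, assuming PyTorch; the fraction of weights forgotten and the re-initialization scale are illustrative choices, not the paper's exact settings.

```python
import torch

def forget_phase(model, forget_frac=0.1):
    """Forgetting phase (sketch): randomly re-initialize a fraction of
    each weight tensor, after which a relearning phase of standard
    (adversarial) training resumes."""
    with torch.no_grad():
        for p in model.parameters():
            mask = torch.rand_like(p) < forget_frac
            p[mask] = 0.01 * torch.randn_like(p)[mask]
```

Training would alternate the two phases: every few epochs of adversarial relearning, call `forget_phase(model)` before continuing.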
arXiv Detail & Related papers (2024-02-18T23:14:40Z)
- TEA: Test-time Energy Adaptation [67.4574269851666]
Test-time adaptation (TTA) aims to improve model generalizability when test data diverges from the training distribution.
We propose a novel energy-based perspective, enhancing the model's perception of target data distributions.
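One common way to instantiate an energy-based test-time objective is to lower the free energy of the logits on each test batch, typically adapting only normalization-layer parameters. This is a hedged sketch of that idea, not necessarily TEA's exact procedure.

```python
import torch

def energy_adapt_step(model, x, optimizer):
    """One test-time adaptation step: treat the classifier as an
    energy model via its logits and lower the free energy
    E(x) = -logsumexp(f(x)) on the current test batch."""
    energy = -torch.logsumexp(model(x), dim=-1).mean()
    optimizer.zero_grad()
    energy.backward()
    optimizer.step()
    return energy.item()
```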
arXiv Detail & Related papers (2023-11-24T10:49:49Z)
- Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, which rely on direct iterative updates for the target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy-guided defense framework, LAST (Learn from the Past).
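The summary suggests a proxy model guiding the target's updates with a self-distillation regularizer. A hedged sketch of one such regularizer, assuming an EMA proxy in PyTorch (the helpers `self_distill_loss` and `update_proxy` and all hyperparameters are illustrative, not LAST's exact formulation):

```python
import torch
import torch.nn.functional as F

def self_distill_loss(target_logits, proxy_logits, tau=2.0):
    """KL divergence pulling the target model's predictions toward a
    slowly-updated proxy's predictions."""
    p = F.log_softmax(target_logits / tau, dim=-1)
    q = F.softmax(proxy_logits.detach() / tau, dim=-1)
    return F.kl_div(p, q, reduction="batchmean") * tau ** 2

def update_proxy(proxy, target, momentum=0.999):
    """EMA update of the proxy's weights toward the target model."""
    # proxy would be initialized as a deep copy of the target model
    with torch.no_grad():
        for pp, tp in zip(proxy.parameters(), target.parameters()):
            pp.mul_(momentum).add_(tp, alpha=1 - momentum)
```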
arXiv Detail & Related papers (2023-10-19T13:13:41Z)
- Towards Deep Learning Models Resistant to Transfer-based Adversarial Attacks via Data-centric Robust Learning [16.53553150596255]
Adversarial training (AT) is recognized as the strongest defense against white-box attacks.
We name this new defense paradigm Data-centric Robust Learning (DRL).
arXiv Detail & Related papers (2023-10-15T17:20:42Z)
- Robust Feature Inference: A Test-time Defense Strategy using Spectral Projections [12.807619042576018]
We propose a novel test-time defense strategy called Robust Feature Inference (RFI).
RFI is easy to integrate with any existing (robust) training procedure without additional test-time computation.
We show that RFI consistently improves robustness against both adaptive and transfer attacks.
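Since the summary names spectral projections, a plausible reading is projecting features onto a top eigenspace of the training-feature covariance before classification. A hedged sketch under that assumption:

```python
import torch

def fit_spectral_projector(train_feats, k=64):
    """Estimate the top-k eigenspace of the centered training-feature
    covariance (RFI's exact projection may differ)."""
    mean = train_feats.mean(dim=0)
    centered = train_feats - mean
    cov = centered.T @ centered / train_feats.shape[0]
    _, eigvecs = torch.linalg.eigh(cov)  # eigenvalues in ascending order
    return mean, eigvecs[:, -k:]         # keep the top-k eigenvectors

def project_features(f, mean, top):
    """Project test-time features onto the retained subspace before
    applying the classifier head."""
    return (f - mean) @ top @ top.T + mean
```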
arXiv Detail & Related papers (2023-07-21T16:18:58Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a defense based on trigger reverse engineering and show that it improves performance with guaranteed robustness.
Comparisons against eight competing SOTA defense methods show the empirical superiority of our method under both single-shot and continuous FL backdoor attacks.
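The abstract names a trigger-reverse-engineering-based defense; the classic centralized form of that component (Neural-Cleanse style) looks roughly like the following hedged sketch, with FLIP's federated and provable machinery omitted and all hyperparameters illustrative.

```python
import itertools
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, loader, target_class,
                             steps=500, lam=1e-3, lr=0.1, shape=(3, 32, 32)):
    """Optimize a small mask and pattern that push benign inputs toward
    the target class; the L1 penalty keeps the recovered mask sparse."""
    mask = torch.zeros(shape[1:], requires_grad=True)   # (H, W)
    pattern = torch.zeros(shape, requires_grad=True)    # (C, H, W)
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    batches = itertools.cycle(loader)
    for _ in range(steps):
        x, _ = next(batches)
        m = torch.sigmoid(mask)                         # keep mask in [0, 1]
        stamped = (1 - m) * x + m * torch.sigmoid(pattern)
        y = torch.full((x.shape[0],), target_class, dtype=torch.long)
        loss = F.cross_entropy(model(stamped), y) + lam * m.sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(mask).detach(), torch.sigmoid(pattern).detach()
```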
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- A Light Recipe to Train Robust Vision Transformers [34.51642006926379]
We show that Vision Transformers (ViTs) can serve as an underlying architecture for improving the robustness of machine learning models against evasion attacks.
We achieve this objective using a custom adversarial training recipe, discovered using rigorous ablation studies on a subset of the ImageNet dataset.
We show that our recipe generalizes to different classes of ViT architectures and large-scale models on full ImageNet-1k.
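The recipe's core component, adversarial training with PGD, can be sketched as follows; the ViT-specific choices the paper ablates (augmentation, warmup, schedules) are not reproduced here.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD with a random start."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def adv_train_step(model, x, y, optimizer):
    """One adversarial-training step on PGD examples."""
    loss = F.cross_entropy(model(pgd_attack(model, x, y)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```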
arXiv Detail & Related papers (2022-09-15T16:00:04Z)
- Deeper Insights into ViTs Robustness towards Common Corruptions [82.79764218627558]
We investigate how CNN-like architectural designs and CNN-based data augmentation strategies impact ViTs' robustness to common corruptions.
We demonstrate that overlapping patch embedding (sketched below) and convolutional Feed-Forward Networks (FFNs) boost robustness.
We also introduce a novel conditional method enabling input-varied augmentations from two angles.
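A minimal sketch of the overlapping patch embedding mentioned above: a strided convolution whose kernel exceeds its stride, so adjacent patches share pixels. The kernel and stride values are illustrative.

```python
import torch.nn as nn

class OverlappingPatchEmbed(nn.Module):
    """Tokenize an image with overlapping patches, unlike the standard
    ViT embedding where kernel_size == stride (no overlap)."""
    def __init__(self, in_ch=3, dim=768, kernel=7, stride=4):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=kernel,
                              stride=stride, padding=kernel // 2)

    def forward(self, x):
        x = self.proj(x)                     # (B, dim, H', W')
        return x.flatten(2).transpose(1, 2)  # (B, H'*W', dim) tokens
```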
arXiv Detail & Related papers (2022-04-26T08:22:34Z)
- Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests [73.32304304788838]
This paper systematically uncovers the failure mode of non-parametric TSTs through adversarial attacks.
To enable TST-agnostic attacks, we propose an ensemble attack framework that jointly minimizes the different types of test criteria.
To robustify TSTs, we propose a max-min optimization that iteratively generates adversarial pairs to train the deep kernels.
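A hedged sketch of the max-min idea, assuming an MMD statistic with a Gaussian kernel on deep features: the inner loop perturbs one sample set to minimize the statistic (the attack), and the outer step updates the featurizer to maximize it. The kernel choice and step sizes are illustrative.

```python
import torch

def gaussian_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD under a Gaussian kernel."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def max_min_step(featurizer, x, y, kernel_opt, eps=0.05, inner_steps=5):
    y_adv = y.clone().detach().requires_grad_(True)
    for _ in range(inner_steps):  # inner: adversarial pairs shrink the statistic
        stat = gaussian_mmd2(featurizer(x), featurizer(y_adv))
        grad, = torch.autograd.grad(stat, y_adv)
        y_adv = (y_adv - eps * grad.sign()).detach().requires_grad_(True)
    stat = gaussian_mmd2(featurizer(x), featurizer(y_adv.detach()))
    kernel_opt.zero_grad()
    (-stat).backward()            # outer: deep kernel maximizes test power
    kernel_opt.step()
    return stat.item()
```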
arXiv Detail & Related papers (2022-02-07T11:18:04Z)