Uncovering Adversarial Risks of Test-Time Adaptation
- URL: http://arxiv.org/abs/2301.12576v1
- Date: Sun, 29 Jan 2023 22:58:05 GMT
- Title: Uncovering Adversarial Risks of Test-Time Adaptation
- Authors: Tong Wu, Feiran Jia, Xiangyu Qi, Jiachen T. Wang, Vikash Sehwag, Saeed
Mahloujifar, Prateek Mittal
- Abstract summary: Test-time adaptation (TTA) has been proposed as a promising solution for addressing distribution shifts.
We uncover a novel security vulnerability of TTA based on the insight that predictions on benign samples can be impacted by malicious samples in the same batch.
We propose Distribution Invading Attack (DIA), which injects a small fraction of malicious data into the test batch.
- Score: 41.19226800089764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, test-time adaptation (TTA) has been proposed as a promising
solution for addressing distribution shifts. It allows a base model to adapt to
an unforeseen distribution during inference by leveraging the information from
the batch of (unlabeled) test data. However, we uncover a novel security
vulnerability of TTA based on the insight that predictions on benign samples
can be impacted by malicious samples in the same batch. To exploit this
vulnerability, we propose Distribution Invading Attack (DIA), which injects a
small fraction of malicious data into the test batch. DIA causes models using
TTA to misclassify benign and unperturbed test data, providing an entirely new
capability for adversaries that is infeasible in canonical machine learning
pipelines. Through comprehensive evaluations, we demonstrate the high
effectiveness of our attack on multiple benchmarks across six TTA methods. In
response, we investigate two countermeasures to robustify the existing insecure
TTA implementations, following the principle of "security by design". Together,
we hope our findings can make the community aware of the utility-security
tradeoffs in deploying TTA and provide valuable insights for developing robust
TTA approaches.
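
The vulnerability described above arises because many TTA methods (e.g., batch-norm statistics adaptation or TENT-style entropy minimization) normalize every test sample with statistics computed over the whole batch, so attacker-controlled samples can influence the predictions on benign ones. The sketch below illustrates this mechanism, assuming a PyTorch BatchNorm-based classifier that uses current-batch statistics at test time; the simple untargeted signed-gradient objective and all names (craft_malicious_batch, benign_x, n_mal) are illustrative simplifications, not the paper's exact DIA formulation.

```python
# Minimal sketch, assuming a BatchNorm-based classifier and a BN-adapt/TENT-style
# TTA setup where test samples are normalized with current-batch statistics.
import torch
import torch.nn.functional as F

def craft_malicious_batch(model, benign_x, benign_y, n_mal, steps=50, step_size=1.0 / 255):
    """Optimize n_mal attacker-controlled inputs so that, when normalized together
    with the benign samples, the shared batch statistics shift and predictions on
    the *benign* part of the batch degrade."""
    mal_x = torch.rand(n_mal, *benign_x.shape[1:],
                       device=benign_x.device, requires_grad=True)
    model.train()  # BatchNorm layers use current-batch statistics, as in TTA
    for _ in range(steps):
        batch = torch.cat([benign_x, mal_x], dim=0)
        logits = model(batch)[: benign_x.size(0)]   # predictions on benign samples only
        # The gradient reaches mal_x solely through the shared normalization statistics.
        loss = F.cross_entropy(logits, benign_y)
        grad, = torch.autograd.grad(loss, mal_x)
        with torch.no_grad():
            mal_x += step_size * grad.sign()        # ascend: increase benign error
            mal_x.clamp_(0.0, 1.0)
    return mal_x.detach()
```

The paper's DIA additionally constrains the malicious fraction of the batch and supports targeted variants; the point of this sketch is only that gradients flow from benign-sample losses to the attacker's inputs through the shared normalization, which is exactly what canonical (non-adaptive) inference pipelines prevent.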
Related papers
- On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning [49.17494657762375]
Test-time adaptation (TTA) updates the model weights during the inference stage using testing data to enhance generalization.
Existing studies have shown that when TTA is updated with crafted adversarial test samples, the performance on benign samples can deteriorate.
We propose an effective and realistic attack method that crafts poisoned samples without requiring access to benign samples.
arXiv Detail & Related papers (2024-10-07T01:29:19Z)
- MedBN: Robust Test-Time Adaptation against Malicious Test Samples [11.397666167665484]
Test-time adaptation (TTA) has emerged as a promising solution to address performance decay due to unforeseen distribution shifts between training and test data.
Previous studies have uncovered security vulnerabilities within TTA even when a small proportion of the test batch is maliciously manipulated.
We propose median batch normalization (MedBN), leveraging the robustness of the median for statistics estimation within the batch normalization layer during test-time inference (a minimal sketch of this idea appears at the end of this list).
arXiv Detail & Related papers (2024-03-28T11:33:02Z)
- Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [55.17761802332469]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in prohibitive optimization costs for many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
arXiv Detail & Related papers (2024-03-18T05:49:45Z)
- Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning [73.75282761503581]
We propose DiffTPT, which leverages pre-trained diffusion models to generate diverse and informative new data.
Our experiments on test datasets with distribution shifts and unseen categories demonstrate that DiffTPT improves the zero-shot accuracy by an average of 5.13%.
arXiv Detail & Related papers (2023-08-11T09:36:31Z)
- Test-Time Adaptation with Perturbation Consistency Learning [32.58879780726279]
We propose a simple test-time adaptation method that encourages the model to make stable predictions for samples with distribution shifts.
Our method can achieve higher or comparable performance with less inference time over strong PLM backbones.
arXiv Detail & Related papers (2023-04-25T12:29:22Z)
- Towards Stable Test-Time Adaptation in Dynamic Wild World [60.98073673220025]
Test-time adaptation (TTA) has been shown to be effective at tackling distribution shifts between training and testing data by adapting a given model on test samples.
However, the online model updates of TTA can be unstable, which is often a key obstacle preventing existing TTA methods from being deployed in the real world.
arXiv Detail & Related papers (2023-02-24T02:03:41Z)
- Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
arXiv Detail & Related papers (2022-04-06T06:39:40Z)
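
Among the defenses surveyed above, MedBN replaces the mean-based batch statistics in normalization layers with median-based ones, so that a small malicious fraction of the batch has only bounded influence on how benign samples are normalized. The sketch below (referenced from the MedBN entry) illustrates that idea, assuming PyTorch; the class name and the use of the median of squared deviations as a robust variance proxy are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of median-based batch normalization (the MedBN idea), assuming
# PyTorch. Replacing the per-channel mean with the median, and the variance with
# the median of squared deviations, bounds the influence a few malicious samples
# can exert on the batch statistics.
import torch
import torch.nn as nn

class MedianBatchNorm2d(nn.Module):
    def __init__(self, num_features, eps=1e-5, affine=True):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(num_features)) if affine else None
        self.bias = nn.Parameter(torch.zeros(num_features)) if affine else None

    def forward(self, x):                                    # x: (N, C, H, W)
        c = x.shape[1]
        flat = x.permute(1, 0, 2, 3).reshape(c, -1)          # all values per channel
        center = flat.median(dim=1).values                   # robust location estimate
        spread = (flat - center[:, None]).pow(2).median(dim=1).values  # robust scale
        out = (x - center[None, :, None, None]) / torch.sqrt(
            spread[None, :, None, None] + self.eps)
        if self.weight is not None:
            out = out * self.weight[None, :, None, None] + self.bias[None, :, None, None]
        return out
```

In a TTA pipeline, such a module would stand in for the standard BatchNorm2d layers whose batch statistics the attack sketched earlier manipulates, trading some clean accuracy for robustness to a minority of malicious samples in the batch.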
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.