Uncovering Adversarial Risks of Test-Time Adaptation
- URL: http://arxiv.org/abs/2301.12576v1
- Date: Sun, 29 Jan 2023 22:58:05 GMT
- Title: Uncovering Adversarial Risks of Test-Time Adaptation
- Authors: Tong Wu, Feiran Jia, Xiangyu Qi, Jiachen T. Wang, Vikash Sehwag, Saeed
Mahloujifar, Prateek Mittal
- Abstract summary: Test-time adaptation (TTA) has been proposed as a promising solution for addressing distribution shifts.
We uncover a novel security vulnerability of TTA based on the insight that predictions on benign samples can be impacted by malicious samples in the same batch.
We propose Distribution Invading Attack (DIA), which injects a small fraction of malicious data into the test batch.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, test-time adaptation (TTA) has been proposed as a promising
solution for addressing distribution shifts. It allows a base model to adapt to
an unforeseen distribution during inference by leveraging the information from
the batch of (unlabeled) test data. However, we uncover a novel security
vulnerability of TTA based on the insight that predictions on benign samples
can be impacted by malicious samples in the same batch. To exploit this
vulnerability, we propose Distribution Invading Attack (DIA), which injects a
small fraction of malicious data into the test batch. DIA causes models using
TTA to misclassify benign and unperturbed test data, providing an entirely new
capability for adversaries that is infeasible in canonical machine learning
pipelines. Through comprehensive evaluations, we demonstrate the high
effectiveness of our attack on multiple benchmarks across six TTA methods. In
response, we investigate two countermeasures to robustify the existing insecure
TTA implementations, following the principle of "security by design". Together,
we hope our findings can make the community aware of the utility-security
tradeoffs in deploying TTA and provide valuable insights for developing robust
TTA approaches.
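The batch-level coupling the paper exploits can be illustrated with a toy numpy sketch (an illustration of the vulnerability, not the paper's DIA implementation): when a TTA method re-estimates normalization statistics from the current test batch, a few malicious samples shift those statistics and thereby change the normalized features of benign samples that were never touched.

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_norm_adapt(batch):
    """Test-time normalization (sketch): re-estimate mean/std from the
    current test batch instead of using frozen training statistics."""
    mu = batch.mean(axis=0)
    sigma = batch.std(axis=0) + 1e-5
    return (batch - mu) / sigma

# 16 benign samples with 4 features each
benign = rng.normal(0.0, 1.0, size=(16, 4))

# Normalized features when the batch contains only benign data
clean_out = batch_norm_adapt(benign)

# Inject 2 malicious outliers (a small fraction of the batch)
malicious = np.full((2, 4), 50.0)
poisoned_batch = np.vstack([benign, malicious])
poisoned_out = batch_norm_adapt(poisoned_batch)[:16]

# The benign samples' normalized features change even though the
# benign inputs themselves are unperturbed.
shift = np.abs(clean_out - poisoned_out).mean()
print(f"mean feature shift on benign samples: {shift:.3f}")
```

Because the adversary never modifies the benign inputs, defenses that only inspect individual samples cannot detect this; the attack surface is the shared batch statistic itself.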
Related papers
- Active Test-Time Adaptation: Theoretical Analyses and An Algorithm [51.84691955495693]
Test-time adaptation (TTA) addresses distribution shifts for streaming test data in unsupervised settings.
We propose the novel problem setting of active test-time adaptation (ATTA) that integrates active learning within the fully TTA setting.
arXiv Detail & Related papers (2024-04-07T22:31:34Z)
- MedBN: Robust Test-Time Adaptation against Malicious Test Samples [11.397666167665484]
Test-time adaptation (TTA) has emerged as a promising solution to address performance decay due to unforeseen distribution shifts between training and test data.
Previous studies have uncovered security vulnerabilities within TTA even when a small proportion of the test batch is maliciously manipulated.
We propose median batch normalization (MedBN), leveraging the robustness of the median for statistics estimation within the batch normalization layer during test-time inference.
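The intuition behind MedBN can be sketched in a few lines of numpy (the MAD-based scaling below is an assumption for illustration; the paper's exact formulation may differ): median-based statistics barely move when a small fraction of the batch is malicious, whereas mean/std statistics shift substantially.

```python
import numpy as np

def mean_bn(batch, eps=1e-5):
    """Standard batch normalization statistics: mean and std."""
    return (batch - batch.mean(axis=0)) / (batch.std(axis=0) + eps)

def median_bn(batch, eps=1e-5):
    """MedBN-style normalization (sketch): the median and the median
    absolute deviation are robust to a few outliers in the batch."""
    med = np.median(batch, axis=0)
    mad = np.median(np.abs(batch - med), axis=0) + eps
    return (batch - med) / mad

rng = np.random.default_rng(1)
benign = rng.normal(size=(16, 4))
poisoned = np.vstack([benign, np.full((2, 4), 50.0)])  # 2/18 malicious

def benign_shift(norm):
    """How much the benign samples' normalized features move
    when the malicious samples are added to the batch."""
    return np.abs(norm(benign) - norm(poisoned)[:16]).mean()

mean_shift = benign_shift(mean_bn)
median_shift = benign_shift(median_bn)
print(f"mean/std BN shift: {mean_shift:.3f}")
print(f"median BN shift:   {median_shift:.3f}")
```

Up to a breakdown point, the median of the batch is unchanged by replacing a minority of samples with arbitrary values, which is exactly the robustness property the attack setting calls for.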
arXiv Detail & Related papers (2024-03-28T11:33:02Z)
- Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [55.17761802332469]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in unbearable optimization costs to many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
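A simplified sketch of entropy-based sample selection (the threshold value is illustrative, and EATA's redundancy filter is omitted): only low-entropy, i.e. confidently predicted, test samples are kept for the adaptation update.

```python
import numpy as np

def select_reliable(logits, entropy_threshold):
    """Keep only test samples whose prediction entropy is below a
    threshold, treating low-entropy samples as reliable for adaptation.
    Simplified illustration of an active sample selection criterion."""
    # Numerically stable softmax
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return entropy < entropy_threshold

logits = np.array([[5.0, 0.0, 0.0],   # confident prediction -> low entropy
                   [0.1, 0.0, 0.0]])  # near-uniform -> high entropy
mask = select_reliable(logits, entropy_threshold=0.5)
print(mask)  # only the confident sample is selected
```

Skipping high-entropy samples avoids both noisy gradients and needless backpropagation, which is where the efficiency gain comes from.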
arXiv Detail & Related papers (2024-03-18T05:49:45Z)
- Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning [73.75282761503581]
We propose DiffTPT, which leverages pre-trained diffusion models to generate diverse and informative new data.
Our experiments on test datasets with distribution shifts and unseen categories demonstrate that DiffTPT improves the zero-shot accuracy by an average of 5.13%.
arXiv Detail & Related papers (2023-08-11T09:36:31Z)
- Test-Time Adaptation with Perturbation Consistency Learning [32.58879780726279]
We propose a simple test-time adaptation method to promote the model to make stable predictions for samples with distribution shifts.
Our method can achieve higher or comparable performance with less inference time over strong PLM backbones.
arXiv Detail & Related papers (2023-04-25T12:29:22Z)
- Towards Stable Test-Time Adaptation in Dynamic Wild World [60.98073673220025]
Test-time adaptation (TTA) has been shown to be effective at tackling distribution shifts between training and testing data by adapting a given model on test samples.
Online model updating in TTA can be unstable, which is often a key obstacle preventing existing TTA methods from being deployed in the real world.
arXiv Detail & Related papers (2023-02-24T02:03:41Z)
- Robust Question Answering against Distribution Shifts with Test-Time Adaptation: An Empirical Study [24.34217596145152]
A deployed question answering (QA) model can easily fail when the test data has a distribution shift compared to the training data.
We evaluate test-time adaptation (TTA) to improve a model after deployment.
We also propose a novel TTA method called online imitation learning (OIL).
arXiv Detail & Related papers (2023-02-09T13:10:53Z)
- Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
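The idea of a Fisher regularizer can be sketched as an importance-weighted quadratic penalty (the parameter values and Fisher weights below are made up for illustration; the paper estimates them from data):

```python
import numpy as np

theta_anchor = np.array([1.0, -2.0, 0.5])  # parameters before adaptation
fisher = np.array([10.0, 0.1, 1.0])        # per-parameter importance weights

def fisher_penalty(theta, lam=1.0):
    """Penalize drift of parameters in proportion to their importance,
    so that adaptation leaves important parameters close to the anchor."""
    return lam * np.sum(fisher * (theta - theta_anchor) ** 2)

# Moving an important parameter costs far more than moving an
# unimportant one by the same amount.
cost_important = fisher_penalty(theta_anchor + np.array([0.1, 0.0, 0.0]))
cost_unimportant = fisher_penalty(theta_anchor + np.array([0.0, 0.1, 0.0]))
print(cost_important, cost_unimportant)
```

Adding this penalty to the adaptation loss discourages drastic changes to important parameters, which is how forgetting of the source model is mitigated.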
arXiv Detail & Related papers (2022-04-06T06:39:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.