Label Distribution Shift-Aware Prediction Refinement for Test-Time Adaptation
- URL: http://arxiv.org/abs/2411.15204v1
- Date: Wed, 20 Nov 2024 05:58:52 GMT
- Title: Label Distribution Shift-Aware Prediction Refinement for Test-Time Adaptation
- Authors: Minguk Jang, Hye Won Chung
- Abstract summary: We introduce label Distribution shift-Aware prediction Refinement for Test-time adaptation (DART), a novel TTA method that refines predictions by focusing on class-wise confusion patterns.
Our method exhibits 5-18% gains in accuracy under label distribution shifts on CIFAR-10C, without any performance degradation when there is no label distribution shift.
- Score: 12.231303059518229
- License:
- Abstract: Test-time adaptation (TTA) is an effective approach to mitigate performance degradation of trained models when encountering input distribution shifts at test time. However, existing TTA methods often suffer significant performance drops when facing additional class distribution shifts. We first analyze TTA methods under label distribution shifts and identify the presence of class-wise confusion patterns commonly observed across different covariate shifts. Based on this observation, we introduce label Distribution shift-Aware prediction Refinement for Test-time adaptation (DART), a novel TTA method that refines the predictions by focusing on class-wise confusion patterns. DART trains a prediction refinement module during an intermediate time by exposing it to several batches with diverse class distributions using the training dataset. This module is then used during test time to detect and correct class distribution shifts, significantly improving pseudo-label accuracy for test data. Our method exhibits 5-18% gains in accuracy under label distribution shifts on CIFAR-10C, without any performance degradation when there is no label distribution shift. Extensive experiments on CIFAR, PACS, OfficeHome, and ImageNet benchmarks demonstrate DART's ability to correct inaccurate predictions caused by test-time distribution shifts. This improvement leads to enhanced performance in existing TTA methods, making DART a valuable plug-in tool.
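The abstract describes a two-phase scheme: at an intermediate time, a refinement module is exposed to training batches with diverse class distributions; at test time, that module corrects predictions under label distribution shift. The sketch below illustrates only the general shape of such a pipeline. All names are hypothetical, the refinement is a placeholder affine map rather than DART's learned module, and the Dirichlet sampling is one common way (an assumption here) to generate batches with varied class priors.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_diverse_batch_priors(num_classes, num_batches, alpha=0.5):
    # Dirichlet sampling yields class distributions that vary widely across
    # batches, mimicking the "diverse class distributions" exposure phase
    # described in the abstract (an illustrative choice, not the paper's).
    return rng.dirichlet(alpha * np.ones(num_classes), size=num_batches)

def refine_predictions(probs, weight, bias):
    # Hypothetical refinement step: an affine transform of the predicted
    # probabilities followed by softmax renormalization. In DART the
    # refinement is learned; here the parameters are placeholders.
    logits = probs @ weight + bias
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

num_classes = 10
priors = sample_diverse_batch_priors(num_classes, num_batches=4)
probs = rng.dirichlet(np.ones(num_classes), size=32)  # stand-in model outputs
weight = np.eye(num_classes)   # identity weight: no learned correction yet
bias = np.zeros(num_classes)
refined = refine_predictions(probs, weight, bias)
```

With the identity placeholder the refinement changes nothing meaningful; the point is only the data flow: batch predictions in, renormalized corrected predictions out.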
Related papers
- DOTA: Distributional Test-Time Adaptation of Vision-Language Models [52.98590762456236]
Training-free test-time dynamic adapter (TDA) is a promising approach to address this issue.
We propose a simple yet effective method for DistributiOnal Test-time Adaptation (Dota)
Dota continually estimates the distributions of test samples, allowing the model to continually adapt to the deployment environment.
arXiv Detail & Related papers (2024-09-28T15:03:28Z)
- Temporal Test-Time Adaptation with State-Space Models [4.248760709042802]
Adapting a model on test samples can help mitigate this drop in performance.
Most test-time adaptation methods have focused on synthetic corruption shifts.
We propose STAD, a probabilistic state-space model that adapts a deployed model to temporal distribution shifts.
arXiv Detail & Related papers (2024-07-17T11:18:49Z)
- AdapTable: Test-Time Adaptation for Tabular Data via Shift-Aware Uncertainty Calibrator and Label Distribution Handler [29.395855812763617]
We propose AdapTable, a framework for adapting machine learning models to target data without accessing source data.
AdapTable operates in two stages: 1) calibrating model predictions using a shift-aware uncertainty calibrator, and 2) adjusting these predictions to match the target label distribution with a label distribution handler.
Our results demonstrate AdapTable's ability to handle various real-world distribution shifts, achieving up to a 16% improvement on the dataset.
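AdapTable's second stage adjusts predictions toward the target label distribution. A standard, generic way to realize such a "label distribution handler" is prior correction: reweight each class probability by the ratio of target to source class priors, then renormalize. This is a common textbook rule offered as a sketch, not AdapTable's actual implementation, and the function name is hypothetical.

```python
import numpy as np

def adjust_to_label_distribution(probs, source_prior, target_prior):
    # Prior-correction rule: scale each class probability by the ratio of
    # target to source priors, then renormalize rows to sum to one.
    # A generic sketch of a label-distribution-matching stage.
    w = np.asarray(target_prior) / np.asarray(source_prior)
    adjusted = probs * w
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# A model that is uncertain between two classes gets pulled toward the
# class that is more frequent in the (assumed known) target distribution.
probs = np.array([[0.5, 0.5]])
adjusted = adjust_to_label_distribution(probs, [0.5, 0.5], [0.9, 0.1])
```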
arXiv Detail & Related papers (2024-07-15T15:02:53Z)
- Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [55.17761802332469]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in unbearable optimization costs to many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
arXiv Detail & Related papers (2024-03-18T05:49:45Z)
- Channel-Selective Normalization for Label-Shift Robust Test-Time Adaptation [16.657929958093824]
Test-time adaptation is an approach to adjust models to a new data distribution during inference.
Test-time batch normalization is a simple and popular method that achieved compelling performance on domain shift benchmarks.
We propose to tackle this challenge by only selectively adapting channels in a deep network, minimizing drastic adaptation that is sensitive to label shifts.
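The idea of adapting only some channels of a normalization layer can be sketched as a per-channel choice between source statistics and test-batch statistics. The selection rule below (a boolean mask) is a deliberately simplified stand-in; the paper's actual criterion for which channels to adapt is not reproduced here.

```python
import numpy as np

def selective_bn_stats(source_mean, source_var, batch_mean, batch_var, use_batch):
    # For channels flagged in `use_batch`, adopt the test-batch statistics;
    # for all others, keep the source statistics. This limits adaptation to
    # a subset of channels, which is the core idea of selective adaptation.
    mean = np.where(use_batch, batch_mean, source_mean)
    var = np.where(use_batch, batch_var, source_var)
    return mean, var

# Three channels; adapt the first and third, keep source stats for the second.
mean, var = selective_bn_stats(
    source_mean=np.zeros(3), source_var=np.ones(3),
    batch_mean=np.ones(3), batch_var=2.0 * np.ones(3),
    use_batch=np.array([True, False, True]),
)
```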
arXiv Detail & Related papers (2024-02-07T15:41:01Z)
- Label Shift Adapter for Test-Time Adaptation under Covariate and Label Shifts [48.83127071288469]
Test-time adaptation (TTA) aims to adapt a pre-trained model to the target domain in a batch-by-batch manner during inference.
Most previous TTA approaches assume that both source and target domain datasets have balanced label distribution.
We propose a novel label shift adapter that can be incorporated into existing TTA approaches to deal with label shifts effectively.
arXiv Detail & Related papers (2023-08-17T06:37:37Z)
- TeST: Test-time Self-Training under Distribution Shift [99.68465267994783]
Test-Time Self-Training (TeST) is a technique that takes as input a model trained on some source data and a novel data distribution at test time.
We find that models adapted using TeST significantly improve over baseline test-time adaptation algorithms.
arXiv Detail & Related papers (2022-09-23T07:47:33Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address this challenge by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Training on Test Data with Bayesian Adaptation for Covariate Shift [96.3250517412545]
Deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.
We derive a Bayesian model that provides for a well-defined relationship between unlabeled inputs under distributional shift and model parameters.
We show that our method improves both accuracy and uncertainty estimation.
arXiv Detail & Related papers (2021-09-27T01:09:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.