Test-time Adaptation vs. Training-time Generalization: A Case Study in
Human Instance Segmentation using Keypoints Estimation
- URL: http://arxiv.org/abs/2212.06242v1
- Date: Mon, 12 Dec 2022 20:56:25 GMT
- Authors: Kambiz Azarian, Debasmit Das, Hyojin Park, Fatih Porikli
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of improving the human instance segmentation mask
quality for a given test image using keypoints estimation. We compare two
alternative approaches. The first approach is a test-time adaptation (TTA)
method, where we allow test-time modification of the segmentation network's
weights using a single unlabeled test image. In this approach, we do not assume
test-time access to the labeled source dataset. More specifically, our TTA
method consists of using the keypoints estimates as pseudo labels and
backpropagating them to adjust the backbone weights. The second approach is a
training-time generalization (TTG) method, where we permit offline access to
the labeled source dataset but not the test-time modification of weights.
Furthermore, we do not assume the availability of any images from or knowledge
about the target domain. Our TTG method consists of augmenting the backbone
features with those generated by the keypoints head and feeding the aggregate
vector to the mask head. Through a comprehensive set of ablations, we evaluate
both approaches and identify several factors limiting the TTA gains. In
particular, we show that in the absence of a significant domain shift, TTA may
hurt and TTG shows only a small gain in performance, whereas for a large domain
shift, TTA gains are smaller and dependent on the heuristics used, while TTG
gains are larger and robust to architectural choices.
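The two regimes contrasted in the abstract can be caricatured in a few lines of Python. This is a minimal illustrative sketch, not the paper's actual keypoint/mask pipeline: the scalar `w` stands in for the backbone weights, and `tta_step`, `ttg_forward`, and `mask_head` are hypothetical names chosen for this example.

```python
def tta_step(w, feat, keypoint_pseudo_label, lr=0.1):
    """Test-time adaptation (TTA): treat the keypoint estimate as a pseudo
    label and take one gradient step on the backbone weight w, here with a
    squared loss (pred - y)^2 on a single unlabeled test image."""
    pred = w * feat
    grad = 2.0 * (pred - keypoint_pseudo_label) * feat  # d/dw of (w*feat - y)^2
    return w - lr * grad


def ttg_forward(backbone_feat, keypoint_feat, mask_head):
    """Training-time generalization (TTG): concatenate the backbone features
    with those from the keypoints head and feed the aggregate vector to the
    mask head; no weights are modified at test time."""
    aggregate = backbone_feat + keypoint_feat  # list concatenation
    return mask_head(aggregate)


# TTA: one self-training step moves w toward the pseudo label's optimum.
w_adapted = tta_step(1.0, 2.0, 4.0)  # -> 1.8
# TTG: frozen weights; only the input to the mask head changes.
mask_score = ttg_forward([1.0], [2.0], sum)  # -> 3.0
```

The contrast the paper evaluates is visible even at this toy scale: under TTA the backbone weight changes per test image via backpropagated pseudo labels, while under TTG the weights stay frozen and only the feature vector fed to the mask head is enriched.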
Related papers
- Domain-Specific Block Selection and Paired-View Pseudo-Labeling for Online Test-Time Adaptation [6.64353332639395]
Test-time adaptation (TTA) aims to adapt a pre-trained model to a new test domain without access to source data after deployment.
Existing approaches rely on self-training with pseudo-labels since ground-truth cannot be obtained from test data.
We propose DPLOT, a simple yet effective TTA framework that consists of two components: (1) domain-specific block selection and (2) pseudo-label generation using paired-view images.
arXiv Detail & Related papers (2024-04-17T00:21:36Z)
- Medical Image Segmentation with InTEnt: Integrated Entropy Weighting for Single Image Test-Time Adaptation [6.964589353845092]
Test-time adaptation (TTA) refers to adapting a trained model to a new domain during testing.
Here, we propose to adapt a medical image segmentation model with only a single unlabeled test image.
Our method, validated on 24 source/target domain splits across 3 medical image datasets, surpasses the leading method by 2.9% Dice coefficient on average.
arXiv Detail & Related papers (2024-02-14T22:26:07Z)
- Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784]
Test-time adaptation (TTA) is a task that continually adapts a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning the model with a cross-entropy loss according to estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample causes the cross-entropy loss's vulnerability to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
arXiv Detail & Related papers (2024-01-15T03:33:39Z)
- Few Clicks Suffice: Active Test-Time Adaptation for Semantic Segmentation [14.112999441288615]
Test-time adaptation (TTA) adapts pre-trained models during inference using unlabeled test data.
There is still a significant performance gap between the TTA approaches and their supervised counterparts.
We propose the ATASeg framework, which consists of two parts, i.e., a model adapter and a label annotator.
arXiv Detail & Related papers (2023-12-04T12:16:02Z)
- Benchmarking Test-Time Adaptation against Distribution Shifts in Image Classification [77.0114672086012]
Test-time adaptation (TTA) is a technique aimed at enhancing the generalization performance of models by leveraging unlabeled samples solely during prediction.
We present a benchmark that systematically evaluates 13 prominent TTA methods and their variants on five widely used image classification datasets.
arXiv Detail & Related papers (2023-07-06T16:59:53Z)
- AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation [64.9230895853942]
Domain generalization can be arbitrarily hard without exploiting target domain information.
Test-time adaptation (TTA) methods have been proposed to address this issue.
In this work, we adopt a Non-Parametric Classifier to perform test-time Adaptation (AdaNPC).
arXiv Detail & Related papers (2023-04-25T04:23:13Z)
- Improved Test-Time Adaptation for Domain Generalization [48.239665441875374]
Test-time training (TTT) adapts the learned model with test data.
This work addresses two main factors: selecting an appropriate auxiliary TTT task for updating and identifying reliable parameters to update during the test phase.
We introduce additional adaptive parameters for the trained model, and we suggest only updating the adaptive parameters during the test phase.
arXiv Detail & Related papers (2023-04-10T10:12:38Z)
- Feature Alignment and Uniformity for Test Time Adaptation [8.209137567840811]
Test-time adaptation aims to adapt deep neural networks upon receiving out-of-distribution test-domain samples.
In this setting, the model can only access online unlabeled test samples and pre-trained models on the training domains.
arXiv Detail & Related papers (2023-03-20T06:44:49Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.