Test-Time Adaptation with Shape Moments for Image Segmentation
- URL: http://arxiv.org/abs/2205.07983v1
- Date: Mon, 16 May 2022 20:47:13 GMT
- Title: Test-Time Adaptation with Shape Moments for Image Segmentation
- Authors: Mathilde Bateson, Hervé Lombaert, Ismail Ben Ayed
- Abstract summary: We investigate test-time single-subject adaptation for segmentation.
We propose a Shape-guided Entropy Minimization objective for tackling this task.
We show the potential of integrating various shape priors to guide adaptation to plausible solutions.
- Score: 16.794050614196916
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Supervised learning is well-known to fail at generalization under
distribution shifts. In typical clinical settings, the source data is
inaccessible and the target distribution is represented with a handful of
samples: adaptation can only happen at test time, on a few subjects or even a
single one. We investigate test-time single-subject adaptation for
segmentation, and propose a Shape-guided Entropy Minimization objective for
tackling this task. During inference for a single testing subject, our loss is
minimized with respect to the batch normalization's scale and bias parameters.
We show the potential of integrating various shape priors to guide adaptation
to plausible solutions, and validate our method in two challenging scenarios:
MRI-to-CT adaptation of cardiac segmentation and cross-site adaptation of
prostate segmentation. Our approach exhibits substantially better performance
than the existing test-time adaptation methods. Even more surprisingly, it
fares better than state-of-the-art domain adaptation methods, although it
forgoes training on additional target data during adaptation. Our results
question the usefulness of training on target data in segmentation adaptation,
and point to the substantial effect of shape priors on test-time inference.
Our framework can be readily used for integrating various priors and for
adapting any segmentation network, and our code is available.
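The abstract describes two ingredients: a per-pixel entropy term minimized with respect to the batch normalization scale and bias only, and shape priors that steer adaptation toward plausible masks. The sketch below is a minimal, framework-free NumPy illustration, not the authors' implementation; the function names (`entropy_loss`, `shape_moments`) and the choice of size/centroid moments are illustrative assumptions, and the paper's actual priors may differ.

```python
import numpy as np

def softmax(logits):
    """Softmax over the last (class) axis, numerically stabilized."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_loss(logits):
    """Mean per-pixel Shannon entropy of the softmax predictions.
    Test-time adaptation would minimize this w.r.t. only the BN
    scale/bias parameters, sharpening predictions on the test subject."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

def shape_moments(prob_map):
    """Soft shape descriptors of a foreground probability map:
    zeroth moment (soft size) and first moments (centroid)."""
    h, w = prob_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    size = prob_map.sum()
    cy = (ys * prob_map).sum() / (size + 1e-12)
    cx = (xs * prob_map).sum() / (size + 1e-12)
    return size, (cy, cx)
```

A shape prior can then be written as a penalty on the deviation of these moments from anatomically plausible values, added to the entropy term during inference.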
Related papers
- Adaptive Cascading Network for Continual Test-Time Adaptation [12.718826132518577]
We study the problem of continual test-time adaptation, where the goal is to adapt a source pre-trained model to a sequence of unlabelled target domains at test time.
Existing methods on test-time training suffer from several limitations.
arXiv Detail & Related papers (2024-07-17T01:12:57Z)
- Adaptive scheduling for adaptive sampling in POS taggers construction [0.27624021966289597]
We introduce an adaptive scheduling for adaptive sampling as a novel way of machine learning in the construction of part-of-speech taggers.
We analyze the shape of the learning curve geometrically in conjunction with a functional model to increase or decrease it at any time.
We also improve the robustness of sampling by paying greater attention to those regions of the training database subject to a temporary inflation in performance.
arXiv Detail & Related papers (2024-02-04T15:02:17Z)
- Test-Time Training for Semantic Segmentation with Output Contrastive Loss [12.535720010867538]
Deep learning-based segmentation models have achieved impressive performance on public benchmarks, but generalizing well to unseen environments remains a major challenge.
This paper introduces Output Contrastive Loss (OCL), known for its capability to learn robust and generalized representations, to stabilize the adaptation process.
Our method excels even when applied to models initially pre-trained using domain adaptation methods on test domain data, showcasing its resilience and adaptability.
arXiv Detail & Related papers (2023-11-14T03:13:47Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) addresses distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Few-Shot Adaptation of Pre-Trained Networks for Domain Shift [17.123505029637055]
Deep networks are prone to performance degradation when there is a domain shift between the source (training) data and target (test) data.
Recent test-time adaptation methods update batch normalization layers of pre-trained source models deployed in new target environments with streaming data to mitigate such performance degradation.
We propose a framework for few-shot domain adaptation to address the practical challenges of data-efficient adaptation.
arXiv Detail & Related papers (2022-05-30T16:49:59Z)
- DLTTA: Dynamic Learning Rate for Test-time Adaptation on Cross-domain Medical Images [56.72015587067494]
We propose a novel dynamic learning rate adjustment method for test-time adaptation, called DLTTA.
Our method achieves effective and fast test-time adaptation with consistent performance improvement over current state-of-the-art test-time adaptation methods.
arXiv Detail & Related papers (2022-05-27T02:34:32Z)
- Continual Test-Time Domain Adaptation [94.51284735268597]
Test-time domain adaptation aims to adapt a source pre-trained model to a target domain without using any source data.
CoTTA is easy to implement and can be readily incorporated in off-the-shelf pre-trained models.
arXiv Detail & Related papers (2022-03-25T11:42:02Z)
- Unsupervised neural adaptation model based on optimal transport for spoken language identification [54.96267179988487]
Due to the mismatch of statistical distributions of acoustic speech between training and testing sets, the performance of spoken language identification (SLID) could be drastically degraded.
We propose an unsupervised neural adaptation model to deal with the distribution mismatch problem for SLID.
arXiv Detail & Related papers (2020-12-24T07:37:19Z)
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
arXiv Detail & Related papers (2020-07-06T17:59:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.