On-the-Fly Test-time Adaptation for Medical Image Segmentation
- URL: http://arxiv.org/abs/2203.05574v1
- Date: Thu, 10 Mar 2022 18:51:29 GMT
- Title: On-the-Fly Test-time Adaptation for Medical Image Segmentation
- Authors: Jeya Maria Jose Valanarasu, Pengfei Guo, Vibashan VS, and Vishal M.
Patel
- Abstract summary: Adapting the source model to the target data distribution at test time is an efficient solution to the distribution-shift problem.
We propose a new framework called Adaptive UNet, where each convolutional block is equipped with an adaptive batch normalization layer.
During test-time, the model takes in just the new test image and generates a domain code to adapt the features of the source model to the test data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One major problem in deep learning-based solutions for medical imaging is the
drop in performance when a model is tested on a data distribution different
from the one it was trained on. Adapting the source model to the target data
distribution at test time is an efficient solution to this distribution-shift
problem. Previous methods address this by adapting the model to the target
distribution using techniques such as entropy minimization or regularization. In
these methods, the model is still updated by back-propagation using an
unsupervised loss on the complete test data distribution. In real-world clinical
settings, it makes more sense to adapt a model to a new test image on-the-fly
and to avoid updating the model during inference, due to privacy concerns and
limited computing resources at deployment. To this end, we propose a new
setting - On-the-Fly Adaptation - which
is zero-shot and episodic (i.e., the model is adapted to a single image at a
time and also does not perform any back-propagation during test-time). To
achieve this, we propose a new framework called Adaptive UNet where each
convolutional block is equipped with an adaptive batch normalization layer to
adapt the features with respect to a domain code. The domain code is generated
using a pre-trained encoder trained on a large corpus of medical images. During
test-time, the model takes in just the new test image and generates a domain
code to adapt the features of the source model to the test data. We validate
the performance on both 2D and 3D data distribution shifts, achieving better
performance than previous test-time adaptation methods.
Code is available at https://github.com/jeya-maria-jose/On-The-Fly-Adaptation
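The adaptive batch normalization idea can be sketched in plain Python. This is a minimal, hypothetical illustration, not the paper's implementation: it uses a scalar domain code and scalar modulation weights `w_g`/`w_b` on a 1-D feature vector, whereas the paper generates a domain code with a pre-trained encoder and modulates convolutional feature maps.

```python
import math

def adaptive_batch_norm(features, gamma, beta, domain_code, w_g, w_b, eps=1e-5):
    """Normalize a 1-D feature vector, then scale/shift with affine
    parameters modulated by a scalar domain code.

    All names and shapes here are illustrative simplifications of the
    adaptive batch-norm idea, not the paper's exact formulation.
    """
    mean = sum(features) / len(features)
    var = sum((x - mean) ** 2 for x in features) / len(features)
    normed = [(x - mean) / math.sqrt(var + eps) for x in features]
    # The domain code shifts the affine parameters, adapting feature
    # statistics to the test image without any back-propagation.
    g = gamma + w_g * domain_code
    b = beta + w_b * domain_code
    return [g * x + b for x in normed]
```

With a zero domain code this reduces to ordinary batch normalization; a nonzero code moves the effective scale and shift, which is the mechanism that lets a single forward pass adapt the source model's features.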
Related papers
- Source-Free Test-Time Adaptation For Online Surface-Defect Detection
We propose a novel test-time adaptation surface-defect detection approach.
It adapts pre-trained models to new domains and classes during inference.
Experiments demonstrate it outperforms state-of-the-art techniques.
arXiv Detail & Related papers (2024-08-18T14:24:05Z)
- Each Test Image Deserves A Specific Prompt: Continual Test-Time Adaptation for 2D Medical Image Segmentation
Cross-domain distribution shift is a significant obstacle to deploying the pre-trained semantic segmentation model in real-world applications.
Test-time adaptation has proven its effectiveness in tackling the cross-domain distribution shift during inference.
We propose the Visual Prompt-based Test-Time Adaptation (VPTTA) method to train a specific prompt for each test image to align the statistics in the batch normalization layers.
arXiv Detail & Related papers (2023-11-30T09:03:47Z)
- Back to the Source: Diffusion-Driven Test-Time Adaptation
Test-time adaptation harnesses test inputs to improve accuracy of a model trained on source data when tested on shifted target data.
We instead update the target data, by projecting all test inputs toward the source domain with a generative diffusion model.
arXiv Detail & Related papers (2022-07-07T17:14:10Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages the model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- SITA: Single Image Test-time Adaptation
In Test-time Adaptation (TTA), given a model trained on some source data, the goal is to adapt it to make better predictions for test instances from a different distribution.
We consider TTA in a more pragmatic setting, which we refer to as SITA (Single Image Test-time Adaptation).
Here, when making each prediction, the model has access only to the given single test instance, rather than a batch of instances.
We propose a novel approach, AugBN, for the SITA setting that requires only forward propagation.
arXiv Detail & Related papers (2021-12-04T15:01:35Z)
- MT3: Meta Test-Time Training for Self-Supervised Test-Time Adaption
An unresolved problem in Deep Learning is the ability of neural networks to cope with domain shifts during test-time.
We combine meta-learning, self-supervision and test-time training to learn to adapt to unseen test distributions.
Our approach significantly improves the state-of-the-art results on the CIFAR-10-Corrupted image classification benchmark.
arXiv Detail & Related papers (2021-03-30T09:33:38Z)
- Tent: Fully Test-time Adaptation by Entropy Minimization
A model must adapt itself to generalize to new and different data during testing.
In this setting of fully test-time adaptation the model has only the test data and its own parameters.
We propose to adapt by test entropy minimization (tent): we optimize the model for confidence as measured by the entropy of its predictions.
arXiv Detail & Related papers (2020-06-18T17:55:28Z)
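For contrast with the back-propagation-free On-the-Fly setting, tent's adaptation objective is simply the Shannon entropy of the model's softmax predictions, minimized over test batches. A minimal sketch of that quantity, assuming plain lists of logits rather than tent's actual implementation:

```python
import math

def prediction_entropy(logits):
    """Shannon entropy of a softmax prediction.

    Tent minimizes the mean of this quantity over test batches by
    updating only the batch-norm affine parameters; this sketch just
    computes the entropy itself for one prediction.
    """
    # Numerically stable softmax: subtract the max logit first.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Confident (peaked) predictions give low entropy; uniform
    # predictions give the maximum, log(num_classes).
    return -sum(p * math.log(p) for p in probs)
```

Minimizing this loss pushes the adapted model toward confident predictions on the test distribution, which is exactly the kind of test-time gradient update the On-the-Fly setting above avoids.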
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.