MDD-UNet: Domain Adaptation for Medical Image Segmentation with
Theoretical Guarantees, a Proof of Concept
- URL: http://arxiv.org/abs/2312.12246v1
- Date: Tue, 19 Dec 2023 15:30:10 GMT
- Title: MDD-UNet: Domain Adaptation for Medical Image Segmentation with
Theoretical Guarantees, a Proof of Concept
- Authors: Asbjørn Munk, Ao Ma, Mads Nielsen
- Abstract summary: We propose an unsupervised domain adaptation framework for U-Nets with theoretical guarantees.
We find that the MDD-UNet is able to learn features which are domain-invariant with no knowledge about the labels in the target domain.
This work serves as a proof of concept by demonstrating an improvement on the U-Net in its standard form without modern enhancements.
- Score: 3.376269351435396
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The current state-of-the-art techniques for image segmentation are
often based on U-Net architectures: U-shaped encoder-decoder networks with skip
connections. Despite their strong performance, these architectures often do not
perform well when used on data with different characteristics than the data
they were trained on. Many techniques for improving performance in the presence
of domain shift have been developed; however, they typically have only loose
connections to the theory of domain adaptation. In this work, we propose an
unsupervised domain adaptation framework for U-Nets with theoretical guarantees
based on the Margin Disparity Discrepancy [1], called the MDD-UNet. We evaluate
the proposed technique on the task of hippocampus segmentation, and find that
the MDD-UNet is able to learn features which are domain-invariant with no
knowledge about the labels in the target domain. The MDD-UNet improves
performance over the standard U-Net on 11 out of 12 combinations of datasets.
This work serves as a proof of concept by demonstrating an improvement on the
U-Net in its standard form without modern enhancements, which opens up a new
avenue of studying domain adaptation for models with very large hypothesis
spaces from both methodological and practical perspectives. Code is available
at https://github.com/asbjrnmunk/mdd-unet.
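The abstract builds on the Margin Disparity Discrepancy, which measures how much an auxiliary classifier can be made to disagree with the main classifier on target data versus source data. Below is a minimal, illustrative sketch of the empirical 0-1 surrogate of that discrepancy; the toy linear classifiers and data points are invented for illustration and are not from the paper or its code.

```python
# Sketch of the 0-1 surrogate of the disparity discrepancy: the gap between
# how often an auxiliary classifier f_aux disagrees with the main classifier f
# on target data vs. source data. All names and values here are illustrative.

def predict(weights, x):
    """Hard label of a linear scorer: argmax over per-class dot products."""
    scores = [sum(w * xi for w, xi in zip(wc, x)) for wc in weights]
    return max(range(len(scores)), key=lambda c: scores[c])

def disparity(f, f_aux, data):
    """Fraction of samples where the auxiliary classifier disagrees with f."""
    disagree = sum(1 for x in data if predict(f, x) != predict(f_aux, x))
    return disagree / len(data)

def disparity_discrepancy(f, f_aux, source, target):
    """Empirical discrepancy: target disparity minus source disparity."""
    return disparity(f, f_aux, target) - disparity(f, f_aux, source)

# Two toy 2-class linear classifiers over 2-d features.
f = [[1.0, 0.0], [0.0, 1.0]]
f_aux = [[1.0, 0.1], [0.0, 1.0]]
source = [[2.0, 0.5], [0.3, 1.5], [1.8, 0.2]]
target = [[0.95, 1.0], [1.0, 0.8], [0.4, 2.0]]
print(disparity_discrepancy(f, f_aux, source, target))
# prints 0.3333333333333333 (f_aux flips 1 of 3 target points, 0 of 3 source points)
```

In the actual MDD framework this discrepancy uses a margin loss rather than the 0-1 count, and the auxiliary classifier is trained adversarially to maximize it while the feature extractor minimizes it.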
Related papers
- A Study on Unsupervised Domain Adaptation for Semantic Segmentation in the Era of Vision-Language Models [1.2499537119440245]
Domain shifts are one of the major challenges in deep learning based computer vision.
UDA methods have emerged which adapt a model to a new target domain by only using unlabeled data of that domain.
Recent vision-language models have demonstrated strong generalization capabilities which may facilitate domain adaptation.
We show that replacing the encoder of existing UDA methods by a vision-language pre-trained encoder can result in significant performance improvements.
arXiv Detail & Related papers (2024-11-25T14:12:24Z)
- Memory-Efficient Pseudo-Labeling for Online Source-Free Universal Domain Adaptation using a Gaussian Mixture Model [3.1265626879839923]
In practice, domain shifts are likely to occur between training and test data, necessitating domain adaptation (DA) to adjust the pre-trained source model to the target domain.
UniDA has gained attention for addressing the possibility of an additional category (label) shift between the source and target domain.
We propose a novel method that continuously captures the distribution of known classes in the feature space using a Gaussian mixture model (GMM).
Our approach achieves state-of-the-art results in all experiments on the DomainNet and Office-Home datasets.
arXiv Detail & Related papers (2024-07-19T11:13:31Z)
- Make the U in UDA Matter: Invariant Consistency Learning for Unsupervised Domain Adaptation [86.61336696914447]
We propose to make the U in Unsupervised DA matter by giving equal status to the two domains.
We dub our approach "Invariant CONsistency learning" (ICON).
ICON achieves the state-of-the-art performance on the classic UDA benchmarks: Office-Home and VisDA-2017, and outperforms all the conventional methods on the challenging WILDS 2.0 benchmark.
arXiv Detail & Related papers (2023-09-22T09:43:32Z)
- FIXED: Frustratingly Easy Domain Generalization with Mixup [53.782029033068675]
Domain generalization (DG) aims to learn a generalizable model from multiple training domains such that it can perform well on unseen target domains.
A popular strategy is to augment training data to benefit generalization through methods such as Mixup (Zhang et al., 2018).
We propose a simple yet effective enhancement for Mixup-based DG, namely domain-invariant Feature mIXup (FIX).
Our approach significantly outperforms nine state-of-the-art related methods, beating the best performing baseline by 6.5% on average in terms of test accuracy.
arXiv Detail & Related papers (2022-11-07T09:38:34Z)
- Disentangled Modeling of Domain and Relevance for Adaptable Dense Retrieval [54.349418995689284]
We propose a novel Dense Retrieval (DR) framework named Disentangled Dense Retrieval (DDR) to support effective domain adaptation for DR models.
By making the REM and DAMs disentangled, DDR enables a flexible training paradigm in which REM is trained with supervision once and DAMs are trained with unsupervised data.
DDR significantly improves ranking performance compared to strong DR baselines and substantially outperforms traditional retrieval methods in most scenarios.
arXiv Detail & Related papers (2022-08-11T11:18:50Z)
- Domain-Agnostic Prior for Transfer Semantic Segmentation [197.9378107222422]
Unsupervised domain adaptation (UDA) is an important topic in the computer vision community.
We present a mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP).
Our research reveals that UDA benefits much from better proxies, possibly from other data modalities.
arXiv Detail & Related papers (2022-04-06T09:13:25Z)
- Geometry-Aware Unsupervised Domain Adaptation [12.298214579392129]
Unsupervised Domain Adaptation (UDA) aims to transfer the knowledge from the labeled source domain to the unlabeled target domain in the presence of dataset shift.
Most existing methods cannot address the domain alignment and class discrimination well, which may distort the intrinsic data structure for downstream tasks.
We propose a novel geometry-aware model to learn the transferability and discriminability simultaneously via nuclear norm optimization.
arXiv Detail & Related papers (2021-12-21T08:45:42Z)
- DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation [99.88539409432916]
We study the unsupervised domain adaptation (UDA) process.
We propose a novel UDA method, DAFormer, based on the benchmark results.
DAFormer significantly improves the state-of-the-art performance by 10.8 mIoU for GTA->Cityscapes and 5.4 mIoU for Synthia->Cityscapes.
arXiv Detail & Related papers (2021-11-29T19:00:46Z)
- A New Bidirectional Unsupervised Domain Adaptation Segmentation Framework [27.13101555533594]
Unsupervised domain adaptation (UDA) techniques are proposed to bridge the gap between different domains.
In this paper, we propose a bidirectional UDA framework based on disentangled representation learning for equally competent two-way UDA performances.
arXiv Detail & Related papers (2021-08-18T05:25:11Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Continual Unsupervised Domain Adaptation for Semantic Segmentation [14.160280479726921]
Unsupervised Domain Adaptation (UDA) for semantic segmentation has been favorably applied to real-world scenarios in which pixel-level labels are hard to obtain.
We propose Continual UDA for semantic segmentation based on a newly designed Expanding Target-specific Memory (ETM) framework.
Our novel ETM framework contains Target-specific Memory (TM) for each target domain to alleviate catastrophic forgetting.
arXiv Detail & Related papers (2020-10-19T05:59:48Z)
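Several entries above build on Mixup-style augmentation (notably FIXED). The generic Mixup interpolation they start from is simple enough to sketch; this is the vanilla Mixup of Zhang et al. (2018), not the domain-invariant FIX variant, and all names here are illustrative.

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Generic Mixup: convexly combine two samples and their one-hot labels.
    The mixing weight lam is drawn from Beta(alpha, alpha)."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

# Mix two toy 2-d samples from opposite classes; the result is an
# interpolated feature vector with a soft label whose weights sum to 1.
random.seed(0)
x, y = mixup([1.0, 0.0], [1, 0], [0.0, 1.0], [0, 1])
print(x, y)
```

Domain-generalization variants such as FIX additionally constrain which features are mixed so that the interpolation happens in a domain-invariant space.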
This list is automatically generated from the titles and abstracts of the papers in this site.