MDViT: Multi-domain Vision Transformer for Small Medical Image Segmentation Datasets
- URL: http://arxiv.org/abs/2307.02100v3
- Date: Fri, 7 Jun 2024 08:44:54 GMT
- Title: MDViT: Multi-domain Vision Transformer for Small Medical Image Segmentation Datasets
- Authors: Siyi Du, Nourhan Bayasi, Ghassan Hamarneh, Rafeef Garbi
- Abstract summary: Vision transformers (ViTs) have emerged as a promising solution to improve medical image segmentation (MIS).
ViTs are typically trained using a single source of data, which overlooks the valuable knowledge that could be leveraged from other available datasets.
In this paper, we propose MDViT, the first multi-domain ViT that includes domain adapters to mitigate data-hunger and combat NKT.
- Score: 19.44142290594537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite its clinical utility, medical image segmentation (MIS) remains a daunting task due to images' inherent complexity and variability. Vision transformers (ViTs) have recently emerged as a promising solution to improve MIS; however, they require larger training datasets than convolutional neural networks. To overcome this obstacle, data-efficient ViTs were proposed, but they are typically trained using a single source of data, which overlooks the valuable knowledge that could be leveraged from other available datasets. Naively combining datasets from different domains can result in negative knowledge transfer (NKT), i.e., a decrease in model performance on some domains with non-negligible inter-domain heterogeneity. In this paper, we propose MDViT, the first multi-domain ViT that includes domain adapters to mitigate data-hunger and combat NKT by adaptively exploiting knowledge in multiple small data resources (domains). Further, to enhance representation learning across domains, we integrate a mutual knowledge distillation paradigm that transfers knowledge between a universal network (spanning all the domains) and auxiliary domain-specific branches. Experiments on 4 skin lesion segmentation datasets show that MDViT outperforms state-of-the-art algorithms, with superior segmentation performance and a fixed model size at inference time even as more domains are added. Our code is available at https://github.com/siyi-wind/MDViT.
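The mutual knowledge distillation paradigm described in the abstract can be sketched as a symmetric KL-divergence loss between the soft predictions of the universal network and a domain-specific branch, so each teaches the other. This is a minimal illustrative sketch, not code from the MDViT repository; the function names, the temperature value, and the use of plain Python lists are all assumptions made for clarity.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution at a given temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions of equal length."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def mutual_distillation_loss(universal_logits, branch_logits, temperature=2.0):
    """Symmetric distillation: the universal network and a domain branch
    each distill their softened predictions into the other."""
    p = softmax(universal_logits, temperature)
    q = softmax(branch_logits, temperature)
    return kl_divergence(p, q) + kl_divergence(q, p)
```

In practice this per-pixel class-distribution loss would be averaged over all pixels and added to the supervised segmentation loss; a higher temperature softens the distributions so that the transfer emphasizes inter-class relationships rather than only the argmax prediction.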
Related papers
- Model-Contrastive Federated Domain Adaptation [3.9435648520559177]
Federated domain adaptation (FDA) aims to collaboratively transfer knowledge from source clients (domains) to the related but different target client.
We propose a model-based method named FDAC, aiming to address Federated Domain Adaptation based on Contrastive learning and Vision Transformer (ViT).
To the best of our knowledge, FDAC is the first attempt to learn transferable representations by manipulating the latent architecture of ViT under the federated setting.
arXiv Detail & Related papers (2023-05-07T23:48:03Z) - AADG: Automatic Augmentation for Domain Generalization on Retinal Image Segmentation [1.0452185327816181]
We propose a data manipulation based domain generalization method, called Automated Augmentation for Domain Generalization (AADG)
Our AADG framework can effectively sample data augmentation policies that generate novel domains.
Our proposed AADG exhibits state-of-the-art generalization performance and outperforms existing approaches.
arXiv Detail & Related papers (2022-07-27T02:26:01Z) - Data Augmentation for Cross-Domain Named Entity Recognition [22.66649873447105]
We study cross-domain data augmentation for the named entity recognition task.
We propose a novel neural architecture to transform the data representation from a high-resource to a low-resource domain.
We show that transforming the data to the low-resource domain representation achieves significant improvements over only using data from high-resource domains.
arXiv Detail & Related papers (2021-09-04T00:50:55Z) - Variational Attention: Propagating Domain-Specific Knowledge for Multi-Domain Learning in Crowd Counting [75.80116276369694]
In crowd counting, collecting a new large-scale dataset is perceived as intractable because of the laborious labelling required.
We resort to the multi-domain joint learning and propose a simple but effective Domain-specific Knowledge Propagating Network (DKPNet)
It is mainly achieved by proposing the novel Variational Attention(VA) technique for explicitly modeling the attention distributions for different domains.
arXiv Detail & Related papers (2021-08-18T08:06:37Z) - DARCNN: Domain Adaptive Region-based Convolutional Neural Network for Unsupervised Instance Segmentation in Biomedical Images [4.3171602814387136]
We propose leveraging the wealth of annotations in benchmark computer vision datasets to conduct unsupervised instance segmentation for diverse biomedical datasets.
We propose a Domain Adaptive Region-based Convolutional Neural Network (DARCNN), that adapts knowledge of object definition from COCO to multiple biomedical datasets.
We showcase DARCNN's performance for unsupervised instance segmentation on numerous biomedical datasets.
arXiv Detail & Related papers (2021-04-03T06:54:33Z) - Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy Minimisation for Multi-modal Cardiac Image Segmentation [10.417009344120917]
We present a novel UDA method for multi-modal cardiac image segmentation.
The proposed method is based on adversarial learning and adapts network features between source and target domain in different spaces.
We validated our method on two cardiac datasets by adapting from the annotated source domain to the unannotated target domain.
arXiv Detail & Related papers (2021-03-15T08:59:44Z) - Domain Adaptation for Learning Generator from Paired Few-Shot Data [72.04430033118426]
We propose a Paired Few-shot GAN (PFS-GAN) model for learning generators from sufficient source data and only a few target samples.
Our method has better quantitative and qualitative results on the generated target-domain data with higher diversity in comparison to several baselines.
arXiv Detail & Related papers (2021-02-25T10:11:44Z) - DoFE: Domain-oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets [96.92018649136217]
We present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains.
Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains.
Our framework generates satisfying segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
arXiv Detail & Related papers (2020-10-13T07:28:39Z) - Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation [56.694330303488435]
We propose a Learning to Combine for Multi-Source Domain Adaptation (LtC-MSDA) framework.
In a nutshell, a knowledge graph is constructed over the prototypes of various domains to realize information propagation among semantically adjacent representations.
Our approach outperforms existing methods by a remarkable margin.
arXiv Detail & Related papers (2020-07-17T07:52:44Z) - Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z) - MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.