Unsupervised Adversarial Domain Adaptation For Barrett's Segmentation
- URL: http://arxiv.org/abs/2012.05316v1
- Date: Wed, 9 Dec 2020 20:59:25 GMT
- Title: Unsupervised Adversarial Domain Adaptation For Barrett's Segmentation
- Authors: Numan Celik, Soumya Gupta, Sharib Ali, Jens Rittscher
- Abstract summary: Automated segmentation can help clinical endoscopists to assess and treat Barrett's oesophagus (BE) area more accurately.
Supervised models require a large amount of manual annotations incorporating all data variability in the training data.
In this work, we aim to alleviate this problem by applying an unsupervised domain adaptation (UDA) technique.
Our results show that the UDA-based approach outperforms traditional supervised U-Net segmentation by nearly 10% on both Dice similarity coefficient and intersection-over-union.
- Score: 0.8602553195689513
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Barrett's oesophagus (BE) is one of the early indicators of esophageal
cancer. Patients with BE are monitored and undergo ablation therapies to
minimise the risk, making it essential to identify the BE area precisely.
Automated segmentation can help clinical endoscopists to assess and treat the
BE area more accurately. Endoscopy imaging of BE can include multiple modalities
in addition to the conventional white light (WL) modality. Supervised models
require a large amount of manual annotations incorporating all data variability
in the training data. However, generating manual annotations is cumbersome,
tedious and labour-intensive, and additionally requires modality-specific
expertise. In this work, we aim to alleviate this problem by applying an
unsupervised domain adaptation (UDA) technique. Here, the UDA model is trained
on white light endoscopy images as the source domain and is adapted to
generalise and produce segmentations on different imaging modalities as the
target domain, namely narrow-band imaging and post acetic-acid WL imaging. Our
dataset consists of a total of 871 images spanning both source and target
domains. Our results show that the UDA-based approach outperforms traditional
supervised U-Net segmentation by nearly 10% on both Dice similarity coefficient
and intersection-over-union.
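For readers unfamiliar with how adversarial UDA for segmentation is typically set up, the sketch below illustrates the general recipe the abstract describes: a supervised loss on labelled WL (source) images plus a domain-adversarial loss that pushes predictions on unlabelled target images (NBI, post acetic-acid WL) to be indistinguishable from source predictions. This is a minimal illustration, not the authors' implementation: the TinySegmenter, DomainDiscriminator, lambda_adv weight and tensor shapes are all assumptions.

```python
# Minimal sketch of adversarial UDA for segmentation (NOT the paper's code).
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Stand-in for the U-Net style segmentation network (1-class BE mask)."""
    def __init__(self, in_ch=3, feat=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(feat, 1, 1)  # per-pixel BE logit

    def forward(self, x):
        return self.head(self.enc(x))

class DomainDiscriminator(nn.Module):
    """Predicts whether a segmentation map comes from the source (WL) or
    target (NBI / post acetic-acid WL) domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, seg_logits):
        return self.net(torch.sigmoid(seg_logits))

seg, disc = TinySegmenter(), DomainDiscriminator()
opt_seg = torch.optim.Adam(seg.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lambda_adv = 0.01  # assumed weighting of the adversarial term

def train_step(x_src, y_src, x_tgt):
    """One UDA step: supervised loss on labelled source images plus an
    adversarial loss that makes target-domain predictions look source-like."""
    # --- update segmenter ---
    opt_seg.zero_grad()
    p_src, p_tgt = seg(x_src), seg(x_tgt)
    loss_sup = bce(p_src, y_src)                    # labelled source only
    d_tgt = disc(p_tgt)
    loss_adv = bce(d_tgt, torch.zeros_like(d_tgt))  # fool D: target -> "source" label
    (loss_sup + lambda_adv * loss_adv).backward()
    opt_seg.step()
    # --- update discriminator ---
    opt_disc.zero_grad()
    d_src, d_tgt = disc(p_src.detach()), disc(p_tgt.detach())
    loss_d = bce(d_src, torch.zeros_like(d_src)) + bce(d_tgt, torch.ones_like(d_tgt))
    loss_d.backward()
    opt_disc.step()
    return loss_sup.item(), loss_d.item()

# Dummy batch just to show the assumed tensor shapes.
x_src = torch.randn(2, 3, 128, 128)                      # WL source images
y_src = torch.randint(0, 2, (2, 1, 128, 128)).float()    # BE masks
x_tgt = torch.randn(2, 3, 128, 128)                      # unlabelled target images
print(train_step(x_src, y_src, x_tgt))
```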
Related papers
- Domain Generalization for Endoscopic Image Segmentation by Disentangling Style-Content Information and SuperPixel Consistency [1.4991956341367338]
We propose an approach for style-content disentanglement using instance normalization and instance selective whitening (ISW) for improved domain generalization.
We evaluate our approach on two datasets: EndoUDA Barrett's Esophagus and EndoUDA polyps, and compare its performance with three state-of-the-art (SOTA) methods.
arXiv Detail & Related papers (2024-09-19T04:10:04Z)
- ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models [69.9178140563928]
Colonoscopy analysis is essential for assisting clinical diagnosis and treatment.
The scarcity of annotated data limits the effectiveness and generalization of existing methods.
We propose an Adaptive Refinement Semantic Diffusion Model (ArSDM) to generate colonoscopy images that benefit the downstream tasks.
arXiv Detail & Related papers (2023-09-03T07:55:46Z)
- SDC-UDA: Volumetric Unsupervised Domain Adaptation Framework for Slice-Direction Continuous Cross-Modality Medical Image Segmentation [8.33996223844639]
We propose SDC-UDA, a framework for slice-direction continuous cross-modality medical image segmentation.
It combines intra- and inter-slice self-attentive image translation, uncertainty-constrained pseudo-label refinement, and volumetric self-training.
We validate SDC-UDA with multiple publicly available cross-modality medical image segmentation datasets and achieve state-of-the-art segmentation performance.
arXiv Detail & Related papers (2023-05-18T14:44:27Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- SUPRA: Superpixel Guided Loss for Improved Multi-modal Segmentation in Endoscopy [1.1470070927586016]
Domain shift is a well-known problem in the medical imaging community.
In this paper, we explore the domain generalisation technique to enable DL methods to be used in such scenarios.
We show that our method yields an improvement of nearly 20% in the target domain set compared to the baseline.
arXiv Detail & Related papers (2022-11-09T03:13:59Z)
- AADG: Automatic Augmentation for Domain Generalization on Retinal Image Segmentation [1.0452185327816181]
We propose a data-manipulation-based domain generalization method, called Automated Augmentation for Domain Generalization (AADG).
Our AADG framework can effectively sample data augmentation policies that generate novel domains.
Our proposed AADG exhibits state-of-the-art generalization performance and outperforms existing approaches.
arXiv Detail & Related papers (2022-07-27T02:26:01Z)
- EndoUDA: A modality independent segmentation approach for endoscopy imaging [0.7874708385247353]
We propose a novel UDA-based segmentation method that couples the variational autoencoder and U-Net with a common EfficientNet-B4 backbone.
We show that our model can generalize to the unseen NBI (target) modality when trained using only the WLI (source) modality.
arXiv Detail & Related papers (2021-07-12T11:57:33Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Collaborative Unsupervised Domain Adaptation for Medical Image Diagnosis [102.40869566439514]
We seek to exploit rich labeled data from relevant domains to help learning in the target task via Unsupervised Domain Adaptation (UDA).
Unlike most UDA methods that rely on clean labeled data or assume samples are equally transferable, we innovatively propose a Collaborative Unsupervised Domain Adaptation algorithm.
We theoretically analyze the generalization performance of the proposed method, and also empirically evaluate it on both medical and general images.
arXiv Detail & Related papers (2020-07-05T11:49:17Z)