Deep learning based domain adaptation for mitochondria segmentation on
EM volumes
- URL: http://arxiv.org/abs/2202.10773v1
- Date: Tue, 22 Feb 2022 09:49:25 GMT
- Title: Deep learning based domain adaptation for mitochondria segmentation on
EM volumes
- Authors: Daniel Franco-Barranco and Julio Pastor-Tronch and Aitor
Gonzalez-Marfil and Arrate Muñoz-Barrutia and Ignacio Arganda-Carreras
- Abstract summary: We present three unsupervised domain adaptation strategies to improve mitochondria segmentation in the target domain.
We propose a new training stopping criterion based on morphological priors obtained exclusively in the source domain.
In the absence of validation labels, monitoring our proposed morphology-based metric is an intuitive and effective way to stop the training process and select models that are optimal on average.
- Score: 5.682594415267948
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate segmentation of electron microscopy (EM) volumes of the brain is
essential to characterize neuronal structures at a cell or organelle level.
While supervised deep learning methods have led to major breakthroughs in that
direction during the past years, they usually require large amounts of
annotated data to be trained, and perform poorly on other data acquired under
similar experimental and imaging conditions. This is a problem known as domain
adaptation, since models that learned from a sample distribution (or source
domain) struggle to maintain their performance on samples extracted from a
different distribution or target domain. In this work, we address the complex
case of deep learning based domain adaptation for mitochondria segmentation
across EM datasets from different tissues and species. We present three
unsupervised domain adaptation strategies to improve mitochondria segmentation
in the target domain based on (1) state-of-the-art style transfer between
images of both domains; (2) self-supervised learning to pre-train a model using
unlabeled source and target images, and then fine-tune it only with the source
labels; and (3) multi-task neural network architectures trained end-to-end with
both labeled and unlabeled images. Additionally, we propose a new training
stopping criterion based on morphological priors obtained exclusively in the
source domain. We carried out all possible cross-dataset experiments using
three publicly available EM datasets. We evaluated our proposed strategies on
the mitochondria semantic labels predicted on the target datasets. The methods
introduced here outperform the baseline methods and compare favorably to the
state of the art. In the absence of validation labels, monitoring our proposed
morphology-based metric is an intuitive and effective way to stop the training
process and select models that are optimal on average.
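
The abstract describes strategy (2) only at a high level. As a rough illustration, here is a minimal, hypothetical PyTorch sketch of that idea: a reconstruction pretext task on unlabeled source and target patches, followed by supervised fine-tuning with source labels only. The tiny encoder-decoder, the denoising pretext task, and the hyper-parameters are assumptions for illustration, not the architecture or training setup used in the paper.

```python
# Hypothetical sketch of strategy (2): self-supervised pre-training on unlabeled
# source + target patches, then supervised fine-tuning with source labels only.
# All choices below (model, pretext task, losses, learning rates) are assumptions.
import torch
import torch.nn as nn

class SmallEncDec(nn.Module):
    """Tiny encoder-decoder stand-in for the segmentation backbone."""
    def __init__(self, out_channels: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # For simplicity the same 1-channel head is reused for reconstruction
        # and for the binary mitochondria mask; in practice it may be reinitialized.
        self.decoder = nn.Conv2d(32, out_channels, 1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def pretrain_self_supervised(model, unlabeled_loader, epochs=1, lr=1e-3):
    """Pretext task: reconstruct clean patches from noisy versions (source + target)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for patches in unlabeled_loader:           # (B, 1, H, W) mixed-domain patches
            noisy = patches + 0.1 * torch.randn_like(patches)
            loss = mse(model(noisy), patches)
            opt.zero_grad(); loss.backward(); opt.step()

def finetune_with_source_labels(model, labeled_loader, epochs=1, lr=1e-4):
    """Supervised fine-tuning using labeled patches from the source domain only."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for patches, masks in labeled_loader:      # masks: binary mitochondria labels
            loss = bce(model(patches), masks)
            opt.zero_grad(); loss.backward(); opt.step()
```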
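The morphology-based stopping criterion is likewise only outlined in the abstract. The sketch below shows one plausible way to monitor such a criterion: shape statistics (median connected-component area, foreground fraction) are measured once on the source ground truth, and after each epoch the binarized target predictions are scored by their deviation from those priors, so no target labels are needed. The specific statistics and distance are illustrative assumptions, not the exact metric proposed in the paper.

```python
# Hypothetical sketch of a morphology-prior stopping criterion: keep the checkpoint
# whose target-domain predictions best match shape statistics measured on the
# labeled source domain. The chosen statistics are assumptions for illustration.
import numpy as np
from scipy import ndimage

def morphology_stats(binary_mask: np.ndarray) -> dict:
    """Connected-component statistics of a binary segmentation."""
    labels, n = ndimage.label(binary_mask)
    areas = np.bincount(labels.ravel())[1:] if n > 0 else np.array([0.0])
    return {
        "median_area": float(np.median(areas)),
        "fg_fraction": float(binary_mask.mean()),
    }

def prior_distance(pred_mask: np.ndarray, source_prior: dict) -> float:
    """Relative deviation of predicted statistics from the source-domain priors."""
    stats = morphology_stats(pred_mask)
    return sum(
        abs(stats[k] - source_prior[k]) / (abs(source_prior[k]) + 1e-8)
        for k in source_prior
    )

# Usage sketch: source_prior = morphology_stats(source_gt_mask) is computed once;
# after each epoch the binarized target predictions are scored, and the checkpoint
# with the smallest deviation is selected, e.g.
#   best_epoch = min(range(len(epoch_preds)),
#                    key=lambda e: prior_distance(epoch_preds[e], source_prior))
```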
Related papers
- Robust Source-Free Domain Adaptation for Fundus Image Segmentation [3.585032903685044]
Unsupervised Domain Adaptation (UDA) is a learning technique that transfers knowledge learned from labelled data in the source domain to a target domain with only unlabelled data.
In this study, we propose a two-stage training strategy for robust domain adaptation.
We propose a novel robust pseudo-label and pseudo-boundary (PLPB) method, which effectively utilizes unlabeled target data to generate pseudo labels and pseudo boundaries.
arXiv Detail & Related papers (2023-10-25T14:25:18Z) - Anatomy-guided domain adaptation for 3D in-bed human pose estimation [62.3463429269385]
3D human pose estimation is a key component of clinical monitoring systems.
We present a novel domain adaptation method, adapting a model from a labeled source to a shifted unlabeled target domain.
Our method consistently outperforms various state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2022-11-22T11:34:51Z) - Stacking Ensemble Learning in Deep Domain Adaptation for Ophthalmic
Image Classification [61.656149405657246]
Domain adaptation is effective in image classification tasks where obtaining sufficient labeled data is challenging.
We propose a novel method, named SELDA, for stacking ensemble learning via extending three domain adaptation methods.
The experimental results using Age-Related Eye Disease Study (AREDS) benchmark ophthalmic dataset demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2022-09-27T14:19:00Z) - Adapting the Mean Teacher for keypoint-based lung registration under
geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47% while even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z) - Cross-Modality Brain Tumor Segmentation via Bidirectional
Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z) - Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy
Minimisation for Multi-modal Cardiac Image Segmentation [10.417009344120917]
We present a novel UDA method for multi-modal cardiac image segmentation.
The proposed method is based on adversarial learning and adapts network features between source and target domain in different spaces.
We validated our method on two cardiac datasets by adapting from the annotated source domain to the unannotated target domain.
arXiv Detail & Related papers (2021-03-15T08:59:44Z) - TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain
Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z) - A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z) - Siloed Federated Learning for Multi-Centric Histopathology Datasets [0.17842332554022694]
This paper proposes a novel federated learning approach for deep learning architectures in the medical domain.
Local-statistic batch normalization (BN) layers are introduced, resulting in collaboratively-trained, yet center-specific models.
We benchmark the proposed method on the classification of tumorous histopathology image patches extracted from the Camelyon16 and Camelyon17 datasets.
arXiv Detail & Related papers (2020-08-17T15:49:30Z) - Few shot domain adaptation for in situ macromolecule structural
classification in cryo-electron tomograms [13.51208578647949]
We adapt a few shot domain adaptation method for deep learning based cross-domain subtomogram classification.
Our method achieves significant improvement on cross domain subtomogram classification compared with baseline methods.
arXiv Detail & Related papers (2020-07-30T12:39:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.