Label Dropout: Improved Deep Learning Echocardiography Segmentation
Using Multiple Datasets With Domain Shift and Partial Labelling
- URL: http://arxiv.org/abs/2403.07818v1
- Date: Tue, 12 Mar 2024 16:57:56 GMT
- Authors: Iman Islam (1), Esther Puyol-Antón (1), Bram Ruijsink (1), Andrew J.
Reader (1), Andrew P. King (1) ((1) King's College London)
- Abstract summary: We propose a novel label dropout scheme to break the link between domain characteristics and the presence or absence of labels.
We demonstrate that label dropout improves echo segmentation Dice score by 62% and 25% on two cardiac structures when training using multiple diverse partially labelled datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Echocardiography (echo) is the first imaging modality used when assessing
cardiac function. The measurement of functional biomarkers from echo relies
upon the segmentation of cardiac structures and deep learning models have been
proposed to automate the segmentation process. However, in order to translate
these tools to widespread clinical use it is important that the segmentation
models are robust to a wide variety of images (e.g. acquired from different
scanners, by operators with different levels of expertise, etc.). To achieve
this level of robustness it is necessary that the models are trained with
multiple diverse datasets. A significant challenge faced when training with
multiple diverse datasets is the variation in label presence, i.e. the combined
data are often partially-labelled. Adaptations of the cross entropy loss
function have been proposed to deal with partially labelled data. In this paper
we show that training naively with such a loss function and multiple diverse
datasets can lead to a form of shortcut learning, where the model associates
label presence with domain characteristics, leading to a drop in performance.
To address this problem, we propose a novel label dropout scheme to break the
link between domain characteristics and the presence or absence of labels. We
demonstrate that label dropout improves echo segmentation Dice score by 62% and
25% on two cardiac structures when training using multiple diverse partially
labelled datasets.
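The abstract describes two ingredients: a cross-entropy loss adapted to skip unlabelled structures, and a label dropout step that randomly hides labels so their presence can no longer be predicted from domain cues. The sketch below is a minimal illustration of that idea, not the paper's implementation; the function names, drop probability, and ignore-index convention are all assumptions.

```python
import numpy as np

def label_dropout(label_map, class_ids, drop_prob=0.5, ignore_index=255, rng=None):
    """Randomly relabel whole structure classes as 'unlabelled' so the
    network cannot associate label presence with dataset/domain
    characteristics. Hyper-parameters here are illustrative only."""
    rng = rng or np.random.default_rng()
    out = label_map.copy()
    for c in class_ids:
        if rng.random() < drop_prob:
            out[out == c] = ignore_index  # treat this class as missing
    return out

def masked_cross_entropy(probs, labels, ignore_index=255, eps=1e-8):
    """Cross entropy that skips unlabelled pixels -- the usual adaptation
    for training on partially labelled segmentation maps.
    probs: (H, W, C) per-pixel class probabilities; labels: (H, W)."""
    mask = labels != ignore_index
    if not mask.any():
        return 0.0
    idx = labels[mask]                       # (K,) labels at labelled pixels
    flat = probs[mask]                       # (K, C) probabilities there
    p = flat[np.arange(len(idx)), idx]       # probability of the true class
    return float(-np.log(p + eps).mean())
```

In this sketch, a dropped class simply joins the pool of unlabelled pixels that the masked loss already ignores, which is what breaks the shortcut between domain appearance and label availability.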
Related papers
- Deep Mutual Learning among Partially Labeled Datasets for Multi-Organ Segmentation [9.240202592825735]
This paper proposes a two-stage multi-organ segmentation method based on mutual learning.
In the first stage, each partial-organ segmentation model utilizes the non-overlapping organ labels from different datasets.
In the second stage, each full-organ segmentation model is supervised by fully labeled datasets with pseudo labels.
arXiv Detail & Related papers (2024-07-17T14:41:25Z)
- Unsupervised Segmentation of Fetal Brain MRI using Deep Learning Cascaded Registration [2.494736313545503]
Traditional deep learning-based automatic segmentation requires extensive training data with ground-truth labels.
We propose a novel method based on multi-atlas segmentation, that accurately segments multiple tissues without relying on labeled data for training.
Our method employs a cascaded deep learning network for 3D image registration, which computes small, incremental deformations to the moving image to align it precisely with the fixed image.
arXiv Detail & Related papers (2023-07-07T13:17:12Z)
- COSST: Multi-organ Segmentation with Partially Labeled Datasets Using Comprehensive Supervisions and Self-training [15.639976408273784]
Deep learning models have demonstrated remarkable success in multi-organ segmentation but typically require large-scale datasets with all organs of interest annotated.
It is crucial to investigate how to learn a unified model on the available partially labeled datasets to leverage their synergistic potential.
We propose a novel two-stage framework termed COSST, which effectively and efficiently integrates comprehensive supervision signals with self-training.
arXiv Detail & Related papers (2023-04-27T08:55:34Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Learning Semantic Segmentation from Multiple Datasets with Label Shifts [101.24334184653355]
This paper proposes UniSeg, an effective approach to automatically train models across multiple datasets with differing label spaces.
Specifically, we propose two losses that account for conflicting and co-occurring labels to achieve better generalization performance in unseen domains.
arXiv Detail & Related papers (2022-02-28T18:55:19Z)
- Learning from Partially Overlapping Labels: Image Segmentation under Annotation Shift [68.6874404805223]
We propose several strategies for learning from partially overlapping labels in the context of abdominal organ segmentation.
We find that combining a semi-supervised approach with an adaptive cross entropy loss can successfully exploit heterogeneously annotated data.
arXiv Detail & Related papers (2021-07-13T09:22:24Z)
- A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersections of the pseudo-labels generated from different augmentations of the same image.
arXiv Detail & Related papers (2021-04-21T14:34:33Z)
- Learning Image Labels On-the-fly for Training Robust Classification Models [13.669654965671604]
We show how noisy annotations (e.g., from different algorithm-based labelers) can be utilized together and mutually benefit the learning of classification tasks.
A meta-training based label-sampling module is designed to attend the labels that benefit the model learning the most through additional back-propagation processes.
arXiv Detail & Related papers (2020-09-22T05:38:44Z)
- ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model and updates the label on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
arXiv Detail & Related papers (2020-06-24T04:05:12Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.