Multi-task Semi-supervised Learning for Pulmonary Lobe Segmentation
- URL: http://arxiv.org/abs/2104.11017v1
- Date: Thu, 22 Apr 2021 12:33:30 GMT
- Title: Multi-task Semi-supervised Learning for Pulmonary Lobe Segmentation
- Authors: Jingnan Jia, Zhiwei Zhai, M. Els Bakker, I. Hernandez Giron, Marius
Staring, Berend C. Stoel
- Abstract summary: Pulmonary lobe segmentation is an important preprocessing task for the analysis of lung diseases.
Deep learning-based methods can outperform traditional, anatomy-based approaches.
Deep multi-task learning is expected to leverage labels of multiple different structures.
- Score: 2.8016091833446617
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Pulmonary lobe segmentation is an important preprocessing task for the
analysis of lung diseases. Traditional methods relying on fissure detection or
other anatomical features, such as the distribution of pulmonary vessels and
airways, can provide reasonably accurate lobe segmentations. Deep learning-based
methods can outperform these traditional approaches, but require large
datasets. Deep multi-task learning is expected to utilize labels of multiple
different structures; however, such labels are commonly distributed over
multiple datasets. In this paper, we propose a multi-task semi-supervised model
that can leverage information about multiple structures from unannotated
datasets and from datasets annotated with different structures. A focused
alternating training strategy is presented to balance the different tasks. We
evaluated the trained model on an external, independent CT dataset. The results
show that our model significantly outperforms single-task alternatives,
improving the mean surface distance from 7.174 mm to 4.196 mm. We also
demonstrated that our approach works with different network architectures as
backbones.
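To make the focused alternating training strategy concrete, here is a minimal PyTorch-style sketch of one plausible reading: training batches alternate between the main lobe-segmentation task and an auxiliary structure task (e.g., airways) drawn from differently annotated datasets, with the main task visited more often. The data loaders, the `head=` model argument, and the 3:1 ratio are illustrative assumptions, not the authors' implementation.

```python
import itertools
import torch

def alternating_train(model, lobe_loader, airway_loader, optimizer,
                      steps=1000, focus_ratio=3):
    """Focused alternating training sketch: take `focus_ratio` updates on
    the main lobe task for every update on the auxiliary airway task.
    The `head=` argument is a hypothetical API for selecting the
    task-specific decoder of a shared-encoder network."""
    lobe_iter = itertools.cycle(lobe_loader)
    airway_iter = itertools.cycle(airway_loader)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(steps):
        if step % (focus_ratio + 1) < focus_ratio:
            image, target = next(lobe_iter)      # main (focused) task
            logits = model(image, head="lobe")
        else:
            image, target = next(airway_iter)    # auxiliary task
            logits = model(image, head="airway")

        loss = loss_fn(logits, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The sketch only shows the control flow of focused, alternating updates; how the ratio is chosen and how unannotated data enter the semi-supervised objective are left to the paper itself.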
Related papers
- Automated Label Unification for Multi-Dataset Semantic Segmentation with GNNs [48.406728896785296]
We propose a novel approach to automatically construct a unified label space across multiple datasets using graph neural networks.
Unlike existing methods, our approach facilitates seamless training without the need for additional manual reannotation or taxonomy reconciliation.
arXiv Detail & Related papers (2024-07-15T08:42:10Z)
- MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining [73.81862342673894]
Foundation models have reshaped the landscape of Remote Sensing (RS) by enhancing various image interpretation tasks.
However, transferring pretrained models to downstream tasks may suffer from task discrepancy, because pretraining is formulated as image classification or object discrimination.
We conduct multi-task supervised pretraining on the SAMRS dataset, encompassing semantic segmentation, instance segmentation, and rotated object detection.
Our models are finetuned on various RS downstream tasks, such as scene classification, horizontal and rotated object detection, semantic segmentation, and change detection.
arXiv Detail & Related papers (2024-03-20T09:17:22Z)
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight initialization approach for hybrid volumetric medical image segmentation.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z)
- MultiTalent: A Multi-Dataset Approach to Medical Image Segmentation [1.146419670457951]
Current practices limit model training and supervised pre-training to one or a few similar datasets.
We propose MultiTalent, a method that leverages multiple CT datasets with diverse and conflicting class definitions.
arXiv Detail & Related papers (2023-03-25T11:37:16Z)
- Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition [88.34182299496074]
Action labels are available only for the source dataset, but unavailable for the target dataset during the training stage.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks (a generic sketch of this permutation idea appears at the end of this page).
arXiv Detail & Related papers (2022-07-17T07:05:39Z)
- One Model is All You Need: Multi-Task Learning Enables Simultaneous Histology Image Segmentation and Classification [3.8725005247905386]
We present a multi-task learning approach for segmentation and classification of tissue regions.
We enable simultaneous prediction with a single network.
As a result of feature sharing, we also show that the learned representation can be used to improve downstream tasks.
arXiv Detail & Related papers (2022-02-28T20:22:39Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across (i) object-centric versus scene-centric, (ii) uniform versus long-tailed, and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- OmiEmbed: reconstruct comprehensive phenotypic information from multi-omics data using multi-task deep learning [19.889861433855053]
High-dimensional omics data contains intrinsic biomedical information crucial for personalised medicine.
It is challenging to capture this information from genome-wide data, due to the large number of molecular features and the small number of available samples.
We propose a unified multi-task deep learning framework called OmiEmbed to capture a holistic and relatively precise phenotype profile from high-dimensional omics data.
arXiv Detail & Related papers (2021-02-03T07:34:29Z)
- MultiMix: Sparingly Supervised, Extreme Multitask Learning From Medical Images [13.690075845927606]
We propose a novel multitask learning model, namely MultiMix, which jointly learns disease classification and anatomical segmentation in a sparingly supervised manner.
Our experiments justify the effectiveness of our multitasking model for the classification of pneumonia and segmentation of lungs from chest X-ray images.
arXiv Detail & Related papers (2020-10-28T03:47:29Z)
- Label-Efficient Multi-Task Segmentation using Contrastive Learning [0.966840768820136]
We propose a multi-task segmentation model with a contrastive learning based subtask and compare its performance with other multi-task models.
We experimentally show that our proposed method outperforms other multi-task methods including the state-of-the-art fully supervised model when the amount of annotated data is limited.
arXiv Detail & Related papers (2020-09-23T14:12:17Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI (a sketch of this sharing pattern appears at the end of this page).
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
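The segment-permutation pretext tasks mentioned in the "Learning from Temporal Spatial Cubism" entry above can be illustrated generically: split a skeleton sequence into temporal segments, shuffle them, and train a classifier to predict which permutation was applied. The segment count and tensor shapes below are illustrative assumptions, not the paper's setup.

```python
import itertools
import torch

# Three temporal segments give 3! = 6 permutation classes.
PERMS = list(itertools.permutations(range(3)))

def permute_segments(seq: torch.Tensor, perm_id: int) -> torch.Tensor:
    """seq: (T, D) sequence with T divisible by 3; returns the sequence
    with its three temporal segments reordered by PERMS[perm_id]."""
    segments = torch.chunk(seq, 3, dim=0)
    return torch.cat([segments[i] for i in PERMS[perm_id]], dim=0)

# The self-supervised label is the permutation index itself.
seq = torch.randn(30, 75)          # e.g., 30 frames of flattened joints
perm_id = int(torch.randint(len(PERMS), (1,)))
shuffled = permute_segments(seq, perm_id)
# A classifier would be trained to recover `perm_id` from `shuffled`.
```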
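Similarly, the parameter sharing described in the "Unpaired Multi-modal Segmentation via Knowledge Distillation" entry can be sketched as convolutional kernels shared across CT and MRI with separate, modality-specific normalization layers. This is a hedged, generic sketch of that sharing pattern; the layer sizes and block structure are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SharedConvBlock(nn.Module):
    """Convolutional kernels shared across modalities, with separate
    normalization statistics per modality (illustrative sizes only)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)  # shared
        self.norm = nn.ModuleDict({
            "ct": nn.BatchNorm2d(out_ch),   # CT-specific statistics
            "mri": nn.BatchNorm2d(out_ch),  # MRI-specific statistics
        })

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        return torch.relu(self.norm[modality](self.conv(x)))

# Usage: the same kernels process both modalities.
block = SharedConvBlock(1, 16)
ct_out = block(torch.randn(2, 1, 64, 64), "ct")
mri_out = block(torch.randn(2, 1, 64, 64), "mri")
```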