Generalized Multi-Task Learning from Substantially Unlabeled
Multi-Source Medical Image Data
- URL: http://arxiv.org/abs/2110.13185v1
- Date: Mon, 25 Oct 2021 18:09:19 GMT
- Title: Generalized Multi-Task Learning from Substantially Unlabeled
Multi-Source Medical Image Data
- Authors: Ayaan Haque, Abdullah-Al-Zubaer Imran, Adam Wang, Demetri Terzopoulos
- Abstract summary: MultiMix is a new multi-task learning model that jointly learns disease classification and anatomical segmentation in a semi-supervised manner.
Our experiments with varying quantities of multi-source labeled data in the training sets confirm the effectiveness of MultiMix.
- Score: 11.061381376559053
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning-based models, when trained in a fully-supervised manner, can be
effective in performing complex image analysis tasks, although contingent upon
the availability of large labeled datasets. Especially in the medical imaging
domain, however, expert image annotation is expensive, time-consuming, and
prone to variability. Semi-supervised learning from limited quantities of
labeled data has shown promise as an alternative. Maximizing knowledge gains
from copious unlabeled data benefits semi-supervised learning models. Moreover,
learning multiple tasks within the same model further improves its
generalizability. We propose MultiMix, a new multi-task learning model that
jointly learns disease classification and anatomical segmentation in a
semi-supervised manner, while preserving explainability through a novel
saliency bridge between the two tasks. Our experiments with varying quantities
of multi-source labeled data in the training sets confirm the effectiveness of
MultiMix in the simultaneous classification of pneumonia and segmentation of
the lungs in chest X-ray images. Moreover, both in-domain and cross-domain
evaluations across these tasks further showcase the potential of our model to
adapt to challenging generalization scenarios.
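As a concrete illustration of the architecture described in the abstract, the following PyTorch sketch wires a shared encoder to a classification head and a U-Net-style segmentation decoder, with a saliency-like map bridged from the classification branch into the decoder. The channel counts, network depth, and the activation-based saliency proxy are illustrative assumptions for a minimal sketch, not the authors' implementation.
```python
# Minimal sketch of a MultiMix-style joint classification + segmentation model.
# Layer sizes and the saliency computation are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class MultiTaskNet(nn.Module):
    """Shared encoder, a classification head, and a segmentation decoder.

    A saliency-like map derived from the bottleneck features is concatenated
    into the decoder (a stand-in for the paper's saliency bridge), so the
    segmentation path can exploit classification evidence.
    """

    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)

        # Classification head on the bottleneck features.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_classes)
        )

        # Decoder; the extra +1 input channel carries the bridged saliency map.
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(64 + 64 + 1, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(32 + 32, 32)
        self.seg_out = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                      # full resolution
        e2 = self.enc2(self.pool(e1))          # 1/2 resolution
        e3 = self.enc3(self.pool(e2))          # 1/4 resolution (bottleneck)

        logits = self.cls_head(e3)

        # Crude "saliency" proxy: channel-wise mean of bottleneck activations,
        # upsampled to the skip-connection resolution (an assumption; MultiMix
        # derives its saliency bridge from the classification predictions).
        sal = e3.mean(dim=1, keepdim=True)
        sal = F.interpolate(sal, size=e2.shape[2:], mode="bilinear", align_corners=False)

        d2 = self.dec2(torch.cat([self.up2(e3), e2, sal], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        seg = self.seg_out(d1)                 # per-pixel lung-mask logits
        return logits, seg


if __name__ == "__main__":
    model = MultiTaskNet(n_classes=2)
    x = torch.randn(2, 1, 128, 128)             # batch of grayscale chest X-rays
    cls_logits, seg_logits = model(x)
    print(cls_logits.shape, seg_logits.shape)   # (2, 2) and (2, 1, 128, 128)
```
In training, a classification loss on pneumonia labels and a segmentation loss on lung masks would typically be combined, with unlabeled images contributing through consistency- or pseudo-label-style objectives; the specific semi-supervised losses used by MultiMix are described in the paper itself.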
Related papers
- Multi-rater Prompting for Ambiguous Medical Image Segmentation [12.452584289825849]
Multi-rater annotations commonly arise when medical images are independently annotated by multiple experts (raters).
We propose a multi-rater prompt-based approach to address the associated challenges.
arXiv Detail & Related papers (2024-04-11T09:13:50Z)
- MUSCLE: Multi-task Self-supervised Continual Learning to Pre-train Deep Models for X-ray Images of Multiple Body Parts [63.30352394004674]
Multi-task Self-supervised Continual Learning (MUSCLE) is a novel self-supervised pre-training pipeline for medical imaging tasks.
MUSCLE aggregates X-rays collected from multiple body parts for representation learning, and adopts a well-designed continual learning procedure.
We evaluate MUSCLE using 9 real-world X-ray datasets with various tasks, including pneumonia classification, skeletal abnormality classification, lung segmentation, and tuberculosis (TB) detection.
arXiv Detail & Related papers (2023-10-03T12:19:19Z)
- Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
Training an efficacious deep learning model requires large amounts of data with diverse styles and qualities.
A novel contrastive learning method is developed to equip deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
- Domain Generalization for Mammography Detection via Multi-style and Multi-view Contrastive Learning [47.30824944649112]
A new contrastive learning scheme is developed to augment the generalization capability of deep learning models to various vendors with limited resources (a generic sketch of a two-view contrastive objective follows this list).
The backbone network is trained with a multi-style and multi-view unsupervised self-learning scheme to embed features that are invariant to the various vendor styles.
The experimental results suggest that our approach can effectively improve detection performance on both seen and unseen domains.
arXiv Detail & Related papers (2021-11-21T14:29:50Z)
- Relational Subsets Knowledge Distillation for Long-tailed Retinal Diseases Recognition [65.77962788209103]
We propose class subset learning by dividing the long-tailed data into multiple class subsets according to prior knowledge.
It forces the model to focus on learning the subset-specific knowledge.
The proposed framework proved effective for the long-tailed retinal disease recognition task.
arXiv Detail & Related papers (2021-04-22T13:39:33Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages that train accurate diagnosis models by learning from multiple source tasks and data from different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- MultiMix: Sparingly Supervised, Extreme Multitask Learning From Medical Images [13.690075845927606]
We propose a novel multitask learning model, namely MultiMix, which jointly learns disease classification and anatomical segmentation in a sparingly supervised manner.
Our experiments justify the effectiveness of our multitasking model for the classification of pneumonia and segmentation of lungs from chest X-ray images.
arXiv Detail & Related papers (2020-10-28T03:47:29Z)
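Two of the related entries above (the mammography domain-generalization papers) rely on contrastive learning to push a backbone toward style-invariant features. The sketch below shows a generic two-view NT-Xent / InfoNCE objective on embeddings of two differently styled views of the same images; the papers' actual multi-style and multi-view schemes differ in how views are generated and combined, so this is only an assumed, minimal illustration.
```python
# Minimal sketch of a two-view contrastive (NT-Xent / InfoNCE) objective for
# learning style-invariant embeddings; not the specific schemes of the papers above.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two differently styled views of the same images."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm
    sim = z @ z.t() / temperature                         # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-similarity
    # For row i, the positive is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    # Toy usage: embeddings of two style-augmented views from a shared backbone.
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(z1, z2).item())
```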