Train, Learn, Expand, Repeat
- URL: http://arxiv.org/abs/2003.08469v2
- Date: Sun, 19 Apr 2020 12:25:11 GMT
- Title: Train, Learn, Expand, Repeat
- Authors: Abhijeet Parida, Aadhithya Sankar, Rami Eisawy, Tom Finck, Benedikt
Wiestler, Franz Pfister, Julia Moosbauer
- Abstract summary: High-quality labeled data is essential to successfully train supervised machine learning models.
Medical professionals who can expertly label the data are a scarce and expensive resource.
We apply this technique on the segmentation of intracranial hemorrhage (ICH) in CT scans of the brain.
- Score: 0.15833270109954134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-quality labeled data is essential to successfully train supervised
machine learning models. Although a large amount of unlabeled data is present
in the medical domain, labeling poses a major challenge: medical professionals
who can expertly label the data are a scarce and expensive resource. Making
matters worse, voxel-wise delineation of data (e.g. for segmentation tasks) is
tedious and suffers from high inter-rater variance, thus dramatically limiting
available training data. We propose a recursive training strategy to perform
the task of semantic segmentation given only very few training samples with
pixel-level annotations. We then expand this small training set with cheaper
image-level annotations via a recursive training strategy. We apply this
technique on the segmentation of intracranial hemorrhage (ICH) in CT (computed
tomography) scans of the brain, where annotated data is typically scarce.
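The recursive strategy sketched in the abstract can be illustrated with a deliberately simplified toy: a model is fit on a handful of strongly labeled samples, then the training set is repeatedly expanded with confident predictions on weakly annotated data and the model is retrained. The threshold "model", the confidence measure, and all function names here are illustrative assumptions, not the paper's actual architecture.

```python
# Toy sketch of a recursive (iterative self-labeling) training loop:
# 1. train on the few strongly labeled samples,
# 2. pseudo-label the weakly annotated pool,
# 3. keep only confident pseudo-labels and retrain on the expanded set.

def train(samples):
    """Toy 'model': a threshold halfway between the classes (1-D features)."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (min(pos) + max(neg)) / 2.0

def predict_with_confidence(threshold, x):
    """Predict a label and use distance from the boundary as confidence."""
    label = 1 if x >= threshold else 0
    return label, abs(x - threshold)

def recursive_training(labeled, unlabeled, rounds=3, min_conf=0.1):
    model = train(labeled)
    for _ in range(rounds):
        confident = []
        for x in unlabeled:
            y, conf = predict_with_confidence(model, x)
            if conf >= min_conf:          # discard uncertain pseudo-labels
                confident.append((x, y))
        model = train(labeled + confident)  # retrain on the expanded set
    return model
```

In the real setting the "model" would be a segmentation network and the confidence filter would operate on predicted masks, but the train/expand/retrain loop has the same shape.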
Related papers
- Promptable cancer segmentation using minimal expert-curated data [5.097733221827974]
Automated segmentation of cancer on medical images can aid targeted diagnostic and therapeutic procedures.
Its adoption is limited by the high cost of expert annotations required for training and inter-observer variability in datasets.
We propose a novel approach for promptable segmentation requiring only 24 fully-segmented images, supplemented by 8 weakly-labelled images.
arXiv Detail & Related papers (2025-05-23T13:56:40Z)
- Guidelines for Cerebrovascular Segmentation: Managing Imperfect Annotations in the context of Semi-Supervised Learning [3.231698506153459]
Supervised learning methods achieve excellent performance when fed a sufficient amount of labeled data.
Such labels are typically time-consuming, error-prone and expensive to produce.
Semi-supervised learning approaches leverage both labeled and unlabeled data, and are very useful when only a small fraction of the dataset is labeled.
arXiv Detail & Related papers (2024-04-02T09:31:06Z)
- Self-Supervised Pre-Training with Contrastive and Masked Autoencoder Methods for Dealing with Small Datasets in Deep Learning for Medical Imaging [8.34398674359296]
Deep learning in medical imaging has the potential to minimize the risk of diagnostic errors, reduce radiologist workload, and accelerate diagnosis.
Training such deep learning models requires large and accurate datasets, with annotations for all training samples.
To address this challenge, deep learning models can be pre-trained on large image datasets without annotations using methods from the field of self-supervised learning.
arXiv Detail & Related papers (2023-08-12T11:31:01Z)
- Explainable Semantic Medical Image Segmentation with Style [7.074258860680265]
We propose a fully supervised generative framework that can achieve generalisable segmentation with only limited labelled data.
The proposed approach creates medical image style paired with a segmentation task driven discriminator incorporating end-to-end adversarial training.
Experiments on a fully semantic, publicly available pelvis dataset demonstrated that our method is more generalisable to shifts than other state-of-the-art methods.
arXiv Detail & Related papers (2023-03-10T04:34:51Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy loss to aid the cross-attention process and overcome the imbalance between classes and easy-dominated samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
- Improving Semantic Segmentation via Self-Training [75.07114899941095]
We show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm.
We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data.
Our robust training framework can digest human-annotated and pseudo labels jointly and achieve top performances on Cityscapes, CamVid and KITTI datasets.
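The teacher/pseudo-label pipeline described in this entry can be sketched with a minimal toy in which a nearest-centroid "model" stands in for the segmentation networks. All names here are illustrative assumptions; the paper's actual framework trains deep networks on images.

```python
# Minimal self-training sketch:
# 1. fit a teacher on the human-labeled data,
# 2. let the teacher pseudo-label the unlabeled pool,
# 3. train a student jointly on human labels and pseudo labels.

def fit_centroids(samples):
    """Toy 'model': one centroid per class over 1-D features."""
    by_class = {}
    for x, y in samples:
        by_class.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in by_class.items()}

def predict(model, x):
    """Assign the class whose centroid is nearest."""
    return min(model, key=lambda y: abs(x - model[y]))

def self_training(labeled, unlabeled):
    teacher = fit_centroids(labeled)                        # step 1
    pseudo = [(x, predict(teacher, x)) for x in unlabeled]  # step 2
    student = fit_centroids(labeled + pseudo)               # step 3
    return student
```

A production version would add confidence filtering and strong data augmentation for the student, which is where much of the robustness in self-training methods comes from.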
arXiv Detail & Related papers (2020-04-30T17:09:17Z)
- Semi-supervised few-shot learning for medical image segmentation [21.349705243254423]
Recent attempts to alleviate the need for large annotated datasets have developed training strategies under the few-shot learning paradigm.
We propose a novel few-shot learning framework for semantic segmentation, where unlabeled images are also made available at each episode.
We show that including unlabeled surrogate tasks in the episodic training leads to more powerful feature representations.
arXiv Detail & Related papers (2020-03-18T20:37:18Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.