Weakly Supervised Learning Significantly Reduces the Number of Labels
Required for Intracranial Hemorrhage Detection on Head CT
- URL: http://arxiv.org/abs/2211.15924v1
- Date: Tue, 29 Nov 2022 04:42:41 GMT
- Authors: Jacopo Teneggi, Paul H. Yi, Jeremias Sulam
- Abstract summary: Machine learning pipelines, in particular those based on deep learning (DL) models, require large amounts of labeled data.
This work studies the question of what kind of labels should be collected for the problem of intracranial hemorrhage detection in brain CT.
We find that strong supervision (i.e., learning with local image-level annotations) and weak supervision (i.e., learning with only global examination-level labels) achieve comparable performance.
- Score: 7.713240800142863
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern machine learning pipelines, in particular those based on deep learning
(DL) models, require large amounts of labeled data. For classification
problems, the most common learning paradigm consists of presenting labeled
examples during training, thus providing strong supervision on what constitutes
positive and negative samples. This constitutes a major obstacle for the
development of DL models in radiology--in particular for cross-sectional
imaging (e.g., computed tomography [CT] scans)--where labels must come from
manual annotations by expert radiologists at the image or slice-level. These
differ from examination-level annotations, which are coarser but cheaper, and
could be extracted from radiology reports using natural language processing
techniques. This work studies the question of what kind of labels should be
collected for the problem of intracranial hemorrhage detection in brain CT. We
investigate whether image-level annotations should be preferred to
examination-level ones. By framing this task as a multiple instance learning
problem, and employing modern attention-based DL architectures, we analyze the
degree to which different levels of supervision improve detection performance.
We find that strong supervision (i.e., learning with local image-level
annotations) and weak supervision (i.e., learning with only global
examination-level labels) achieve comparable performance in examination-level
hemorrhage detection (the task of selecting the images in an examination that
show signs of hemorrhage) as well as in image-level hemorrhage detection
(highlighting those signs within the selected images). Furthermore, we study
this behavior as a function of the number of labels available during training.
Our results suggest that local labels may not be necessary at all for these
tasks, drastically reducing the time and cost involved in collecting and
curating datasets.
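The multiple instance learning framing described above can be sketched concretely. The following is a minimal NumPy illustration of attention-based MIL pooling in the style of Ilse et al. (2018), not the authors' exact architecture; the dimensions and the random features standing in for CNN slice embeddings are assumptions for illustration only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling: score each instance (CT slice
    embedding), normalize the scores into attention weights, and return
    the weighted exam-level embedding plus the per-slice weights."""
    scores = np.tanh(instances @ V) @ w   # (n_slices,) raw attention scores
    alpha = softmax(scores)               # attention weights over slices
    bag = alpha @ instances               # (d,) examination-level embedding
    return bag, alpha

rng = np.random.default_rng(0)
d, k, n = 16, 8, 12                  # feature dim, attention dim, slices per exam
V = rng.normal(size=(d, k))          # attention parameters (learned in practice)
w = rng.normal(size=(k,))
slices = rng.normal(size=(n, d))     # stand-in for CNN slice features

bag, alpha = attention_mil_pool(slices, V, w)
```

A classifier on `bag` needs only examination-level labels, while `alpha` indicates which slices drive the prediction, which is how weak supervision can still yield image-level localization.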
Related papers
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- Mixed-UNet: Refined Class Activation Mapping for Weakly-Supervised Semantic Segmentation with Multi-scale Inference [28.409679398886304]
We develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase.
We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on our dataset collected from the local hospital and public datasets.
arXiv Detail & Related papers (2022-05-06T08:37:02Z)
- A Histopathology Study Comparing Contrastive Semi-Supervised and Fully Supervised Learning [0.0]
We explore self-supervised learning to reduce labeling burdens in computational pathology.
We find that ImageNet pre-trained networks largely outperform the self-supervised representations obtained using Barlow Twins.
arXiv Detail & Related papers (2021-11-10T19:04:08Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Label Cleaning Multiple Instance Learning: Refining Coarse Annotations on Single Whole-Slide Images [83.7047542725469]
Annotating cancerous regions in whole-slide images (WSIs) of pathology samples plays a critical role in clinical diagnosis, biomedical research, and machine learning algorithms development.
We present a method, named Label Cleaning Multiple Instance Learning (LC-MIL), to refine coarse annotations on a single WSI without the need for external training data.
Our experiments on a heterogeneous WSI set with breast cancer lymph node metastasis, liver cancer, and colorectal cancer samples show that LC-MIL significantly refines the coarse annotations, outperforming the state-of-the-art alternatives, even while learning from a single slide.
arXiv Detail & Related papers (2021-09-22T15:06:06Z)
- Anomaly Detection in Medical Imaging -- A Mini Review [0.8122270502556374]
This paper presents a semi-exhaustive literature review of relevant anomaly detection papers in medical imaging, clustering them by application.
The main results showed that the current research is mostly motivated by reducing the need for labelled data.
Also, the successful and substantial amount of research in the brain MRI domain shows the potential for applications in further domains like OCT and chest X-ray.
arXiv Detail & Related papers (2021-08-25T11:45:40Z)
- Self-Ensembling Contrastive Learning for Semi-Supervised Medical Image Segmentation [6.889911520730388]
We aim to boost the performance of semi-supervised learning for medical image segmentation with limited labels.
We learn latent representations directly at feature-level by imposing contrastive loss on unlabeled images.
We conduct experiments on an MRI and a CT segmentation dataset and demonstrate that the proposed method achieves state-of-the-art performance.
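As a rough sketch of the feature-level contrastive objective this entry refers to, here is a minimal NumPy implementation of an NT-Xent-style (SimCLR) loss over two augmented views of unlabeled images; the function name, temperature, and shapes are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over two views of the same images.
    z1, z2: (n, d) embeddings; row i of z1 is the positive of row i of z2."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-norm -> cosine sims
    sim = (z @ z.T) / tau                             # temperature-scaled similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = z1.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive indices
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    # cross-entropy of each view against its positive over all other samples
    return float((logsumexp - sim[np.arange(2 * n), pos]).mean())
```

Pulling positive pairs together while pushing other samples apart shapes the representation without any labels; the small labeled subset is then needed only for the downstream segmentation head.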
arXiv Detail & Related papers (2021-05-27T03:27:58Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- A Teacher-Student Framework for Semi-supervised Medical Image Segmentation From Mixed Supervision [62.4773770041279]
We develop a semi-supervised learning framework based on a teacher-student fashion for organ and lesion segmentation.
We show our model is robust to the quality of bounding box and achieves comparable performance compared with full-supervised learning methods.
arXiv Detail & Related papers (2020-10-23T07:58:20Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond the cross-entropy loss to support the cross-attention process and overcome the imbalance between classes and the easy-dominated samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all generated summaries) and is not responsible for any consequences arising from its use.