Going to Extremes: Weakly Supervised Medical Image Segmentation
- URL: http://arxiv.org/abs/2009.11988v1
- Date: Fri, 25 Sep 2020 00:28:10 GMT
- Title: Going to Extremes: Weakly Supervised Medical Image Segmentation
- Authors: Holger R Roth, Dong Yang, Ziyue Xu, Xiaosong Wang, Daguang Xu
- Abstract summary: We suggest using minimal user interaction in the form of extreme point clicks to train a segmentation model.
An initial segmentation is generated based on the extreme points utilizing the random walker algorithm.
This initial segmentation is then used as a noisy supervision signal to train a fully convolutional network.
- Score: 12.700841704699615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image annotation is a major hurdle for developing precise and robust
machine learning models. Annotation is expensive, time-consuming, and often
requires expert knowledge, particularly in the medical field. Here, we suggest
using minimal user interaction in the form of extreme point clicks to train a
segmentation model which, in effect, can be used to speed up medical image
annotation. An initial segmentation is generated based on the extreme points
utilizing the random walker algorithm. This initial segmentation is then used
as a noisy supervision signal to train a fully convolutional network that can
segment the organ of interest, based on the provided user clicks. Through
experimentation on several medical imaging datasets, we show that the
predictions of the network can be refined using several rounds of training with
the prediction from the same weakly annotated data. Further improvements are
shown utilizing the clicked points within a custom-designed loss and attention
mechanism. Our approach has the potential to speed up the process of generating
new training datasets for the development of new machine learning and deep
learning-based models for, but not exclusively, medical image analysis.
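Below is a minimal sketch of how an initial segmentation could be generated from the six extreme-point clicks with the random walker algorithm, using scikit-image's random_walker. The seeding heuristic (clicked points as foreground seeds, the region outside their bounding box as background) and all names are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from skimage.segmentation import random_walker

def initial_segmentation(volume, extreme_points, margin=3):
    """volume: 3D array (z, y, x); extreme_points: six (z, y, x) voxel clicks."""
    pts = np.asarray(extreme_points, dtype=int)
    labels = np.zeros(volume.shape, dtype=np.uint8)

    # Foreground seeds: a small neighbourhood around each clicked extreme point.
    for z, y, x in pts:
        labels[max(z - 1, 0):z + 2, max(y - 1, 0):y + 2, max(x - 1, 0):x + 2] = 1

    # Background seeds: everything outside the bounding box spanned by the clicks,
    # enlarged by a small safety margin.
    lo = np.maximum(pts.min(axis=0) - margin, 0)
    hi = np.minimum(pts.max(axis=0) + margin, np.asarray(volume.shape) - 1)
    outside = np.ones(volume.shape, dtype=bool)
    outside[lo[0]:hi[0] + 1, lo[1]:hi[1] + 1, lo[2]:hi[2] + 1] = False
    labels[outside] = 2

    # The random walker propagates the seeds through the image; label 1 is the organ.
    seg = random_walker(volume.astype(np.float32), labels, beta=130, mode='cg')
    return (seg == 1).astype(np.uint8)
```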
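The abstract also describes refining the network over several rounds of training on its own predictions. A hedged sketch of that loop follows: a fully convolutional network is trained on the noisy random-walker masks, and its predictions then replace the masks for the next round. The tiny 3D network, Dice loss, and the extra input channel encoding the clicked points are placeholders; the paper's actual architecture, attention mechanism, and point-based loss terms are not reproduced here.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Stand-in segmentation network; input = image channel + extreme-point channel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 1),
        )

    def forward(self, x):
        return self.net(x)

def dice_loss(logits, target, eps=1e-6):
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def refine(volumes, point_maps, noisy_masks, rounds=3, epochs=20, lr=1e-3):
    """volumes, point_maps, noisy_masks: float tensors of shape (N, 1, D, H, W)."""
    model = TinyFCN()
    masks = noisy_masks.clone()
    inputs = torch.cat([volumes, point_maps], dim=1)   # clicked points as an extra channel
    for _ in range(rounds):                            # several rounds of self-training
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            loss = dice_loss(model(inputs), masks)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():                          # predictions become the new labels
            masks = (torch.sigmoid(model(inputs)) > 0.5).float()
    return model
```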
Related papers
- Enhancing Weakly Supervised 3D Medical Image Segmentation through Probabilistic-aware Learning [52.249748801637196]
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
arXiv Detail & Related papers (2024-03-05T00:46:53Z)
- Self-Supervised Pre-Training with Contrastive and Masked Autoencoder Methods for Dealing with Small Datasets in Deep Learning for Medical Imaging [8.34398674359296]
Deep learning in medical imaging has the potential to minimize the risk of diagnostic errors, reduce radiologist workload, and accelerate diagnosis.
Training such deep learning models requires large and accurate datasets, with annotations for all training samples.
To address this challenge, deep learning models can be pre-trained on large image datasets without annotations using methods from the field of self-supervised learning.
arXiv Detail & Related papers (2023-08-12T11:31:01Z)
- RadTex: Learning Efficient Radiograph Representations from Text Reports [7.090896766922791]
We build a data-efficient learning framework that utilizes radiology reports to improve medical image classification performance with limited labeled data.
Our model achieves higher classification performance than ImageNet-supervised pretraining when labeled training data is limited.
arXiv Detail & Related papers (2022-08-05T15:06:26Z)
- Self-Supervised-RCNN for Medical Image Segmentation with Limited Data Annotation [0.16490701092527607]
We propose an alternative deep learning training strategy based on self-supervised pretraining on unlabeled MRI scans.
Our pretraining approach first randomly applies different distortions to random areas of unlabeled images and then predicts the type of distortion and the loss of information. (A minimal sketch of this pretext idea appears after the list of related papers below.)
The effectiveness of the proposed method for segmentation tasks in different pre-training and fine-tuning scenarios is evaluated.
arXiv Detail & Related papers (2022-07-17T13:28:52Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation approach in which the goal is to learn the mapping between an input endoscopic image and the corresponding annotation.
Our approach allows training image segmentation models without the need to acquire expensive annotations.
We test the proposed method on the EndoVis 2017 challenge dataset and show that it is competitive with supervised segmentation methods. (A hedged sketch of the cycle-consistency idea appears after the list of related papers below.)
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
- Suggestive Annotation of Brain Tumour Images with Gradient-guided Sampling [14.092503407739422]
We propose an efficient annotation framework for brain tumour images that is able to suggest informative sample images for human experts to annotate.
Experiments show that training a segmentation model with only 19% suggestively annotated patient scans from the BraTS 2019 dataset can achieve performance comparable to training on the full dataset for the whole tumour segmentation task.
arXiv Detail & Related papers (2020-06-26T13:39:49Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
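For the Self-Supervised-RCNN entry above, a minimal sketch of the described distortion-prediction pretext task: corrupt a random region of an unlabeled scan and train a network to recognise which corruption was applied. The distortion types, region size, and names are illustrative assumptions, not that paper's exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_pretext_sample(image, rng, patch=32):
    """Return (distorted_image, distortion_label) for one 2D slice."""
    y = rng.integers(0, image.shape[0] - patch)
    x = rng.integers(0, image.shape[1] - patch)
    region = image[y:y + patch, x:x + patch]
    label = rng.integers(0, 3)                 # 0 = blur, 1 = noise, 2 = erase (assumed)
    if label == 0:
        region = gaussian_filter(region, sigma=2.0)
    elif label == 1:
        region = region + rng.normal(0, 0.1, region.shape)
    else:
        region = np.zeros_like(region)
    out = image.copy()
    out[y:y + patch, x:x + patch] = region
    return out, label                          # a CNN is then trained to predict `label`

# Usage with hypothetical data:
# rng = np.random.default_rng(0)
# slice_ = np.random.rand(256, 256).astype(np.float32)
# distorted, label = make_pretext_sample(slice_, rng)
```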
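For the cycle-consistent adversarial networks entry above, a hedged sketch of the cycle-consistency term behind unpaired image-to-annotation translation. The single-layer "generators" are placeholders for that paper's networks, and the adversarial (discriminator) losses that make the translations realistic are omitted for brevity.

```python
import torch
import torch.nn as nn

G = nn.Conv2d(1, 1, 3, padding=1)   # placeholder generator: image -> annotation
F = nn.Conv2d(1, 1, 3, padding=1)   # placeholder generator: annotation -> image
l1 = nn.L1Loss()

def cycle_consistency_loss(image, annotation, lam=10.0):
    # Translate each domain to the other and back; penalise reconstruction error.
    rec_image = F(G(image))          # image -> annotation -> image
    rec_annot = G(F(annotation))     # annotation -> image -> annotation
    return lam * (l1(rec_image, image) + l1(rec_annot, annotation))

# Usage with random tensors standing in for unpaired endoscopic images and masks:
# loss = cycle_consistency_loss(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
```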
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.