AIDE: Annotation-efficient deep learning for automatic medical image
segmentation
- URL: http://arxiv.org/abs/2012.04885v2
- Date: Mon, 14 Dec 2020 09:47:02 GMT
- Title: AIDE: Annotation-efficient deep learning for automatic medical image
segmentation
- Authors: Cheng Li, Rongpin Wang, Zaiyi Liu, Meiyun Wang, Hongna Tan, Yaping Wu,
Xinfeng Liu, Hui Sun, Rui Yang, Xin Liu, Ismail Ben Ayed, Hairong Zheng,
Hanchuan Peng, Shanshan Wang
- Abstract summary: We introduce Annotation-effIcient Deep lEarning (AIDE) to handle imperfect datasets with an elaborately designed cross-model self-correcting mechanism.
AIDE consistently produces segmentation maps comparable to those generated by the fully supervised counterparts.
Such a 10-fold improvement in the efficiency of utilizing expert labels has the potential to benefit a wide range of biomedical applications.
- Score: 22.410878684721286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate image segmentation is crucial for medical imaging applications. The
prevailing deep learning approaches typically rely on very large training
datasets with high-quality manual annotations, which are often not available in
medical imaging. We introduce Annotation-effIcient Deep lEarning (AIDE) to
handle imperfect datasets with an elaborately designed cross-model
self-correcting mechanism. AIDE improves the segmentation Dice scores of
conventional deep learning models on open datasets possessing scarce or noisy
annotations by up to 30%. For three clinical datasets containing 11,852 breast
images of 872 patients from three medical centers, AIDE consistently produces
segmentation maps comparable to those generated by the fully supervised
counterparts as well as the manual annotations of independent radiologists by
utilizing only 10% of the training annotations. Such a 10-fold improvement in
the efficiency of utilizing expert labels has the potential to benefit a wide
range of biomedical applications.
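For reference, the Dice score used above to quantify segmentation quality is the standard overlap metric between a predicted and a reference binary mask. A minimal illustrative implementation (not the paper's code) is:

```python
def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks.

    pred, target: equal-length iterables of 0/1 values (e.g. flattened masks).
    Returns a float in [0, 1]; 1.0 means perfect overlap.
    """
    pred = list(pred)
    target = list(target)
    # Count pixels that are foreground in both masks.
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Example: 3 of 4 predicted foreground pixels overlap the reference mask.
print(dice_score([1, 1, 1, 1, 0], [1, 1, 1, 0, 1]))  # 2*3/(4+4) = 0.75
```

A "30% improvement in Dice score" thus means a 30-point-of-overlap gain on this 0-to-1 scale.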
Related papers
- Towards Unifying Anatomy Segmentation: Automated Generation of a Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines [113.08940153125616]
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes, providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z)
- PCDAL: A Perturbation Consistency-Driven Active Learning Approach for Medical Image Segmentation and Classification [12.560273908522714]
Supervised learning relies heavily on large-scale annotated data, which is expensive, time-consuming, and often impractical to acquire in medical imaging applications.
Active Learning (AL) methods have been widely applied in natural image classification tasks to reduce annotation costs.
We propose an AL-based method that can be simultaneously applied to 2D medical image classification, segmentation, and 3D medical image segmentation tasks.
arXiv Detail & Related papers (2023-06-29T13:11:46Z)
- FBA-Net: Foreground and Background Aware Contrastive Learning for Semi-Supervised Atrium Segmentation [10.11072886547561]
We propose a contrastive learning strategy of foreground and background representations for semi-supervised 3D medical image segmentation.
Our framework has the potential to advance the field of semi-supervised 3D medical image segmentation.
arXiv Detail & Related papers (2023-06-27T04:14:50Z)
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight-based hybrid medical image segmentation approach.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z)
- Semi-Supervised and Self-Supervised Collaborative Learning for Prostate 3D MR Image Segmentation [8.527048567343234]
Volumetric magnetic resonance (MR) image segmentation plays an important role in many clinical applications.
Deep learning (DL) has recently achieved state-of-the-art or even human-level performance on various image segmentation tasks.
In this work, we aim to train a semi-supervised and self-supervised collaborative learning framework for prostate 3D MR image segmentation.
arXiv Detail & Related papers (2022-11-16T11:40:13Z)
- Deep Learning Based Cardiac MRI Segmentation: Do We Need Experts? [12.36854197042851]
We show that a segmentation neural network trained on non-expert ground-truth data is, for all practical purposes, as good as one trained on expert ground-truth data.
We highlight an opportunity for the efficient and cheap creation of annotations for cardiac datasets.
arXiv Detail & Related papers (2021-07-23T20:10:58Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation approach whose goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach makes it possible to train image segmentation models without acquiring expensive annotations.
We test the proposed method on the EndoVis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
- Suggestive Annotation of Brain Tumour Images with Gradient-guided Sampling [14.092503407739422]
We propose an efficient annotation framework for brain tumour images that is able to suggest informative sample images for human experts to annotate.
Experiments show that training a segmentation model with only 19% suggestively annotated patient scans from the BraTS 2019 dataset can achieve performance comparable to training on the full dataset for the whole-tumour segmentation task.
arXiv Detail & Related papers (2020-06-26T13:39:49Z)
- A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging [90.29017019187282]
" 2018 Left Atrium Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analyse of the submitted algorithms using technical and biological metrics was performed.
Results show the top method achieved a dice score of 93.2% and a mean surface to a surface distance of 0.7 mm.
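The mean surface-to-surface distance reported above averages, over the boundary points of one mask, the distance to the nearest boundary point of the other mask, symmetrized over both directions. A small illustrative sketch on 2D binary masks (not the challenge's evaluation code, which operates on 3D volumes in millimetre units) is:

```python
import math

def boundary(mask):
    """Coordinates of foreground pixels with at least one 4-neighbor that is
    background (pixels outside the grid count as background)."""
    h, w = len(mask), len(mask[0])
    pts = []
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if ni < 0 or ni >= h or nj < 0 or nj >= w or not mask[ni][nj]:
                    pts.append((i, j))
                    break
    return pts

def mean_surface_distance(a, b):
    """Symmetric mean surface distance between two 2D binary masks."""
    pa, pb = boundary(a), boundary(b)
    def avg_nearest(src, dst):
        # Average, over src points, of the distance to the closest dst point.
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (avg_nearest(pa, pb) + avg_nearest(pb, pa))

# Identical masks are zero distance apart.
print(mean_surface_distance([[1, 1], [1, 1]], [[1, 1], [1, 1]]))  # 0.0
```

A score of 0.7 mm therefore means the predicted atrial surface lies, on average, under a millimetre from the reference surface.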
arXiv Detail & Related papers (2020-04-26T08:49:17Z)
- Weakly supervised multiple instance learning histopathological tumor segmentation [51.085268272912415]
We propose a weakly supervised framework for whole-slide image segmentation.
We exploit a multiple instance learning scheme for training models.
The proposed framework has been evaluated on multi-locations and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
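In multiple instance learning, a label is attached to a bag of instances (here, patches of a whole slide) and only the bag-level prediction is supervised. The abstract does not specify the pooling used; one common formulation is max pooling, sketched here under that assumption:

```python
def bag_prediction(instance_scores):
    """Max-pooling MIL: a bag's score is that of its most suspicious instance.

    instance_scores: per-patch tumor probabilities in [0, 1]. A slide with
    even one confidently tumorous patch yields a high slide-level score,
    so slide-level labels suffice for training.
    """
    return max(instance_scores)

# A slide (bag) with one high-scoring patch is predicted positive.
print(bag_prediction([0.1, 0.05, 0.92, 0.3]))  # 0.92
```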
arXiv Detail & Related papers (2020-04-10T13:12:47Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.