Suggestive Annotation of Brain MR Images with Gradient-guided Sampling
- URL: http://arxiv.org/abs/2206.01014v1
- Date: Thu, 2 Jun 2022 12:23:44 GMT
- Title: Suggestive Annotation of Brain MR Images with Gradient-guided Sampling
- Authors: Chengliang Dai, Shuo Wang, Yuanhan Mo, Elsa Angelini, Yike Guo, Wenjia
Bai
- Abstract summary: We propose an efficient annotation framework for brain MR images that can suggest informative sample images for human experts to annotate.
We evaluate the framework on two different brain image analysis tasks, namely brain tumour segmentation and whole brain segmentation.
The proposed framework demonstrates a promising way to save manual annotation cost and improve data efficiency in medical imaging applications.
- Score: 12.928940875474378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning has been widely adopted for medical image analysis in recent
years given its promising performance in image segmentation and classification
tasks. The success of machine learning, in particular supervised learning,
depends on the availability of manually annotated datasets. For medical imaging
applications, such annotated datasets are not easy to acquire; it takes a
substantial amount of time and resources to curate an annotated medical image
set. In this paper, we propose an efficient annotation framework for brain MR
images that can suggest informative sample images for human experts to
annotate. We evaluate the framework on two different brain image analysis
tasks, namely brain tumour segmentation and whole brain segmentation.
Experiments show that for the brain tumour segmentation task on the BraTS 2019
dataset, training a segmentation model with only 7% suggestively annotated
image samples can achieve a performance comparable to that of training on the
full dataset. For whole brain segmentation on the MALC dataset, training with
42% suggestively annotated image samples can achieve a comparable performance
to training on the full dataset. The proposed framework demonstrates a
promising way to save manual annotation cost and improve data efficiency in
medical imaging applications.
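The core idea of gradient-guided sampling can be illustrated with a minimal, generic sketch (not the authors' exact method, which operates on learned image representations): score each unannotated sample by the norm of the loss gradient it would induce under the model's own pseudo-label, then suggest the highest-scoring samples to the annotator. The logistic-regression model, toy data, and function names below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_norm_scores(X, w):
    """Score each unlabeled sample by the norm of the per-sample loss
    gradient, using the model's own prediction as a pseudo-label.
    Larger gradients suggest more informative samples."""
    p = sigmoid(X @ w)                     # predicted probabilities
    pseudo = (p >= 0.5).astype(float)      # pseudo-labels from the model
    # Per-sample gradient of the logistic loss w.r.t. w is (p - y) * x
    grads = (p - pseudo)[:, None] * X
    return np.linalg.norm(grads, axis=1)

def suggest_samples(X, w, k):
    """Return indices of the k samples with the largest gradient norms."""
    scores = gradient_norm_scores(X, w)
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
X_unlabeled = rng.normal(size=(100, 5))    # toy stand-in for image features
w = rng.normal(size=5)                     # toy stand-in for a trained model
picked = suggest_samples(X_unlabeled, w, k=7)
print(picked)
```

Samples with predictions near the decision boundary produce the largest pseudo-label gradients, so the ranking naturally prioritises uncertain cases; the paper applies the same principle in the image domain to pick the 7% (BraTS 2019) and 42% (MALC) of scans worth annotating first.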
Related papers
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical
Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z) - Semi-Supervised Image Captioning by Adversarially Propagating Labeled
Data [95.0476489266988]
We present a novel data-efficient semi-supervised framework to improve the generalization of image captioning models.
Our proposed method trains a captioner to learn from paired data and to progressively associate unpaired data.
We present extensive empirical results on both (1) image-based and (2) dense region-based captioning datasets, followed by a comprehensive analysis of the scarcely-paired setting.
arXiv Detail & Related papers (2023-01-26T15:25:43Z) - Semi-Supervised and Self-Supervised Collaborative Learning for Prostate
3D MR Image Segmentation [8.527048567343234]
Volumetric magnetic resonance (MR) image segmentation plays an important role in many clinical applications.
Deep learning (DL) has recently achieved state-of-the-art or even human-level performance on various image segmentation tasks.
In this work, we aim to train a semi-supervised and self-supervised collaborative learning framework for prostate 3D MR image segmentation.
arXiv Detail & Related papers (2022-11-16T11:40:13Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - Self-Supervised Learning as a Means To Reduce the Need for Labeled Data
in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z) - Exemplar Learning for Medical Image Segmentation [38.61378161105941]
We propose an Exemplar Learning-based Synthesis Net (ELSNet) framework for medical image segmentation.
ELSNet introduces two new modules for image segmentation: an exemplar-guided synthesis module and a pixel-prototype based contrastive embedding module.
We conduct experiments on several organ segmentation datasets and present an in-depth analysis.
arXiv Detail & Related papers (2022-04-03T00:10:06Z) - Self-Paced Contrastive Learning for Semi-supervised Medical Image
Segmentation with Meta-labels [6.349708371894538]
We propose to adapt contrastive learning to work with meta-label annotations.
We use the meta-labels for pre-training the image encoder as well as to regularize a semi-supervised training.
Results on three different medical image segmentation datasets show that our approach highly boosts the performance of a model trained on a few scans.
arXiv Detail & Related papers (2021-07-29T04:30:46Z) - Cascaded Robust Learning at Imperfect Labels for Chest X-ray
Segmentation [61.09321488002978]
We present a novel cascaded robust learning framework for chest X-ray segmentation with imperfect annotation.
Our model consists of three independent networks, which can effectively learn useful information from their peer networks.
Our method achieves a significant improvement in segmentation accuracy compared to previous methods.
arXiv Detail & Related papers (2021-04-05T15:50:16Z) - Towards Unsupervised Learning for Instrument Segmentation in Robotic
Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation method where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without the need to acquire expensive annotations.
We test our proposed method on Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z) - Suggestive Annotation of Brain Tumour Images with Gradient-guided
Sampling [14.092503407739422]
We propose an efficient annotation framework for brain tumour images that is able to suggest informative sample images for human experts to annotate.
Experiments show that training a segmentation model with only 19% suggestively annotated patient scans from the BraTS 2019 dataset can achieve a performance comparable to training a model on the full dataset for the whole tumour segmentation task.
arXiv Detail & Related papers (2020-06-26T13:39:49Z) - Unified Representation Learning for Efficient Medical Image Analysis [0.623075162128532]
We propose a multi-task training approach for medical image analysis using a unified modality-specific feature representation (UMS-Rep)
Our results demonstrate that the proposed approach reduces the overall demand for computational resources and improves target task generalization and performance.
arXiv Detail & Related papers (2020-06-19T16:52:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.