DSAL: Deeply Supervised Active Learning from Strong and Weak Labelers
for Biomedical Image Segmentation
- URL: http://arxiv.org/abs/2101.09057v1
- Date: Fri, 22 Jan 2021 11:31:33 GMT
- Title: DSAL: Deeply Supervised Active Learning from Strong and Weak Labelers
for Biomedical Image Segmentation
- Authors: Ziyuan Zhao, Zeng Zeng, Kaixin Xu, Cen Chen, Cuntai Guan
- Abstract summary: We propose a deep active semi-supervised learning framework, DSAL, combining active learning and semi-supervised learning strategies.
In DSAL, a new criterion based on the deep supervision mechanism is proposed to select informative samples with high uncertainties.
We use the proposed criteria to select samples for strong and weak labelers to produce oracle labels and pseudo labels simultaneously at each active learning iteration.
- Score: 13.707848142719424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image segmentation is one of the most essential biomedical image processing
problems for different imaging modalities, including microscopy and X-ray in
the Internet-of-Medical-Things (IoMT) domain. However, annotating biomedical
images is knowledge-driven, time-consuming, and labor-intensive, making it
difficult to obtain abundant labels at limited cost. Active learning
strategies ease the burden of human annotation by querying only a subset of
the training data for annotation. Despite receiving attention, most active
learning methods still incur high computational costs, use unlabeled data
inefficiently, and tend to ignore the intermediate knowledge within networks.
In this work, we propose a deep active
semi-supervised learning framework, DSAL, combining active learning and
semi-supervised learning strategies. In DSAL, a new criterion based on the
deep supervision mechanism is proposed to select informative samples with high
uncertainties and low uncertainties for strong labelers and weak labelers
respectively. The internal criterion leverages the disagreement of intermediate
features within the deep learning network for active sample selection, which
subsequently reduces the computational costs. We use the proposed criteria to
select samples for strong and weak labelers to produce oracle labels and pseudo
labels simultaneously at each active learning iteration in an ensemble learning
manner, which can be examined on an IoMT platform. Extensive experiments on
multiple medical image datasets demonstrate the superiority of the proposed
method over state-of-the-art active learning methods.
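The selection criterion described in the abstract can be illustrated with a small sketch. The snippet below is a minimal, hypothetical illustration rather than the authors' implementation: it assumes a segmentation network with several deep-supervision heads, scores each unlabeled sample by the disagreement (here, pixel-wise variance) among the heads' predictions, and routes the most uncertain samples to the strong (oracle) labeler and the most confident ones to the weak labeler for pseudo-labeling. All names and shapes are illustrative.

```python
import numpy as np

def head_disagreement(head_probs):
    """Uncertainty of one sample: variance of the predicted foreground
    probability across deep-supervision heads, averaged over pixels.
    head_probs: array of shape (num_heads, H, W)."""
    return head_probs.var(axis=0).mean()

def split_for_labelers(unlabeled_probs, n_strong, n_weak):
    """Rank unlabeled samples by deep-supervision disagreement and split them:
    highest-uncertainty samples go to the strong (oracle) labeler, while the
    lowest-uncertainty samples receive pseudo labels from the weak labeler.
    unlabeled_probs: array of shape (num_samples, num_heads, H, W)."""
    scores = np.array([head_disagreement(p) for p in unlabeled_probs])
    order = np.argsort(scores)          # ascending uncertainty
    weak_ids = order[:n_weak]           # confident -> pseudo labels
    strong_ids = order[-n_strong:]      # uncertain -> oracle labels
    return strong_ids, weak_ids

# Toy usage: 20 unlabeled images, 4 deep-supervision heads, 64x64 outputs.
rng = np.random.default_rng(0)
probs = rng.random((20, 4, 64, 64))
strong_ids, weak_ids = split_for_labelers(probs, n_strong=3, n_weak=5)
print("query oracle for:", strong_ids, "| pseudo-label:", weak_ids)
```

In the full framework, pseudo labels for the low-uncertainty samples would be formed by ensembling the heads' predictions; the sketch only shows the routing step performed at each active learning iteration.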
Related papers
- Best of Both Worlds: Multimodal Contrastive Learning with Tabular and
Imaging Data [7.49320945341034]
We propose the first self-supervised contrastive learning framework that leverages paired imaging and tabular data to train unimodal encoders.
Our solution combines SimCLR and SCARF, two leading contrastive learning strategies.
We show the generalizability of our approach to natural images using the DVM car advertisement dataset.
arXiv Detail & Related papers (2023-03-24T15:44:42Z)
- Active learning for medical image segmentation with stochastic batches [13.171801108109198]
To reduce manual labelling, active learning (AL) targets the most informative samples from the unlabelled set to annotate and add to the labelled training set.
This work aims to take advantage of the diversity and speed offered by random sampling to improve the selection of uncertainty-based AL methods for segmenting medical images.
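As a rough sketch of the stochastic-batch idea summarized above (the names and the entropy-based uncertainty score are assumptions, not the paper's exact formulation): rather than ranking every unlabelled sample individually, uncertainty is aggregated over randomly drawn batches and the most uncertain batch is sent for annotation, combining the diversity of random sampling with uncertainty-based selection.

```python
import numpy as np

def batch_uncertainty_selection(probs, batch_size, n_batches, rng):
    """Score randomly drawn candidate batches by mean predictive entropy and
    return the indices of the most uncertain batch.
    probs: (num_samples, num_classes) predicted class probabilities."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    best_batch, best_score = None, -np.inf
    for _ in range(n_batches):
        batch = rng.choice(len(probs), size=batch_size, replace=False)
        score = entropy[batch].mean()
        if score > best_score:
            best_batch, best_score = batch, score
    return best_batch

# Toy usage on random softmax outputs for 100 samples and 3 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=100)
print(batch_uncertainty_selection(probs, batch_size=8, n_batches=20, rng=rng))
```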
arXiv Detail & Related papers (2023-01-18T17:25:55Z)
- Deep reinforced active learning for multi-class image classification [0.0]
High accuracy medical image classification can be limited by the costs of acquiring more data as well as the time and expertise needed to label existing images.
We apply active learning to medical image classification, a method which aims to maximise model performance on a minimal subset from a larger pool of data.
arXiv Detail & Related papers (2022-06-20T09:30:55Z)
- Robust Medical Image Classification from Noisy Labeled Data with Global
and Local Representation Guided Co-training [73.60883490436956]
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ the self-ensemble model with a noisy label filter to efficiently select the clean and noisy samples.
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
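The noisy-label filter is not specified in this summary; a common stand-in is the small-loss criterion, sketched below purely as an illustration (the function, the clean ratio, and the use of ensemble-model losses are assumptions, not the paper's method).

```python
import numpy as np

def small_loss_filter(per_sample_loss, clean_ratio=0.7):
    """Split samples into 'clean' and 'noisy' by the small-loss criterion:
    samples with the smallest loss (e.g., measured against a self-ensemble
    model's predictions) are treated as clean.
    per_sample_loss: (n,) array of losses; clean_ratio: fraction kept as clean."""
    n_clean = int(len(per_sample_loss) * clean_ratio)
    order = np.argsort(per_sample_loss)
    return order[:n_clean], order[n_clean:]   # clean indices, noisy indices

losses = np.array([0.1, 2.3, 0.4, 1.7, 0.2, 0.9])
clean, noisy = small_loss_filter(losses, clean_ratio=0.5)
print("clean:", clean, "noisy:", noisy)
```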
arXiv Detail & Related papers (2022-05-10T07:50:08Z)
- Federated Cycling (FedCy): Semi-supervised Federated Learning of
Surgical Phases [57.90226879210227]
FedCy is a semi-supervised learning (FSSL) method that combines FL and self-supervised learning to exploit a decentralized dataset of both labeled and unlabeled videos.
We demonstrate significant performance gains over state-of-the-art FSSL methods on the task of automatic recognition of surgical phases.
arXiv Detail & Related papers (2022-03-14T17:44:53Z)
- 2021 BEETL Competition: Advancing Transfer Learning for Subject
Independence & Heterogenous EEG Data Sets [89.84774119537087]
We design two transfer learning challenges around diagnostics and Brain-Computer-Interfacing (BCI)
Task 1 is centred on medical diagnostics, addressing automatic sleep stage annotation across subjects.
Task 2 is centred on Brain-Computer Interfacing (BCI), addressing motor imagery decoding across both subjects and data sets.
arXiv Detail & Related papers (2022-02-14T12:12:20Z)
- Positional Contrastive Learning for Volumetric Medical Image
Segmentation [13.086140606803408]
We propose a novel positional contrastive learning framework to generate contrastive data pairs.
The proposed PCL method can substantially improve the segmentation performance compared to existing methods in both semi-supervised setting and transfer learning setting.
arXiv Detail & Related papers (2021-06-16T22:15:28Z)
- Self-Ensembling Contrastive Learning for Semi-Supervised Medical Image
Segmentation [6.889911520730388]
We aim to boost the performance of semi-supervised learning for medical image segmentation with limited labels.
We learn latent representations directly at feature-level by imposing contrastive loss on unlabeled images.
We conduct experiments on an MRI and a CT segmentation dataset and demonstrate that the proposed method achieves state-of-the-art performance.
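The feature-level contrastive loss mentioned above is typically an InfoNCE-style objective; the toy sketch below illustrates such an objective on random feature vectors (the function name, temperature, and dimensions are illustrative and not taken from the paper).

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Contrastive (InfoNCE-style) loss for one anchor feature:
    pull the positive view close, push negatives away.
    anchor, positive: (d,) L2-normalized features; negatives: (k, d)."""
    pos = np.exp(anchor @ positive / temperature)
    neg = np.exp(negatives @ anchor / temperature).sum()
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
def unit(v): return v / np.linalg.norm(v)
a = unit(rng.normal(size=32))
p = unit(a + 0.1 * rng.normal(size=32))   # augmented view of the same image
negs = np.stack([unit(rng.normal(size=32)) for _ in range(8)])
print(float(info_nce(a, p, negs)))
```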
arXiv Detail & Related papers (2021-05-27T03:27:58Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image
Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention
Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy to aid the cross-attention process and is able to overcome the imbalance between classes and easy-dominated samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
- Confident Coreset for Active Learning in Medical Image Analysis [57.436224561482966]
We propose a novel active learning method, confident coreset, which considers both uncertainty and distribution for effectively selecting informative samples.
By comparative experiments on two medical image analysis tasks, we show that our method outperforms other active learning methods.
arXiv Detail & Related papers (2020-04-05T13:46:16Z)
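The entry above combines uncertainty with coverage of the data distribution. A generic sketch of that idea, written as a greedy, uncertainty-weighted variant of coreset selection (the combination rule and all names are assumptions, not the paper's exact method), might look like:

```python
import numpy as np

def uncertainty_weighted_coreset(features, uncertainty, labeled_idx, budget):
    """Greedily pick unlabeled points that are far from the current labeled set
    (coverage) and highly uncertain (informativeness).
    features: (n, d) embeddings; uncertainty: (n,) scores in [0, 1]."""
    selected = list(labeled_idx)
    candidates = [i for i in range(len(features)) if i not in selected]
    picks = []
    for _ in range(budget):
        # distance of each candidate to its nearest already-selected point
        dists = np.array([
            np.linalg.norm(features[c] - features[selected], axis=1).min()
            for c in candidates
        ])
        gains = dists * uncertainty[candidates]   # assumed combination rule
        best = candidates.pop(int(np.argmax(gains)))
        selected.append(best)
        picks.append(best)
    return picks

rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 16))
unc = rng.random(50)
print(uncertainty_weighted_coreset(feats, unc, labeled_idx=[0, 1], budget=5))
```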
This list is automatically generated from the titles and abstracts of the papers on this site.