Confident Coreset for Active Learning in Medical Image Analysis
- URL: http://arxiv.org/abs/2004.02200v1
- Date: Sun, 5 Apr 2020 13:46:16 GMT
- Title: Confident Coreset for Active Learning in Medical Image Analysis
- Authors: Seong Tae Kim, Farrukh Mushtaq, Nassir Navab
- Abstract summary: We propose a novel active learning method, confident coreset, which considers both uncertainty and distribution for effectively selecting informative samples.
By comparative experiments on two medical image analysis tasks, we show that our method outperforms other active learning methods.
- Score: 57.436224561482966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in deep learning have resulted in great successes in various
applications. Although semi-supervised or unsupervised learning methods have
been widely investigated, the performance of deep neural networks still depends
heavily on annotated data. The problem is that the annotation budget is usually
limited, because annotating medical data is time-consuming and expensive. Active
learning is one of the solutions to this problem, where an
active learner is designed to indicate which samples need to be annotated to
effectively train a target model. In this paper, we propose a novel active
learning method, confident coreset, which considers both uncertainty and
distribution for effectively selecting informative samples. By comparative
experiments on two medical image analysis tasks, we show that our method
outperforms other active learning methods.
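The abstract does not spell out how uncertainty and distributional coverage are combined, so the sketch below is only an illustration: a k-center greedy (coreset) selection in feature space whose coverage distance is scaled by a per-sample uncertainty score. The feature matrix, uncertainty vector, and the `alpha` weighting are assumptions for illustration, not the paper's actual formulation.
```python
import numpy as np

def uncertainty_weighted_coreset(features, uncertainty, labeled_idx, budget, alpha=0.5):
    """Hypothetical sketch: greedy k-center selection whose coverage distance
    is scaled by predictive uncertainty (e.g., entropy). Not the paper's exact rule."""
    n = len(features)
    unlabeled = np.setdiff1d(np.arange(n), labeled_idx)
    # distance from every point to its nearest already-labeled (covered) point
    dist = np.linalg.norm(
        features[:, None, :] - features[labeled_idx][None, :, :], axis=-1
    ).min(axis=1)
    selected = []
    for _ in range(budget):
        score = dist[unlabeled] * (uncertainty[unlabeled] ** alpha)  # coverage x uncertainty
        pick = unlabeled[np.argmax(score)]
        selected.append(int(pick))
        # the picked point becomes a new center; update coverage distances
        dist = np.minimum(dist, np.linalg.norm(features - features[pick], axis=-1))
        unlabeled = unlabeled[unlabeled != pick]
    return selected
```
With alpha set to 0 this reduces to plain k-center greedy (coreset) selection; larger values push the query toward high-uncertainty regions.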
Related papers
- Zero-shot Active Learning Using Self Supervised Learning [11.28415437676582]
We propose a new Active Learning approach which is model agnostic and does not require an iterative process.
We aim to leverage self-supervised learnt features for the task of Active Learning.
arXiv Detail & Related papers (2024-01-03T11:49:07Z)
- A comprehensive survey on deep active learning in medical image analysis [23.849628978883707]
Deep learning has achieved widespread success in medical image analysis, leading to an increasing demand for large-scale expert-annotated medical image datasets.
Yet, the high cost of annotating medical images severely hampers the development of deep learning in this field.
To reduce annotation costs, active learning aims to select the most informative samples for annotation and train high-performance models with as few labeled samples as possible.
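For readers new to the setting, that description boils down to the standard pool-based loop sketched below; the `train_fn` and `uncertainty_fn` callables are placeholders for illustration, not APIs from any of the listed papers.
```python
import numpy as np

def pool_based_active_learning(pool_x, pool_y, train_fn, uncertainty_fn,
                               init_size=100, query_size=50, rounds=5, seed=0):
    """Generic pool-based loop: train on the labeled set, score the unlabeled
    pool, 'annotate' the most informative samples, and repeat."""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(pool_x), size=init_size, replace=False))
    model = None
    for _ in range(rounds):
        model = train_fn(pool_x[labeled], pool_y[labeled])
        unlabeled = np.setdiff1d(np.arange(len(pool_x)), labeled)
        scores = uncertainty_fn(model, pool_x[unlabeled])   # higher = more informative
        query = unlabeled[np.argsort(scores)[-query_size:]]
        labeled.extend(int(i) for i in query)               # the oracle labels these
    return model, labeled
```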
arXiv Detail & Related papers (2023-10-22T08:46:40Z)
- Data Efficient Contrastive Learning in Histopathology using Active Sampling [0.0]
Deep learning algorithms can provide robust quantitative analysis in digital pathology.
These algorithms require large amounts of annotated training data.
Self-supervised methods have been proposed to learn features using ad-hoc pretext tasks.
We propose a new method for actively sampling informative members from the training set using a small proxy network.
arXiv Detail & Related papers (2023-03-28T18:51:22Z)
- TAAL: Test-time Augmentation for Active Learning in Medical Image Segmentation [7.856339385917824]
This paper proposes Test-time Augmentation for Active Learning (TAAL), a novel semi-supervised active learning approach for segmentation.
Our results on a publicly-available dataset of cardiac images show that TAAL outperforms existing baseline methods in both fully-supervised and semi-supervised settings.
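The summary only says that TAAL relies on test-time augmentation; one plausible way to turn that into a query score (purely an illustration, with flips as stand-in augmentations and `predict_fn` as a hypothetical segmentation model) is to measure how much the augmented predictions disagree.
```python
import numpy as np

def tta_disagreement(predict_fn, image):
    """Illustrative score: variance of per-pixel predictions across simple
    test-time augmentations, mapped back to the original image frame."""
    augments = [
        (lambda x: x,          lambda p: p),            # identity
        (lambda x: x[:, ::-1], lambda p: p[:, ::-1]),   # horizontal flip
        (lambda x: x[::-1, :], lambda p: p[::-1, :]),   # vertical flip
    ]
    preds = []
    for fwd, inv in augments:
        p = predict_fn(fwd(image).copy())  # per-pixel foreground probability map
        preds.append(inv(p))               # undo the augmentation on the prediction
    preds = np.stack(preds)                # (n_augmentations, H, W)
    return float(preds.var(axis=0).mean())
```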
arXiv Detail & Related papers (2023-01-16T22:19:41Z)
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- Low-Regret Active Learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z)
- Few-Cost Salient Object Detection with Adversarial-Paced Learning [95.0220555274653]
This paper proposes to learn the effective salient object detection model based on the manual annotation on a few training images only.
We refer to this task as few-cost salient object detection and propose an adversarial-paced learning (APL) framework to facilitate the few-cost learning scenario.
arXiv Detail & Related papers (2021-04-05T14:15:49Z)
- Efficacy of Bayesian Neural Networks in Active Learning [11.609770399591516]
We show that Bayesian neural networks are more efficient than ensemble-based techniques in capturing uncertainty.
Our findings also reveal some key drawbacks of ensemble techniques, which were recently shown to be more effective than Monte Carlo dropout.
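As a concrete point of reference for one of the uncertainty estimates being compared, here is a minimal Monte Carlo dropout scorer in PyTorch; the model, sample count, and entropy-based acquisition are assumptions for illustration rather than the paper's exact protocol.
```python
import torch

def mc_dropout_entropy(model, x, n_samples=20):
    """Predictive entropy with dropout kept active at inference time."""
    model.train()  # keeps dropout stochastic (caveat: also affects BatchNorm)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        ).mean(dim=0)                                         # (batch, classes)
    return -(probs * probs.clamp_min(1e-12).log()).sum(-1)    # (batch,)
```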
arXiv Detail & Related papers (2021-04-02T06:02:11Z)
- Bayesian active learning for production, a systematic study and a reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We conduct a systematic study of the effects of the most common issues in real-world datasets on the deep active learning process.
We derive two techniques that can speed up the active learning loop: partial uncertainty sampling and a larger query size.
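One common reading of these two speed-ups (an assumption on my part, not a quote from the paper) is to score only a random subset of the unlabeled pool each round and to request more labels per query, e.g.:
```python
import numpy as np

def partial_uncertainty_query(score_fn, pool_size, query_size=512,
                              subset_size=5000, seed=None):
    """Score a random subset of the unlabeled pool instead of all of it,
    then return the `query_size` highest-uncertainty indices."""
    rng = np.random.default_rng(seed)
    subset = rng.choice(pool_size, size=min(subset_size, pool_size), replace=False)
    scores = score_fn(subset)                       # uncertainty per subset sample
    return subset[np.argsort(scores)[-query_size:]]
```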
arXiv Detail & Related papers (2020-06-17T14:51:11Z)
- LRTD: Long-Range Temporal Dependency based Active Learning for Surgical Workflow Recognition [67.86810761677403]
We propose a novel active learning method for cost-effective surgical video analysis.
Specifically, we propose a non-local recurrent convolutional network (NL-RCNet), which introduces a non-local block to capture long-range temporal dependencies.
We validate our approach on a large surgical video dataset (Cholec80) by performing the surgical workflow recognition task.
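The non-local block mentioned above is a standard self-attention-style operation (Wang et al., 2018); the sketch below shows a minimal 1D variant over a sequence of clip features as a rough analogue, not the paper's exact NL-RCNet implementation.
```python
import torch
import torch.nn as nn

class NonLocalBlock1D(nn.Module):
    """Minimal 1D non-local block: every time step attends to all others."""
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.theta = nn.Conv1d(channels, reduced, kernel_size=1)
        self.phi = nn.Conv1d(channels, reduced, kernel_size=1)
        self.g = nn.Conv1d(channels, reduced, kernel_size=1)
        self.out = nn.Conv1d(reduced, channels, kernel_size=1)

    def forward(self, x):                         # x: (batch, channels, time)
        q = self.theta(x).transpose(1, 2)          # (B, T, C')
        k = self.phi(x)                            # (B, C', T)
        v = self.g(x).transpose(1, 2)              # (B, T, C')
        attn = torch.softmax(q @ k, dim=-1)        # pairwise affinities over time
        y = (attn @ v).transpose(1, 2)             # (B, C', T)
        return x + self.out(y)                     # residual connection
```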
arXiv Detail & Related papers (2020-04-21T09:21:22Z)