CLINICAL: Targeted Active Learning for Imbalanced Medical Image Classification
- URL: http://arxiv.org/abs/2210.01520v1
- Date: Tue, 4 Oct 2022 10:57:05 GMT
- Title: CLINICAL: Targeted Active Learning for Imbalanced Medical Image Classification
- Authors: Suraj Kothawade, Atharv Savarkar, Venkat Iyer, Lakshman Tamil, Ganesh Ramakrishnan, Rishabh Iyer
- Abstract summary: Suboptimal performance is often obtained on some classes due to the natural class imbalance that comes with medical data.
We propose Clinical, a framework that uses submodular mutual information functions as acquisition functions to mine critical data points from rare classes.
We show that Clinical outperforms the state-of-the-art active learning methods by acquiring a diverse set of data points that belong to the rare classes.
- Score: 12.576168993188315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training deep learning models on medical datasets that perform well for all classes is a challenging task. Suboptimal performance is often obtained on some classes due to the natural class imbalance that comes with medical data. An effective way to tackle this problem is targeted active learning, where we iteratively add data points belonging to the rare classes to the training data. However, existing active learning methods are ineffective at targeting rare classes in medical datasets. In this work, we propose Clinical (targeted aCtive Learning for ImbalaNced medICal imAge cLassification), a framework that uses submodular mutual information functions as acquisition functions to mine critical data points from rare classes. We apply our framework to a wide array of medical imaging datasets under a variety of real-world class imbalance scenarios, namely binary imbalance and long-tail imbalance. We show that Clinical outperforms state-of-the-art active learning methods by acquiring a diverse set of data points that belong to the rare classes.
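For intuition, the targeted acquisition step described in the abstract can be illustrated with a small sketch. The snippet below is a minimal, hypothetical NumPy implementation of greedy batch selection under a facility-location-style submodular mutual information (SMI) objective, where a handful of rare-class exemplars act as the query set that guides which unlabeled points are mined. The cosine-similarity kernel, the `eta` trade-off, and all function and variable names are illustrative assumptions; they are not taken from the paper's implementation, and CLINICAL's exact SMI functions may differ.

```python
import numpy as np

def smi_gain(sim_uq, selected, candidate, eta=1.0):
    """Marginal gain of adding `candidate` to `selected` under a
    facility-location-style SMI objective (illustrative, not the paper's code).
    sim_uq[i, q] is the similarity of unlabeled point i to query exemplar q."""
    n_query = sim_uq.shape[1]
    # How well each rare-class query exemplar is covered by the current batch.
    current = sim_uq[selected, :].max(axis=0) if selected else np.zeros(n_query)
    updated = np.maximum(current, sim_uq[candidate, :])
    coverage_gain = float((updated - current).sum())
    # Relevance of the candidate itself to the rare-class query set.
    relevance = eta * float(sim_uq[candidate, :].max())
    return coverage_gain + relevance

def select_batch(unlabeled_emb, query_emb, budget, eta=1.0):
    """Greedily pick `budget` unlabeled points that jointly cover the
    rare-class query exemplars (an SMI-style targeted acquisition step)."""
    # Cosine-similarity kernel between the unlabeled pool and the query set.
    u = unlabeled_emb / np.linalg.norm(unlabeled_emb, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    sim_uq = u @ q.T
    selected = []
    for _ in range(budget):
        candidates = [c for c in range(len(unlabeled_emb)) if c not in selected]
        gains = [smi_gain(sim_uq, selected, c, eta) for c in candidates]
        selected.append(candidates[int(np.argmax(gains))])
    return selected

# Hypothetical usage: embeddings would come from the current model's feature extractor.
pool = np.random.randn(200, 64)   # unlabeled pool embeddings
rare = np.random.randn(5, 64)     # exemplars of the rare class(es)
print(select_batch(pool, rare, budget=10))
```

Greedy maximization is the standard optimizer for such monotone submodular objectives under a labeling budget, which is why batch selection here is a simple argmax loop over marginal gains.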
Related papers
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight initialization-based hybrid medical image segmentation approach.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z) - Understanding the Tricks of Deep Learning in Medical Image Segmentation: Challenges and Future Directions [66.40971096248946]
In this paper, we collect a series of MedISeg tricks for different model implementation phases.
We experimentally explore the effectiveness of these tricks on consistent baselines.
We also open-sourced a strong MedISeg repository, where each component has the advantage of plug-and-play.
arXiv Detail & Related papers (2022-09-21T12:30:05Z) - Active Data Discovery: Mining Unknown Data using Submodular Information Measures [1.7491858164568674]
We provide an active data discovery framework which can mine unknown data slices and classes efficiently.
We show significant accuracy and labeling efficiency gains with our approach compared to existing state-of-the-art active learning approaches.
arXiv Detail & Related papers (2022-06-17T05:52:18Z) - LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z) - Relational Subsets Knowledge Distillation for Long-tailed Retinal Diseases Recognition [65.77962788209103]
We propose class subset learning by dividing the long-tailed data into multiple class subsets according to prior knowledge.
It enforces the model to focus on learning the subset-specific knowledge.
The proposed framework proved to be effective for the long-tailed retinal diseases recognition task.
arXiv Detail & Related papers (2021-04-22T13:39:33Z) - SSLM: Self-Supervised Learning for Medical Diagnosis from MR Video [19.5917119072985]
In this paper, we propose a self-supervised learning approach to learn the spatial anatomical representations from magnetic resonance (MR) video clips.
The proposed pretext model learns meaningful spatial context-invariant representations.
Different experiments show that the features learnt by the pretext model provide explainable performance in the downstream task.
arXiv Detail & Related papers (2021-04-21T12:01:49Z) - Diminishing Uncertainty within the Training Pool: Active Learning for Medical Image Segmentation [6.3858225352615285]
We explore active learning for the task of segmentation of medical imaging data sets.
We propose three new strategies for active learning: increasing the frequency of uncertain data to bias the training data set, using mutual information among the input images as a regularizer, and adapting the Dice log-likelihood for Stein variational gradient descent (SVGD).
The results indicate an improvement in terms of data reduction, achieving full accuracy while using only 22.69% and 48.85% of the available data for each dataset, respectively.
arXiv Detail & Related papers (2021-01-07T01:55:48Z) - Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, that is a simple yet effective meta learning machine for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z) - Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.