Semi-supervised learning for medical image classification using
imbalanced training data
- URL: http://arxiv.org/abs/2108.08956v1
- Date: Fri, 20 Aug 2021 01:06:42 GMT
- Title: Semi-supervised learning for medical image classification using
imbalanced training data
- Authors: Tri Huynh, Aiden Nibali and Zhen He
- Abstract summary: We propose Adaptive Blended Consistency Loss (ABCL) as a drop-in replacement for consistency loss in perturbation-based SSL methods.
ABCL counteracts data skew by adaptively mixing the target class distribution of the consistency loss in accordance with class frequency.
Our experiments with ABCL reveal improvements to unweighted average recall on two different imbalanced medical image classification datasets.
- Score: 11.87832944550453
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Medical image classification is often challenging for two reasons: a lack of
labelled examples due to expensive and time-consuming annotation protocols, and
imbalanced class labels due to the relative scarcity of disease-positive
individuals in the wider population. Semi-supervised learning (SSL) methods
exist for dealing with a lack of labels, but they generally do not address the
problem of class imbalance. In this study we propose Adaptive Blended
Consistency Loss (ABCL), a drop-in replacement for consistency loss in
perturbation-based SSL methods. ABCL counteracts data skew by adaptively mixing
the target class distribution of the consistency loss in accordance with class
frequency. Our experiments with ABCL reveal improvements to unweighted average
recall on two different imbalanced medical image classification datasets when
compared with existing consistency losses that are not designed to counteract
class imbalance.
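The abstract states the mechanism (a consistency target blended with a class-frequency-aware distribution) but not its exact form. The PyTorch sketch below illustrates that idea under stated assumptions: the inverse-frequency mixing distribution and the per-sample adaptive blend weight alpha are illustrative guesses, not the authors' formulation.

    import torch
    import torch.nn.functional as F

    def abcl_style_consistency(student_logits, teacher_logits, class_counts,
                               temperature=1.0):
        """Illustrative consistency loss that blends the teacher's predicted
        distribution with an inverse-frequency class distribution.
        NOTE: a sketch of the general idea behind ABCL, not the authors'
        exact formulation; the blending rule below is an assumption."""
        # Teacher prediction is the usual consistency target.
        teacher_probs = F.softmax(teacher_logits / temperature, dim=1)

        # Inverse-frequency distribution: rare classes get more mass.
        inv_freq = 1.0 / class_counts.float()
        inv_freq = inv_freq / inv_freq.sum()                     # (C,)

        # Adaptive per-sample blend weight: the more the teacher favours
        # frequent classes, the more the rebalancing distribution is mixed in.
        class_prior = class_counts.float() / class_counts.sum()  # (C,)
        alpha = (teacher_probs * class_prior.unsqueeze(0)).sum(dim=1, keepdim=True)
        alpha = alpha.clamp(0.0, 1.0)

        # Blended target distribution for the consistency loss.
        target = (1.0 - alpha) * teacher_probs + alpha * inv_freq.unsqueeze(0)

        # Cross-entropy between blended target and student prediction.
        log_student = F.log_softmax(student_logits, dim=1)
        return -(target * log_student).sum(dim=1).mean()

    # Example usage with random tensors (3 classes, skewed counts).
    counts = torch.tensor([900, 90, 10])
    s, t = torch.randn(8, 3), torch.randn(8, 3)
    loss = abcl_style_consistency(s, t, counts)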
Related papers
- Addressing Imbalance for Class Incremental Learning in Medical Image Classification [14.242875524728495]
We introduce two plug-in methods to mitigate the adverse effects of imbalance.
First, we propose a CIL-balanced classification loss to mitigate the classification bias toward majority classes.
Second, we propose a distribution margin loss that not only alleviates inter-class overlap in the embedding space but also enforces intra-class compactness.
arXiv Detail & Related papers (2024-07-18T17:59:44Z)
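The exact form of the CIL-balanced classification loss is not given in this summary. As a generic, hypothetical stand-in for mitigating classification bias toward majority classes, a logit-adjusted cross-entropy can be sketched as follows (this follows the well-known logit-adjustment recipe, not this paper's loss):

    import torch
    import torch.nn.functional as F

    def logit_adjusted_ce(logits, labels, class_counts, tau=1.0):
        """Cross-entropy with logits shifted by log class priors, a standard
        way to counter bias toward majority classes. A generic stand-in,
        not the CIL-balanced loss from the paper above."""
        prior = class_counts.float() / class_counts.sum()
        # Shifting by log-priors handicaps rare classes during training, so
        # fitting them forces larger rare-class margins (use raw logits at test).
        adjusted = logits + tau * prior.log().unsqueeze(0)
        return F.cross_entropy(adjusted, labels)

    counts = torch.tensor([500, 50, 5])
    loss = logit_adjusted_ce(torch.randn(4, 3), torch.tensor([0, 1, 2, 1]), counts)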
- Exploring Vacant Classes in Label-Skewed Federated Learning [113.65301899666645]
Label skews, characterized by disparities in local label distribution across clients, pose a significant challenge in federated learning.
This paper introduces FedVLS, a novel approach to label-skewed federated learning that integrates vacant-class distillation and logit suppression simultaneously.
arXiv Detail & Related papers (2024-01-04T16:06:31Z)
- Class-Specific Distribution Alignment for Semi-Supervised Medical Image Classification [14.343079589464994]
Class-Specific Distribution Alignment (CSDA) is a semi-supervised learning framework based on self-training.
We show that our method provides competitive performance on semi-supervised skin disease, thoracic disease, and endoscopic image classification tasks.
arXiv Detail & Related papers (2023-07-29T13:38:19Z)
- SPLAL: Similarity-based pseudo-labeling with alignment loss for semi-supervised medical image classification [11.435826510575879]
Semi-supervised learning (SSL) methods can mitigate the scarcity of labeled data by leveraging both labeled and unlabeled data.
SSL methods for medical image classification need to address two key challenges: (1) estimating reliable pseudo-labels for the images in the unlabeled dataset and (2) reducing biases caused by class imbalance.
In this paper, we propose a novel SSL approach, SPLAL, that effectively addresses these challenges.
arXiv Detail & Related papers (2023-07-10T14:53:24Z)
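SPLAL's precise pseudo-labelling criterion is not spelled out above. As a rough sketch of similarity-based pseudo-labelling in general (cosine similarity to class prototypes with a confidence threshold; all names below are hypothetical):

    import torch
    import torch.nn.functional as F

    def prototype_pseudo_labels(feats, prototypes, threshold=0.8):
        """Generic similarity-based pseudo-labelling: label each unlabeled
        feature with its most similar class prototype, keeping only samples
        whose cosine similarity clears a confidence threshold. A sketch of
        the general idea only; SPLAL's actual criterion may differ."""
        feats = F.normalize(feats, dim=1)            # (N, D)
        protos = F.normalize(prototypes, dim=1)      # (C, D)
        sims = feats @ protos.t()                    # (N, C) cosine similarities
        conf, labels = sims.max(dim=1)
        keep = conf >= threshold                     # confident subset
        return labels[keep], keep

    feats = torch.randn(16, 128)
    protos = torch.randn(5, 128)   # e.g. mean labeled feature per class
    labels, mask = prototype_pseudo_labels(feats, protos)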
- An Embarrassingly Simple Baseline for Imbalanced Semi-Supervised Learning [103.65758569417702]
Semi-supervised learning (SSL) has shown great promise in leveraging unlabeled data to improve model performance.
We consider a more realistic and challenging setting called imbalanced SSL, where imbalanced class distributions occur in both labeled and unlabeled data.
We study a simple yet overlooked baseline -- SimiS -- which tackles data imbalance by simply supplementing labeled data with pseudo-labels.
arXiv Detail & Related papers (2022-11-20T21:18:41Z)
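The SimiS recipe as summarized (supplementing labeled data with pseudo-labels) might look roughly like the sketch below; the class-wise top-up rule toward the largest class count is an assumption, not the paper's stated selection rule:

    import torch
    import torch.nn.functional as F

    def supplement_with_pseudo_labels(unlabeled_logits, class_counts,
                                      threshold=0.95):
        """Pick confident pseudo-labels and top up under-represented classes
        toward the largest class count. A rough sketch of the idea of
        supplementing labeled data with pseudo-labels; SimiS's exact
        selection rule is not given in the summary above."""
        probs = F.softmax(unlabeled_logits, dim=1)
        conf, pseudo = probs.max(dim=1)
        deficit = class_counts.max() - class_counts      # per-class shortfall
        selected = []
        for c in range(len(class_counts)):
            idx = ((pseudo == c) & (conf >= threshold)).nonzero(as_tuple=True)[0]
            selected.append(idx[: deficit[c]])           # take up to the shortfall
        idx = torch.cat(selected)
        return idx, pseudo[idx]                          # indices + pseudo-labels

    counts = torch.tensor([400, 60, 15])
    idx, y = supplement_with_pseudo_labels(torch.randn(1000, 3), counts)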
- Density-Aware Personalized Training for Risk Prediction in Imbalanced Medical Data [89.79617468457393]
Training models on data with a high imbalance rate (class density discrepancy) may lead to suboptimal predictions.
We propose a density-aware framework for training models under this imbalance issue.
We demonstrate the model's improved performance on real-world medical datasets.
arXiv Detail & Related papers (2022-07-23T00:39:53Z)
- PCCT: Progressive Class-Center Triplet Loss for Imbalanced Medical Image Classification [55.703445291264]
Imbalanced training data is a significant challenge for medical image classification.
We propose a novel Progressive Class-Center Triplet (PCCT) framework to alleviate the class imbalance issue.
The PCCT framework works effectively for medical image classification with imbalanced training images.
arXiv Detail & Related papers (2022-07-11T11:43:51Z)
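A plain (non-progressive) class-center triplet loss, sketched below, conveys the core idea of pulling samples toward their own class center and away from the nearest other center by a margin; PCCT's progressive schedule is not reproduced here:

    import torch
    import torch.nn.functional as F

    def class_center_triplet(feats, labels, centers, margin=0.5):
        """Generic class-center triplet loss: each feature should be closer
        to its own class center than to any other center by at least
        `margin`. A sketch of the idea only; PCCT adds a progressive scheme."""
        d = torch.cdist(feats, centers)                     # (N, C) distances
        pos = d.gather(1, labels.unsqueeze(1)).squeeze(1)   # own-center distance
        d_masked = d.clone()
        d_masked.scatter_(1, labels.unsqueeze(1), float("inf"))
        neg = d_masked.min(dim=1).values                    # nearest other center
        return F.relu(pos - neg + margin).mean()

    centers = torch.randn(5, 64)
    loss = class_center_triplet(torch.randn(32, 64),
                                torch.randint(0, 5, (32,)), centers)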
- Phased Progressive Learning with Coupling-Regulation-Imbalance Loss for Imbalanced Classification [11.673344551762822]
Deep neural networks generally perform poorly on datasets that suffer from both quantity imbalance and classification-difficulty imbalance across classes.
We propose a phased progressive learning schedule that smoothly transfers the training emphasis from representation learning to upper-classifier training.
Our code will be open-sourced soon.
arXiv Detail & Related papers (2022-05-24T14:46:39Z)
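One way to picture such a phased schedule is a weight that ramps the total loss from a representation-learning term to a classifier-training term; the linear ramp below is a hypothetical illustration, not the paper's schedule:

    def phase_weight(epoch, start, end):
        """Ramp a weight from 0 to 1 between `start` and `end` epochs,
        shifting training emphasis from representation learning to
        classifier training. Illustrative only; the paper's exact
        schedule may differ."""
        if epoch <= start:
            return 0.0
        if epoch >= end:
            return 1.0
        return (epoch - start) / float(end - start)

    # total = (1 - w) * representation_loss + w * rebalanced_classifier_loss
    w = phase_weight(epoch=30, start=20, end=60)   # -> 0.25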
- ACPL: Anti-curriculum Pseudo-labelling for Semi-supervised Medical Image Classification [22.5935068122522]
We propose a new SSL algorithm, called anti-curriculum pseudo-labelling (ACPL).
ACPL introduces novel techniques to select informative unlabelled samples, improving training balance and allowing the model to work for both multi-label and multi-class problems.
Our method outperforms previous SOTA SSL methods on both datasets.
arXiv Detail & Related papers (2021-11-25T05:31:52Z)
- PLM: Partial Label Masking for Imbalanced Multi-label Classification [59.68444804243782]
Neural networks trained on real-world datasets with long-tailed label distributions are biased towards frequent classes and perform poorly on infrequent classes.
We propose a method, Partial Label Masking (PLM), which stochastically masks labels during training based on each class's ratio of positive to negative samples.
Our method achieves strong performance when compared to existing methods on both multi-label (MultiMNIST and MSCOCO) and single-label (imbalanced CIFAR-10 and CIFAR-100) image classification datasets.
arXiv Detail & Related papers (2021-05-22T18:07:56Z)
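The sketch below shows one way stochastic label masking driven by a per-class positive/negative ratio could look; the keep-probability rule is an assumption, not PLM's exact procedure:

    import torch

    def partial_label_mask(targets, pos_ratio, target_ratio):
        """Stochastically mask positive labels of over-represented classes so
        the effective positive/negative ratio moves toward `target_ratio`.
        A rough sketch of the masking idea, not the paper's exact procedure.
        targets: (N, C) multi-hot labels; pos_ratio, target_ratio: (C,)."""
        # Probability of keeping a positive label for each class.
        keep_prob = (target_ratio / pos_ratio).clamp(max=1.0)    # (C,)
        keep = torch.bernoulli(keep_prob.expand_as(targets))     # (N, C)
        # Negatives are always kept; masked positives drop out of the loss.
        mask = torch.where(targets.bool(), keep, torch.ones_like(keep))
        return mask   # multiply element-wise into the per-label BCE loss

    targets = torch.randint(0, 2, (8, 4)).float()
    mask = partial_label_mask(targets,
                              torch.tensor([3.0, 1.0, 0.2, 5.0]),
                              torch.tensor([1.0, 1.0, 1.0, 1.0]))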
- Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning [126.31716228319902]
We develop the Distribution Aligning Refinery of Pseudo-label (DARP) algorithm.
We show that DARP is provably and efficiently compatible with state-of-the-art SSL schemes.
arXiv Detail & Related papers (2020-07-17T09:16:05Z)
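DARP refines pseudo-labels so their class marginal matches a desired distribution by solving a convex optimization; the sketch below is only the crude reweight-and-renormalize approximation of that goal:

    import torch

    def align_pseudo_labels(probs, target_dist):
        """Crude distribution alignment: reweight soft pseudo-labels by the
        ratio of the desired class marginal to the current one, then
        renormalize. DARP itself solves a constrained optimization; this
        is only a first-order approximation of the same goal."""
        current = probs.mean(dim=0)                        # empirical marginal
        w = target_dist / current.clamp(min=1e-8)          # per-class correction
        aligned = probs * w.unsqueeze(0)
        return aligned / aligned.sum(dim=1, keepdim=True)  # renormalize rows

    probs = torch.softmax(torch.randn(100, 3), dim=1)
    aligned = align_pseudo_labels(probs, torch.tensor([1/3, 1/3, 1/3]))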