Flexible Sampling for Long-tailed Skin Lesion Classification
- URL: http://arxiv.org/abs/2204.03161v1
- Date: Thu, 7 Apr 2022 02:13:56 GMT
- Title: Flexible Sampling for Long-tailed Skin Lesion Classification
- Authors: Lie Ju, Yicheng Wu, Lin Wang, Zhen Yu, Xin Zhao, Xin Wang, Paul
Bonnington, Zongyuan Ge
- Abstract summary: Existing long-tailed learning methods treat each class equally to re-balance the long-tailed distribution.
We propose a curriculum learning-based framework called Flexible Sampling for the long-tailed skin lesion classification task.
- Score: 21.790337883680756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most medical tasks naturally exhibit a long-tailed distribution due to
complex patient-level conditions and the existence of rare diseases.
Existing long-tailed learning methods usually treat each class equally to
re-balance the long-tailed distribution. However, considering that some
challenging classes may present diverse intra-class distributions, re-balancing
all classes equally may lead to a significant performance drop. To address
this, we propose a curriculum learning-based framework called
Flexible Sampling for the long-tailed skin lesion classification task.
Specifically, we initially sample a subset of training data as anchor points
based on the individual class prototypes. Then, these anchor points are used to
pre-train an inference model to evaluate the per-class learning difficulty.
Finally, we use a curriculum sampling module to dynamically query new samples
from the remaining training samples with a learning-difficulty-aware sampling
probability. We evaluated our model against several state-of-the-art methods on
the ISIC dataset. The results under two long-tailed settings demonstrate
the superiority of our proposed training strategy, which achieves a new
benchmark for long-tailed skin lesion classification.
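The three-stage pipeline described in the abstract (prototype-based anchor selection, difficulty estimation, difficulty-aware curriculum sampling) can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the use of mean features as prototypes, and the proportional sampling rule are illustrative assumptions based only on the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_anchors(features, labels, per_class):
    """Pick the samples closest to each class prototype (here: the
    mean feature vector per class, an assumed prototype definition)."""
    anchors = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        proto = features[idx].mean(axis=0)                 # class prototype
        dist = np.linalg.norm(features[idx] - proto, axis=1)
        anchors.extend(idx[np.argsort(dist)[:per_class]])  # nearest samples
    return np.array(anchors)

def difficulty_aware_probs(per_class_difficulty, labels):
    """Sampling probability proportional to the (pre-trained model's)
    estimated difficulty of each sample's class."""
    w = per_class_difficulty[labels]
    return w / w.sum()

# Toy data: 3 classes, 10 samples each, 2-D features.
features = rng.normal(size=(30, 2))
labels = np.repeat(np.arange(3), 10)

# Stage 1: anchor points near each class prototype (would pre-train a model).
anchors = select_anchors(features, labels, per_class=2)

# Stages 2-3: per-class difficulty (e.g. validation error of the anchor-trained
# model, assumed given here) drives which remaining samples are queried next.
probs = difficulty_aware_probs(np.array([0.1, 0.5, 0.4]), labels)
batch = rng.choice(len(labels), size=8, replace=False, p=probs)
```

In this sketch, harder classes (higher assumed difficulty) are sampled more often, which is the opposite of treating every class equally during re-balancing.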
Related papers
- Sub-Clustering for Class Distance Recalculation in Long-Tailed Drug Classification [3.015770349327888]
In the field of drug chemistry, certain tail classes exhibit higher identifiability during training due to their unique molecular structural features.
We propose a novel method that breaks away from the traditional static evaluation paradigm based on sample size.
arXiv Detail & Related papers (2025-04-07T00:09:10Z)
- From Uncertainty to Clarity: Uncertainty-Guided Class-Incremental Learning for Limited Biomedical Samples via Semantic Expansion [0.0]
We propose a class-incremental learning method for limited-sample settings in the biomedical field.
Our method achieves optimal performance, surpassing state-of-the-art methods by as much as 53.54% in accuracy.
arXiv Detail & Related papers (2024-09-12T05:22:45Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits approach (LORT) without the requirement of prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Twice Class Bias Correction for Imbalanced Semi-Supervised Learning [59.90429949214134]
We introduce a novel approach called Twice Class Bias Correction (TCBC).
We estimate the class bias of the model parameters during the training process.
We apply a secondary correction to the model's pseudo-labels for unlabeled samples.
arXiv Detail & Related papers (2023-12-27T15:06:36Z)
- Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks [75.42002070547267]
We propose a self-evolution learning (SE) based mixup approach for data augmentation in text classification.
We introduce a novel instance-specific label smoothing approach, which linearly interpolates the model's output and the one-hot labels of the original samples to generate new soft labels for label mixing.
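The interpolation described above can be sketched in a few lines. This is only an illustration of the stated idea, not the paper's implementation; the interpolation weight `lam` and the function name are assumptions.

```python
import numpy as np

def instance_label_smoothing(model_probs, onehot, lam=0.7):
    """Linearly interpolate the model's predicted distribution with the
    one-hot label to produce an instance-specific soft label.
    `lam` (weight on the hard label) is a hypothetical hyperparameter."""
    return lam * onehot + (1.0 - lam) * model_probs

model_probs = np.array([0.6, 0.3, 0.1])  # model's predicted distribution
onehot = np.array([1.0, 0.0, 0.0])       # original hard label
soft = instance_label_smoothing(model_probs, onehot)
# soft remains a valid probability distribution (sums to 1),
# with smoothing mass placed where the model itself is uncertain
```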
arXiv Detail & Related papers (2023-05-22T23:43:23Z)
- Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport [78.9167477093745]
We propose a novel distribution calibration method by learning the adaptive weight matrix between novel samples and base classes.
Experimental results on standard benchmarks demonstrate that our proposed plug-and-play model outperforms competing approaches.
arXiv Detail & Related papers (2022-10-09T02:32:57Z)
- Semi-supervised Predictive Clustering Trees for (Hierarchical) Multi-label Classification [2.706328351174805]
We propose a hierarchical multi-label classification method based on semi-supervised learning of predictive clustering trees.
We also extend the method towards ensemble learning and propose a method based on the random forest approach.
arXiv Detail & Related papers (2022-07-19T12:49:00Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- An Empirical Study on the Joint Impact of Feature Selection and Data Resampling on Imbalance Classification [4.506770920842088]
This study focuses on the synergy between feature selection and data resampling for imbalance classification.
We conduct a large number of experiments on 52 publicly available datasets, using 9 feature selection methods, 6 resampling approaches for class imbalance learning, and 3 well-known classification algorithms.
arXiv Detail & Related papers (2021-09-01T06:01:51Z)
- The Devil is the Classifier: Investigating Long Tail Relation Classification with Decoupling Analysis [36.298869931803836]
Long-tailed relation classification is a challenging problem as the head classes may dominate the training phase.
We propose a robust classifier with attentive relation routing, which assigns soft weights by automatically aggregating the relations.
arXiv Detail & Related papers (2020-09-15T12:47:00Z)
- Overcoming Classifier Imbalance for Long-tail Object Detection with Balanced Group Softmax [88.11979569564427]
We provide the first systematic analysis of the underperformance of state-of-the-art models under long-tailed distributions.
We propose a novel balanced group softmax (BAGS) module for balancing the classifiers within the detection frameworks through group-wise training.
Extensive experiments on the very recent long-tail large vocabulary object recognition benchmark LVIS show that our proposed BAGS significantly improves the performance of detectors.
arXiv Detail & Related papers (2020-06-18T10:24:26Z)
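The group-wise training idea behind balanced group softmax can be sketched as a softmax computed independently within each group of classes. This is a rough illustration only: the grouping of classes by training-instance count is assumed, and the full BAGS module includes details (such as an extra "others" category per group) that this sketch omits.

```python
import numpy as np

def group_softmax(logits, groups):
    """Softmax computed independently within each class group, so head
    classes do not suppress tail classes in a single global softmax.
    The group assignment here is hypothetical."""
    probs = np.zeros_like(logits, dtype=float)
    for g in groups:
        z = logits[list(g)]
        z = np.exp(z - z.max())       # numerically stable softmax per group
        probs[list(g)] = z / z.sum()
    return probs

logits = np.array([2.0, 1.0, 0.5, -1.0])
# e.g. head classes {0, 1} vs. tail classes {2, 3}, grouped by sample count
probs = group_softmax(logits, [(0, 1), (2, 3)])
```

Each group's probabilities sum to 1 separately, so competition happens only among classes with comparable amounts of training data.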
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.