Local overlap reduction procedure for dynamic ensemble selection
- URL: http://arxiv.org/abs/2206.08455v1
- Date: Thu, 16 Jun 2022 21:31:05 GMT
- Title: Local overlap reduction procedure for dynamic ensemble selection
- Authors: Mariana A. Souza, Robert Sabourin, George D. C. Cavalcanti and Rafael
M. O. Cruz
- Abstract summary: Class imbalance is a characteristic known for making learning more challenging for classification models.
We propose a DS technique that attempts to minimize the effects of the local class overlap during the classifier selection procedure.
Experimental results show that the proposed technique can significantly outperform the baseline.
- Score: 13.304462985219237
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Class imbalance is a characteristic known for making learning more
challenging for classification models as they may end up biased towards the
majority class. A promising approach among the ensemble-based methods in the
context of imbalance learning is Dynamic Selection (DS). DS techniques single
out a subset of the classifiers in the ensemble to label each given unknown
sample according to their estimated competence in the area surrounding the
query. Because only a small region is taken into account in the selection
scheme, the global class disproportion may have less impact on the system's
performance. However, the presence of local class overlap may severely hinder
the DS techniques' performance over imbalanced distributions as it not only
exacerbates the effects of the under-representation but also introduces
ambiguous and possibly unreliable samples to the competence estimation process.
Thus, in this work, we propose a DS technique that attempts to minimize the
effects of the local class overlap during the classifier selection procedure.
The proposed method iteratively removes from the target region the instance
perceived as the hardest to classify until a classifier is deemed competent to
label the query sample. The known samples are characterized using instance
hardness measures that quantify the local class overlap. Experimental results
show that the proposed technique can significantly outperform the baseline as
well as several other DS techniques, suggesting its suitability for dealing
with class under-representation and overlap. Furthermore, the proposed
technique still yielded competitive results when using an under-sampled, less
overlapped version of the labelled sets, especially over problems with a
high proportion of minority class samples in overlap areas. Code available at
https://github.com/marianaasouza/lords.
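As a rough illustration of the procedure described in the abstract, the sketch below iteratively removes the hardest instance from the query's region of competence until some classifier labels the whole remaining region correctly, then uses that classifier. The hardness score (k-Disagreeing Neighbors, one common instance hardness measure), the strict competence rule, and all parameter values are simplifying assumptions for illustration; the authors' actual implementation is in the repository linked above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def kdn_hardness(X, y, k=5):
    """k-Disagreeing Neighbors: fraction of a sample's k nearest
    neighbors with a different label (a local class-overlap proxy)."""
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    return np.array([(y[nbrs[1:]] != y[nbrs[0]]).mean() for nbrs in idx])

def select_and_predict(query, X_dsel, y_dsel, hardness, classifiers, k=7):
    """Shrink the region of competence around `query` by removing its
    hardest instance until one classifier labels the remaining region
    correctly; fall back to a majority vote if the pool runs out."""
    q = query.reshape(1, -1)
    keep = np.arange(len(X_dsel))
    while len(keep) > 0:
        nn = NearestNeighbors(n_neighbors=min(k, len(keep))).fit(X_dsel[keep])
        _, local = nn.kneighbors(q)
        region = keep[local[0]]
        for clf in classifiers:
            if (clf.predict(X_dsel[region]) == y_dsel[region]).all():
                return clf.predict(q)[0]          # first competent classifier
        hardest = region[np.argmax(hardness[region])]
        keep = keep[keep != hardest]              # local overlap-reduction step
    votes = [clf.predict(q)[0] for clf in classifiers]
    return np.bincount(votes).argmax()

X, y = make_classification(n_samples=400, weights=[0.85], random_state=0)
pool = BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                         n_estimators=10, random_state=0).fit(X, y)
hardness = kdn_hardness(X, y)
print(select_and_predict(X[0], X, y, hardness, pool.estimators_))
```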
Related papers
- Collaborative Feature-Logits Contrastive Learning for Open-Set Semi-Supervised Object Detection [75.02249869573994]
In open-set scenarios, the unlabeled dataset contains both in-distribution (ID) classes and out-of-distribution (OOD) classes.
Applying semi-supervised detectors in such settings can lead to misclassifying OOD classes as ID classes.
We propose a simple yet effective method, termed the Collaborative Feature-Logits Detector (CFL-Detector).
arXiv Detail & Related papers (2024-11-20T02:57:35Z)
- Confronting Discrimination in Classification: Smote Based on Marginalized Minorities in the Kernel Space for Imbalanced Data [0.0]
We propose a novel classification oversampling approach based on the decision boundary and sample proximity relationships.
We test the proposed method on a classic financial fraud dataset.
arXiv Detail & Related papers (2024-02-13T04:03:09Z)
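The entry above builds on SMOTE-style oversampling; for reference, here is a minimal sketch of the classic SMOTE interpolation step it extends (plain SMOTE only, not the paper's kernel-space, boundary-aware variant):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote(X_min, n_new, k=5, seed=0):
    """Classic SMOTE: each synthetic sample lies on the segment between
    a minority instance and one of its k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X_min).kneighbors(X_min)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = idx[i][rng.integers(1, k + 1)]   # skip position 0 (the point itself)
        out.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = np.random.default_rng(1).normal(size=(20, 2))   # toy minority cloud
print(smote(X_min, n_new=5).shape)                      # (5, 2)
```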
- Towards Fast and Stable Federated Learning: Confronting Heterogeneity via Knowledge Anchor [18.696420390977863]
This paper systematically analyzes the forgetting degree of each class during local training across different communication rounds.
Motivated by these findings, we propose a novel and straightforward algorithm called Federated Knowledge Anchor (FedKA)
arXiv Detail & Related papers (2023-12-05T01:12:56Z)
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
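The minority-majority mixing summarized above can be read as mixup-style interpolation across classes. The sketch below is one illustrative interpretation of that one-sentence summary, not the paper's actual algorithm; biasing the mixing weight toward the minority side is an assumption:

```python
import numpy as np

def mix_minority_majority(X_min, X_maj, n_new, alpha=0.75, seed=0):
    """Mixup-style oversampling: interpolate a minority and a majority
    sample, folding the Beta weight above 0.5 so synthetic points stay
    closer to the minority side of the class boundary (illustrative)."""
    rng = np.random.default_rng(seed)
    i = rng.integers(len(X_min), size=n_new)
    j = rng.integers(len(X_maj), size=n_new)
    lam = rng.beta(alpha, alpha, size=n_new)
    lam = np.maximum(lam, 1.0 - lam)[:, None]   # weight >= 0.5 on minority
    return lam * X_min[i] + (1.0 - lam) * X_maj[j]

rng = np.random.default_rng(1)
X_new = mix_minority_majority(rng.normal(size=(10, 2)),
                              rng.normal(loc=3.0, size=(90, 2)), n_new=5)
print(X_new.shape)  # (5, 2)
```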
- Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection [98.66771688028426]
We propose Ambiguity-Resistant Semi-supervised Learning (ARSL) for one-stage detectors.
Joint-Confidence Estimation (JCE) is proposed to quantify the classification and localization quality of pseudo labels.
ARSL effectively mitigates the ambiguities and achieves state-of-the-art SSOD performance on MS COCO and PASCAL VOC.
arXiv Detail & Related papers (2023-03-27T07:46:58Z)
- Boosting Few-Shot Text Classification via Distribution Estimation [38.99459686893034]
We propose two simple yet effective strategies to estimate the distributions of the novel classes by utilizing unlabeled query samples.
Specifically, we first assume a class or sample follows a Gaussian distribution, and use the original support set and the nearest few query samples to estimate it.
Then, we augment the labeled samples by sampling from the estimated distribution, which can provide sufficient supervision for training the classification model.
arXiv Detail & Related papers (2023-03-26T05:58:39Z)
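The distribution-estimation recipe above is concrete enough to sketch: fit a Gaussian to one class's support embeddings plus its nearest unlabeled queries, then sample synthetic training points. The diagonal covariance and neighbor count below are illustrative assumptions, not the paper's exact calibration:

```python
import numpy as np

def estimate_and_sample(support, queries, n_neighbors=3, n_draw=10, seed=0):
    """Fit a Gaussian to a class's support embeddings plus the unlabeled
    query embeddings nearest to their mean, then draw synthetic samples
    (diagonal covariance kept for numerical stability)."""
    rng = np.random.default_rng(seed)
    mu0 = support.mean(axis=0)
    d = np.linalg.norm(queries - mu0, axis=1)
    nearest = queries[np.argsort(d)[:n_neighbors]]        # pseudo-support
    pool = np.vstack([support, nearest])
    mu, var = pool.mean(axis=0), pool.var(axis=0) + 1e-6  # avoid zero variance
    return rng.normal(mu, np.sqrt(var), size=(n_draw, support.shape[1]))

rng = np.random.default_rng(2)
support = rng.normal(size=(5, 8))    # 5 labelled shots, 8-dim embeddings
queries = rng.normal(size=(50, 8))   # unlabeled query embeddings
print(estimate_and_sample(support, queries).shape)  # (10, 8)
```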
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We reasonably estimate intra-class variations for every class and generate adaptive synthetic samples to support hard samples mining.
Our method significantly outperforms state-of-the-art methods, improving retrieval performance by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z)
- Class-Imbalanced Complementary-Label Learning via Weighted Loss [8.934943507699131]
Complementary-label learning (CLL) is widely used in weakly supervised classification.
It faces a significant challenge in real-world datasets when confronted with class-imbalanced training samples.
We propose a novel problem setting that enables learning from class-imbalanced complementary labels for multi-class classification.
arXiv Detail & Related papers (2022-09-28T16:02:42Z)
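A complementary label marks a class a sample does not belong to; one common surrogate loss pushes probability mass off that class by minimizing -log(1 - p_comp). The sketch below adds an inverse-frequency class weight as one straightforward way to account for imbalance; it is an illustrative reading of the entry above, not the paper's exact loss:

```python
import numpy as np

def weighted_complementary_loss(logits, comp_labels, class_counts):
    """Complementary-label loss with inverse-frequency class weights.
    Weights are indexed by the complementary label here as a simple
    illustrative heuristic for class imbalance."""
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                     # row-wise softmax
    w = 1.0 / np.asarray(class_counts, dtype=float)
    w = w / w.sum() * len(class_counts)                   # mean weight = 1
    p_comp = p[np.arange(len(p)), comp_labels]            # mass on "not this" class
    return np.mean(w[comp_labels] * -np.log(1.0 - p_comp + 1e-12))

logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
print(weighted_complementary_loss(logits, np.array([1, 2]), [100, 10, 5]))
```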
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
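The prototype idea above reduces to nearest-class-mean classification: a prototype is the mean embedding of a class, prediction picks the closest prototype, and the decision stays balanced however skewed the training counts. A minimal sketch of that mechanism (not the paper's full training procedure):

```python
import numpy as np

def prototypes(emb, labels, n_classes):
    """Class prototype = mean embedding of that class's training samples."""
    return np.stack([emb[labels == c].mean(axis=0) for c in range(n_classes)])

def predict(emb, protos):
    """Assign each embedding to its nearest prototype (Euclidean distance),
    so each class gets equal say regardless of training-set skew."""
    d = np.linalg.norm(emb[:, None, :] - protos[None, :, :], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0, 1, (95, 4)), rng.normal(4, 1, (5, 4))])  # 95:5 skew
labels = np.array([0] * 95 + [1] * 5)
protos = prototypes(emb, labels, n_classes=2)
print(predict(emb[:3], protos), predict(emb[-3:], protos))  # expected: [0 0 0] [1 1 1]
```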
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
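Encouraging instances that share a class label to have similar representations is the core of supervised contrastive learning; below is a minimal numpy sketch of a standard supervised contrastive loss, not the paper's exact debiasing objective:

```python
import numpy as np

def sup_con_loss(z, labels, tau=0.5):
    """Supervised contrastive loss: same-label pairs are positives, so
    instances sharing a class label are pulled toward similar embeddings."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # project to unit sphere
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(len(z), dtype=bool)
    per_anchor = (np.where(pos, log_prob, 0.0).sum(axis=1)
                  / np.maximum(pos.sum(axis=1), 1))     # mean over positives
    return -per_anchor[pos.any(axis=1)].mean()          # skip anchors w/o positives

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))                            # toy embeddings
labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])
print(sup_con_loss(z, labels))
```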
- Beyond cross-entropy: learning highly separable feature distributions for robust and accurate classification [22.806324361016863]
We propose a novel approach for training robust deep multiclass classifiers that provides adversarial robustness.
We show that the regularization of the latent space based on our approach yields excellent classification accuracy.
arXiv Detail & Related papers (2020-10-29T11:15:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.