Soft Confusion Matrix Classifier for Stream Classification
- URL: http://arxiv.org/abs/2109.07857v1
- Date: Thu, 16 Sep 2021 10:38:35 GMT
- Title: Soft Confusion Matrix Classifier for Stream Classification
- Authors: Pawel Trajdos, Marek Kurzynski
- Abstract summary: The main goal of the work is to develop a wrapping-classifier that enables incremental learning for classifiers that are unable to learn incrementally.
The proposed approach significantly outperforms the reference methods.
- Score: 1.827510863075184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, the issue of tailoring the soft confusion matrix (SCM) based
classifier to deal with the stream learning task is addressed. The main goal of
the work is to develop a wrapping-classifier that enables incremental learning
for classifiers that are unable to learn incrementally. The goal is achieved by
making two improvements to the previously developed SCM classifier. The first
one aims at reducing the computational cost of the SCM classifier. To do so,
the definition of the fuzzy neighborhood of an object is changed. The second
one aims at dealing effectively with concept drift. This is done by employing
an ADWIN-driven concept drift detector that is used not only to detect the
drift but also to control the size of the neighborhood. The obtained
experimental results show that the proposed approach significantly outperforms
the reference methods.
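The ADWIN-driven mechanism described above (detect a change in the error stream, then shrink the data window that bounds the neighborhood size) can be sketched as follows. This is a deliberately simplified, illustrative adaptive-window detector, not the authors' implementation; the class name, thresholds, and Hoeffding-style cut rule are assumptions for the sketch:

```python
import math
from collections import deque


class SimpleAdaptiveWindow:
    """Simplified ADWIN-style detector (illustrative sketch only).

    Keeps a window of recent error indicators; when the means of two
    sub-windows differ beyond a Hoeffding-style bound, the older
    sub-window is dropped. The surviving window width is the quantity
    that could bound a wrapped classifier's neighborhood size.
    """

    def __init__(self, delta=0.002):
        self.delta = delta      # confidence parameter of the cut test
        self.window = deque()

    def update(self, value):
        """Add one observation (e.g. a 0/1 error); return True on drift."""
        self.window.append(value)
        n = len(self.window)
        if n < 10:              # too little data to test reliably
            return False
        data = list(self.window)
        total = sum(data)
        left_sum = 0.0
        for i in range(1, n):   # try every split point
            left_sum += data[i - 1]
            n0, n1 = i, n - i
            if n0 < 5 or n1 < 5:
                continue
            mean0 = left_sum / n0
            mean1 = (total - left_sum) / n1
            # Hoeffding-style cut threshold on the harmonic sample size.
            m = 1.0 / (1.0 / n0 + 1.0 / n1)
            eps = math.sqrt((1.0 / (2 * m)) * math.log(4.0 * n / self.delta))
            if abs(mean0 - mean1) > eps:
                for _ in range(n0):     # drop the older sub-window
                    self.window.popleft()
                return True
        return False

    @property
    def width(self):
        return len(self.window)
```

Feeding such a detector a stream whose error rate jumps (say, 200 zeros followed by 200 ones) flags a drift shortly after the change and discards the pre-drift portion of the window, so `width` shrinks back to only post-drift data.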
Related papers
- Enhancing Adaptive Deep Networks for Image Classification via Uncertainty-aware Decision Fusion [27.117531006305974]
We introduce the Collaborative Decision Making (CDM) module to enhance the inference performance of adaptive deep networks.
CDM incorporates an uncertainty-aware fusion method based on evidential deep learning (EDL) that utilizes the reliability (uncertainty values) from the first c-1 classifiers.
We also design a balance term that reduces fusion saturation and unfairness issues caused by EDL constraints to improve the fusion quality of CDM.
arXiv Detail & Related papers (2024-08-25T07:08:58Z)
- Continual-MAE: Adaptive Distribution Masked Autoencoders for Continual Test-Time Adaptation [49.827306773992376]
Continual Test-Time Adaptation (CTTA) is proposed to migrate a source pre-trained model to continually changing target distributions.
Our proposed method attains state-of-the-art performance in both classification and segmentation CTTA tasks.
arXiv Detail & Related papers (2023-12-19T15:34:52Z)
- MultIOD: Rehearsal-free Multihead Incremental Object Detector [17.236182938227163]
We propose MultIOD, a class-incremental object detector based on CenterNet.
We employ transfer learning between classes learned initially and those learned incrementally to tackle catastrophic forgetting.
Results show that our method outperforms state-of-the-art methods on two Pascal VOC datasets.
arXiv Detail & Related papers (2023-09-11T09:32:45Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Multiple Classifiers Based Maximum Classifier Discrepancy for Unsupervised Domain Adaptation [25.114533037440896]
We propose to extend the structure of two classifiers to multiple classifiers to further boost its performance.
We demonstrate that, on average, adopting the structure of three classifiers normally yields the best performance as a trade-off between the accuracy and efficiency.
arXiv Detail & Related papers (2021-08-02T03:00:13Z)
- MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
arXiv Detail & Related papers (2021-07-23T06:57:08Z)
- Aligning Pretraining for Detection via Object-Level Contrastive Learning [57.845286545603415]
Image-level contrastive representation learning has proven to be highly effective as a generic model for transfer learning.
We argue that this could be sub-optimal and thus advocate a design principle which encourages alignment between the self-supervised pretext task and the downstream task.
Our method, called Selective Object COntrastive learning (SoCo), achieves state-of-the-art results for transfer performance on COCO detection.
arXiv Detail & Related papers (2021-06-04T17:59:52Z)
- Minimum Variance Embedded Auto-associative Kernel Extreme Learning Machine for One-class Classification [1.4146420810689422]
VAAKELM is a novel extension of an auto-associative kernel extreme learning machine.
It embeds minimum variance information within its architecture and reduces the intra-class variance.
It follows a reconstruction-based approach to one-class classification and minimizes the reconstruction error.
arXiv Detail & Related papers (2020-11-24T17:00:30Z)
- Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED)
TRED disentangles the relevant knowledge with respect to the target task from the original source model and uses it as a regularizer when fine-tuning the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Analyzing the Real-World Applicability of DGA Classifiers [3.0969191504482243]
We propose a novel classifier for separating benign domains from domains generated by DGAs.
We evaluate their classification performance and compare them with respect to explainability, robustness, and training and classification speed.
Our newly proposed binary classifier generalizes well to other networks, is time-robust, and able to identify previously unknown DGAs.
arXiv Detail & Related papers (2020-06-19T12:34:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.