One vs Previous and Similar Classes Learning -- A Comparative Study
- URL: http://arxiv.org/abs/2101.01294v1
- Date: Tue, 5 Jan 2021 00:28:38 GMT
- Title: One vs Previous and Similar Classes Learning -- A Comparative Study
- Authors: Daniel Cauchi, Adrian Muscat
- Abstract summary: This work proposes three learning paradigms which allow trained models to be updated without the need for retraining from scratch.
Results show that the proposed paradigms are faster than the baseline at updating, with two of them being faster at training from scratch as well, especially on larger datasets.
- Score: 2.208242292882514
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When dealing with multi-class classification problems, it is common practice
to build a model consisting of a series of binary classifiers using a learning
paradigm which dictates how the classifiers are built and combined to
discriminate between the individual classes. As new data enters the system and
the model needs updating, these models would often need to be retrained from
scratch. This work proposes three learning paradigms which allow trained models
to be updated without the need for retraining from scratch. A comparative
analysis is performed to evaluate them against a baseline. Results show that
the proposed paradigms are faster than the baseline at updating, with two of
them being faster at training from scratch as well, especially on larger
datasets, while retaining a comparable classification performance.
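The abstract describes the usual setup: a multi-class model built from a series of per-class binary classifiers that normally has to be retrained from scratch when new classes arrive. As a minimal, self-contained sketch of that setting only, the snippet below keeps one binary classifier per class and adds a new class by fitting just that class's classifier against previously seen data; the `UpdatableOneVsRest` name, the scikit-learn base learner, and the `add_class` update rule are illustrative assumptions, not the paper's One-vs-Previous or One-vs-Similar paradigms.

```python
# Minimal sketch of an updatable ensemble of per-class binary classifiers.
# The "fit only the new class's classifier" update rule is an illustrative
# assumption, not the exact paradigm evaluated in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression


class UpdatableOneVsRest:
    def __init__(self):
        self.classifiers = {}   # class label -> fitted binary classifier
        self.seen_X = None      # all training data seen so far (numpy array)
        self.seen_y = None

    def _fit_binary(self, label, X, y):
        # One binary classifier: "label" vs all other data it is given.
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X, (y == label).astype(int))
        self.classifiers[label] = clf

    def fit(self, X, y):
        # Baseline behaviour: train every per-class classifier from scratch.
        self.seen_X, self.seen_y = X, y
        for label in np.unique(y):
            self._fit_binary(label, X, y)

    def add_class(self, X_new, new_label):
        # Update: fit only the new class's classifier against previous data;
        # the existing classifiers are left untouched.
        X = np.vstack([self.seen_X, X_new])
        y = np.concatenate([self.seen_y, np.full(len(X_new), new_label)])
        self._fit_binary(new_label, X, y)
        self.seen_X, self.seen_y = X, y

    def predict(self, X):
        # Predict the class whose binary classifier is most confident.
        labels = list(self.classifiers)
        scores = np.column_stack(
            [self.classifiers[label].decision_function(X) for label in labels]
        )
        return np.asarray(labels)[scores.argmax(axis=1)]
```

Note that with this naive update rule the older classifiers never see the new class as a negative example; balancing that kind of shortcut against classification quality is the speed-versus-accuracy trade-off the paper's comparative analysis evaluates.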
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z) - Simple-Sampling and Hard-Mixup with Prototypes to Rebalance Contrastive Learning for Text Classification [11.072083437769093]
We propose a novel model named SharpReCL for imbalanced text classification tasks.
Our model even outperforms popular large language models across several datasets.
arXiv Detail & Related papers (2024-05-19T11:33:49Z) - Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple
Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z) - GMM-IL: Image Classification using Incrementally Learnt, Independent
Probabilistic Models for Small Sample Sizes [0.4511923587827301]
We present a novel two-stage architecture which couples visual feature learning with probabilistic models to represent each class.
We outperform a benchmark of an equivalent network with a Softmax head, obtaining increased accuracy for sample sizes smaller than 12 and increased weighted F1 score for 3 imbalanced class profiles.
arXiv Detail & Related papers (2022-12-01T15:19:42Z) - Are Deep Sequence Classifiers Good at Non-Trivial Generalization? [4.941630596191806]
We study binary sequence classification problems and we look at model calibration from a different perspective.
We focus on sparse sequence classification, that is, problems in which the target class is rare, and compare three deep learning sequence classification models.
Our results suggest that in this binary setting the deep-learning models are indeed able to learn the underlying class distribution in a non-trivial manner.
arXiv Detail & Related papers (2022-10-24T10:01:06Z) - Multi-Granularity Regularized Re-Balancing for Class Incremental
Learning [32.52884416761171]
Deep learning models suffer from catastrophic forgetting when learning new tasks.
Data imbalance between old and new classes is a key issue that leads to performance degradation of the model.
We propose an assumption-agnostic method, Multi-Granularity Regularized re-Balancing, to address this problem.
arXiv Detail & Related papers (2022-06-30T11:04:51Z) - Revisiting the Updates of a Pre-trained Model for Few-shot Learning [11.871523410051527]
We compare the two popular updating methods, fine-tuning and linear probing.
We find that fine-tuning is better than linear probing as the number of samples increases.
arXiv Detail & Related papers (2022-05-13T08:47:06Z) - Class-Incremental Learning with Strong Pre-trained Models [97.84755144148535]
Class-incremental learning (CIL) has been widely studied under the setting of starting from a small number of classes (base classes).
We explore an understudied real-world setting of CIL that starts with a strong model pre-trained on a large number of base classes.
Our proposed method is robust and generalizes to all analyzed CIL settings.
arXiv Detail & Related papers (2022-04-07T17:58:07Z) - Mimicking the Oracle: An Initial Phase Decorrelation Approach for Class Incremental Learning [141.35105358670316]
We study the difference between a naïvely-trained initial-phase model and the oracle model.
We propose Class-wise Decorrelation (CwD) that effectively regularizes representations of each class to scatter more uniformly.
Our CwD is simple to implement and easy to plug into existing methods.
arXiv Detail & Related papers (2021-12-09T07:20:32Z) - Learning and Evaluating Representations for Deep One-class
Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z) - Learning Adaptive Embedding Considering Incremental Class [55.21855842960139]
Class-Incremental Learning (CIL) aims to train a reliable model on streaming data in which unknown classes emerge sequentially.
Different from traditional closed-set learning, CIL has two main challenges: 1) novel class detection, and 2) model update: after the novel classes are detected, the model needs to be updated without retraining on the entire previous data (a minimal sketch of these two steps follows this list).
arXiv Detail & Related papers (2020-08-31T04:11:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.