Class-wise Classifier Design Capable of Continual Learning using
Adaptive Resonance Theory-based Topological Clustering
- URL: http://arxiv.org/abs/2203.09879v2
- Date: Thu, 21 Sep 2023 05:46:01 GMT
- Title: Class-wise Classifier Design Capable of Continual Learning using
Adaptive Resonance Theory-based Topological Clustering
- Authors: Naoki Masuyama, Yusuke Nojima, Farhan Dawood, Zongying Liu
- Abstract summary: This paper proposes a supervised classification algorithm capable of continual learning by utilizing an Adaptive Resonance Theory (ART)-based growing self-organizing clustering algorithm.
The ART-based clustering algorithm is theoretically capable of continual learning.
The proposed algorithm has superior classification performance compared with state-of-the-art clustering-based classification algorithms capable of continual learning.
- Score: 4.772368796656325
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper proposes a supervised classification algorithm capable of
continual learning by utilizing an Adaptive Resonance Theory (ART)-based
growing self-organizing clustering algorithm. The ART-based clustering
algorithm is theoretically capable of continual learning, and the proposed
algorithm independently applies it to each class of training data for
generating classifiers. Whenever a training data set from a new class arrives, a
new ART-based clustering is defined in a separate learning space. These
features give the proposed algorithm its continual learning capability.
Simulation experiments showed that the
proposed algorithm has superior classification performance compared with
state-of-the-art clustering-based classification algorithms capable of
continual learning.
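The class-wise design above can be illustrated with a simplified stand-in: one growing prototype-based model per class, where a "vigilance"-style distance threshold decides whether an input updates an existing prototype or creates a new one. This is a hypothetical sketch of the design pattern, not the paper's actual ART-based clustering algorithm; the class name, the threshold, and the update rate are illustrative assumptions.

```python
import numpy as np

class ClassWiseClassifier:
    """Per-class growing prototype models: a simplified stand-in for the
    class-wise ART-based design (not the actual ART algorithm)."""

    def __init__(self, vigilance=1.0):
        self.vigilance = vigilance  # max distance before a new prototype is created
        self.models = {}            # class label -> array of prototypes

    def partial_fit(self, X, y):
        # Each class is trained in its own learning space, so data from a
        # newly arriving class never interferes with already-learned classes.
        for x, label in zip(X, y):
            protos = self.models.get(label)
            if protos is None:
                self.models[label] = np.array([x], dtype=float)
                continue
            d = np.linalg.norm(protos - x, axis=1)
            j = int(np.argmin(d))
            if d[j] <= self.vigilance:
                protos[j] = 0.9 * protos[j] + 0.1 * x           # nudge nearest prototype
            else:
                self.models[label] = np.vstack([protos, x])     # grow a new prototype

    def predict(self, X):
        # Assign each input to the class whose nearest prototype is closest.
        labels = list(self.models)
        out = []
        for x in X:
            dists = [np.linalg.norm(self.models[c] - x, axis=1).min() for c in labels]
            out.append(labels[int(np.argmin(dists))])
        return np.array(out)
```

Because every class owns its own prototype set, presenting a new class later only adds a new model and leaves the old ones untouched, which is the essence of the continual learning claim.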
Related papers
- A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP.
arXiv Detail & Related papers (2024-02-06T15:45:27Z)
- Privacy-preserving Continual Federated Clustering via Adaptive Resonance Theory [11.190614418770558]
In the clustering domain, various algorithms with a federated learning framework (i.e., federated clustering) have been actively studied.
This paper proposes a privacy-preserving continual federated clustering algorithm.
Experimental results with synthetic and real-world datasets show that the proposed algorithm has superior clustering performance.
arXiv Detail & Related papers (2023-09-07T05:45:47Z)
- A Parameter-free Adaptive Resonance Theory-based Topological Clustering Algorithm Capable of Continual Learning [20.995946115633963]
We propose a new parameter-free ART-based topological clustering algorithm capable of continual learning by introducing parameter estimation methods.
Experimental results with synthetic and real-world datasets show that the proposed algorithm has superior clustering performance to the state-of-the-art clustering algorithms without any parameter pre-specifications.
arXiv Detail & Related papers (2023-05-01T01:04:07Z)
- Anomaly Detection using Ensemble Classification and Evidence Theory [62.997667081978825]
We present a novel approach for anomaly detection using ensemble classification and evidence theory.
A pool selection strategy is presented to build a solid ensemble classifier.
Uncertainty estimates from the ensemble drive the anomaly detection approach.
arXiv Detail & Related papers (2022-12-23T00:50:41Z)
- Towards Diverse Evaluation of Class Incremental Learning: A Representation Learning Perspective [67.45111837188685]
Class incremental learning (CIL) algorithms aim to continually learn new object classes from incrementally arriving data.
We experimentally analyze neural network models trained by CIL algorithms using various evaluation protocols in representation learning.
arXiv Detail & Related papers (2022-06-16T11:44:11Z)
- Using Representation Expressiveness and Learnability to Evaluate Self-Supervised Learning Methods [61.49061000562676]
We introduce Cluster Learnability (CL) to assess learnability.
CL is measured in terms of the performance of a KNN trained to predict labels obtained by clustering the representations with K-means.
We find that CL better correlates with in-distribution model performance than other competing recent evaluation schemes.
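The CL metric described above can be sketched in a few lines of NumPy: K-means pseudo-labels the pooled representations, then a nearest-neighbor classifier fit on the train split must predict the test split's pseudo-labels. This sketch assumes 1-NN and a farthest-point K-means initialisation for simplicity; the paper's actual KNN settings and K-means details are not given here, and the 2-D synthetic inputs stand in for real learned representations.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's K-means with farthest-point initialisation."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # pick the point farthest from all current centers
        d = np.min(((X[:, None] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def cluster_learnability(train_X, test_X, k):
    """CL score: accuracy of a 1-NN classifier, fit on the train split,
    at predicting K-means pseudo-labels of the test split."""
    X = np.vstack([train_X, test_X])
    pseudo = kmeans(X, k)
    y_train, y_test = pseudo[:len(train_X)], pseudo[len(train_X):]
    # 1-NN prediction: label of the nearest training representation
    d = ((test_X[:, None] - train_X[None]) ** 2).sum(-1)
    pred = y_train[np.argmin(d, axis=1)]
    return float((pred == y_test).mean())
```

Intuitively, representations whose cluster structure generalises from train to test score near 1.0, while representations with no learnable structure score near chance.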
arXiv Detail & Related papers (2022-06-02T19:05:13Z)
- Adaptive Resonance Theory-based Topological Clustering with a Divisive Hierarchical Structure Capable of Continual Learning [8.581682204722894]
This paper proposes an ART-based topological clustering algorithm with a mechanism that automatically estimates a similarity threshold from a distribution of data points.
To improve information extraction performance, a divisive hierarchical clustering algorithm capable of continual learning is proposed.
arXiv Detail & Related papers (2022-01-26T02:34:52Z)
- Multi-label Classification via Adaptive Resonance Theory-based Clustering [9.58897929546191]
The paper proposes a multi-label classification algorithm capable of continual learning by applying an Adaptive Resonance Theory (ART)-based clustering algorithm and the Bayesian approach for label probability computation.
Experimental results with synthetic and real-world multi-label datasets show that the proposed algorithm has competitive classification performance to other well-known algorithms.
arXiv Detail & Related papers (2021-03-02T06:51:41Z)
- CAC: A Clustering Based Framework for Classification [20.372627144885158]
We design a simple, efficient, and generic framework called Classification Aware Clustering (CAC).
Our experiments on synthetic and real benchmark datasets demonstrate the efficacy of CAC over previous methods for combined clustering and classification.
arXiv Detail & Related papers (2021-02-23T18:59:39Z)
- Network Classifiers Based on Social Learning [71.86764107527812]
We propose a new way of combining independently trained classifiers over space and time.
The proposed architecture is able to improve prediction performance over time with unlabeled data.
We show that this strategy results in consistent learning with high probability, and it yields a robust structure against poorly trained classifiers.
arXiv Detail & Related papers (2020-10-23T11:18:20Z)
- Reinforcement Learning as Iterative and Amortised Inference [62.997667081978825]
We use the control as inference framework to outline a novel classification scheme based on amortised and iterative inference.
We show that taking this perspective allows us to identify parts of the algorithmic design space which have been relatively unexplored.
arXiv Detail & Related papers (2020-06-13T16:10:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.