Incremental cluster validity index-guided online learning for
performance and robustness to presentation order
- URL: http://arxiv.org/abs/2108.07743v1
- Date: Tue, 17 Aug 2021 16:24:25 GMT
- Title: Incremental cluster validity index-guided online learning for
performance and robustness to presentation order
- Authors: Leonardo Enzo Brito da Silva, Nagasharath Rayapati, Donald C. Wunsch II
- Abstract summary: This work introduces the first adaptive resonance theory (ART)-based model that uses iCVIs for unsupervised and semi-supervised online learning.
It also shows for the first time how to use iCVIs to regulate ART vigilance via an iCVI-based match tracking mechanism.
The model achieves improved accuracy and robustness to ordering effects by integrating an online iCVI framework as module B of a topological adaptive resonance theory predictive mapping (TopoARTMAP).
- Score: 1.7403133838762446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In streaming data applications, incoming samples are processed and then
discarded; therefore, intelligent decision-making is crucial for the performance of
lifelong learning systems. In addition, the order in which samples arrive may
heavily affect the performance of online (and offline) incremental learners.
The recently introduced incremental cluster validity indices (iCVIs) provide
valuable aid in addressing this class of problems. Their primary use-case has
been cluster quality monitoring; nonetheless, they have been very recently
integrated into a streaming clustering method to assist the clustering task
itself. In this context, the work presented here introduces the first adaptive
resonance theory (ART)-based model that uses iCVIs for unsupervised and
semi-supervised online learning. Moreover, it shows for the first time how to
use iCVIs to regulate ART vigilance via an iCVI-based match tracking mechanism.
The model achieves improved accuracy and robustness to ordering effects by
integrating an online iCVI framework as module B of a topological adaptive
resonance theory predictive mapping (TopoARTMAP) -- thereby being named
iCVI-TopoARTMAP -- and by employing iCVI-driven post-processing heuristics at
the end of each learning step. The online iCVI framework provides assignments
of input samples to clusters at each iteration in accordance with any of several
iCVIs. The iCVI-TopoARTMAP maintains useful properties shared by ARTMAP models,
such as stability, immunity to catastrophic forgetting, and the many-to-one
mapping capability via the map field module. The performance (unsupervised and
semi-supervised) and robustness to presentation order (unsupervised) of
iCVI-TopoARTMAP were evaluated via experiments with a synthetic data set and
deep embeddings of a real-world face image data set.
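The abstract describes the mechanism only at a high level. As a rough, non-authoritative illustration of the general idea of letting an incremental validity criterion guide online cluster assignment under a vigilance-style threshold, the sketch below uses a toy incremental sum-of-squares criterion in place of a genuine iCVI; the class name `ICVIGuidedClusterer`, the `partial_fit` interface, and the vigilance test are illustrative assumptions and are not taken from the authors' iCVI-TopoARTMAP.
```python
# Illustrative sketch only (NOT the authors' iCVI-TopoARTMAP): a toy incremental
# sum-of-squares criterion stands in for a real iCVI such as the incremental
# Calinski-Harabasz index.
import numpy as np


class ICVIGuidedClusterer:
    """Online assignment guided by an incremental validity criterion."""

    def __init__(self, vigilance=1.0):
        self.vigilance = vigilance        # vigilance-like threshold on the criterion
        self.means, self.counts = [], []  # running per-cluster statistics

    def _sse_increase(self, x, k):
        # Incremental identity: adding x to cluster k raises its within-cluster
        # SSE by n_k / (n_k + 1) * ||x - mu_k||^2, so no batch recomputation is needed.
        n, mu = self.counts[k], self.means[k]
        return n / (n + 1.0) * np.sum((x - mu) ** 2)

    def partial_fit(self, x):
        x = np.asarray(x, dtype=float)
        if self.means:
            deltas = [self._sse_increase(x, k) for k in range(len(self.means))]
            best = int(np.argmin(deltas))
            # Accept the best cluster only if the criterion degrades by less than
            # the vigilance threshold; otherwise fall through to a new cluster.
            if deltas[best] <= self.vigilance:
                n, mu = self.counts[best], self.means[best]
                self.means[best] = mu + (x - mu) / (n + 1)  # incremental mean update
                self.counts[best] = n + 1
                return best
        self.means.append(x.copy())
        self.counts.append(1)
        return len(self.means) - 1


rng = np.random.default_rng(0)
stream = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
model = ICVIGuidedClusterer(vigilance=0.5)
labels = [model.partial_fit(x) for x in stream]
print(len(model.means), "clusters found")  # expected: 2
```
In the actual model, a genuine iCVI plays this role inside module B of the ARTMAP architecture, and the iCVI-based match tracking mechanism described in the abstract regulates the vigilance.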
Related papers
- Eigen-Cluster VIS: Improving Weakly-supervised Video Instance Segmentation by Leveraging Spatio-temporal Consistency [9.115508086522887]
This work introduces a novel weakly-supervised method called Eigen-cluster VIS.
It achieves competitive accuracy compared to other VIS approaches without requiring any mask annotations.
It is evaluated on the YouTube-VIS 2019/2021 and OVIS datasets.
arXiv Detail & Related papers (2024-08-29T16:05:05Z)
- What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights [67.72413262980272]
Severe data imbalance naturally exists among web-scale vision-language datasets.
We find CLIP pre-trained thereupon exhibits notable robustness to the data imbalance compared to supervised learning.
The robustness and discriminability of CLIP improve with more descriptive language supervision, larger data scale, and broader open-world concepts.
arXiv Detail & Related papers (2024-05-31T17:57:24Z)
- Community-Aware Efficient Graph Contrastive Learning via Personalized Self-Training [27.339318501446115]
We propose a Community-aware Efficient Graph Contrastive Learning Framework (CEGCL) to jointly learn community partition and node representations in an end-to-end manner.
We show that our CEGCL exhibits state-of-the-art performance on three benchmark datasets with different scales.
arXiv Detail & Related papers (2023-11-18T13:45:21Z)
- ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Learning Deep Representations via Contrastive Learning for Instance Retrieval [11.736450745549792]
This paper makes the first attempt to tackle the problem using instance-discrimination-based contrastive learning (CL).
In this work, we approach this problem by exploring the capability of deriving discriminative representations from pre-trained and fine-tuned CL models.
arXiv Detail & Related papers (2022-09-28T04:36:34Z)
- When CNN Meet with ViT: Towards Semi-Supervised Learning for Multi-Class Medical Image Semantic Segmentation [13.911947592067678]
In this paper, an advanced consistency-aware pseudo-label-based self-ensembling approach is presented.
Our framework consists of a feature-learning module which is enhanced by ViT and CNN mutually, and a guidance module which is robust for consistency-aware purposes.
Experimental results show that the proposed method achieves state-of-the-art performance on a public benchmark data set.
arXiv Detail & Related papers (2022-08-12T18:21:22Z)
- Using Representation Expressiveness and Learnability to Evaluate Self-Supervised Learning Methods [61.49061000562676]
We introduce Cluster Learnability (CL) to assess learnability.
CL is measured in terms of the performance of a KNN trained to predict labels obtained by clustering the representations with K-means.
We find that CL better correlates with in-distribution model performance than other competing recent evaluation schemes (a toy sketch of the CL protocol follows this entry).
arXiv Detail & Related papers (2022-06-02T19:05:13Z)
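A toy reading of the Cluster Learnability protocol summarized above (my own sketch under stated assumptions, not the authors' released code): cluster the representations with K-means, then report how well a KNN can re-predict those pseudo-labels on held-out points.
```python
# Toy reading of Cluster Learnability (CL); function name and defaults are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def cluster_learnability(reps, n_clusters=10, n_neighbors=5, seed=0):
    # Pseudo-labels from K-means over the learned representations.
    pseudo = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(reps)
    # Learnability = held-out accuracy of a KNN trained to predict those labels.
    tr_x, te_x, tr_y, te_y = train_test_split(reps, pseudo, test_size=0.5, random_state=seed)
    return KNeighborsClassifier(n_neighbors=n_neighbors).fit(tr_x, tr_y).score(te_x, te_y)


reps = np.random.default_rng(0).normal(size=(500, 32))  # stand-in for SSL embeddings
print(cluster_learnability(reps, n_clusters=5))
```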
- Self-Supervised Models are Continual Learners [79.70541692930108]
We show that self-supervised loss functions can be seamlessly converted into distillation mechanisms for Continual Learning.
We devise a framework for Continual self-supervised visual representation Learning that significantly improves the quality of the learned representations.
arXiv Detail & Related papers (2021-12-08T10:39:13Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
- iCVI-ARTMAP: Accelerating and improving clustering using adaptive resonance theory predictive mapping and incremental cluster validity indices [1.160208922584163]
iCVI-ARTMAP uses incremental cluster validity indices (iCVIs) to perform unsupervised learning.
It can achieve running times up to two orders of magnitude shorter than when using batch CVI computations.
arXiv Detail & Related papers (2020-08-22T19:37:01Z)
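The speed-up reported for iCVI-ARTMAP comes from updating a cluster validity index incrementally from running statistics instead of recomputing it over all stored samples at every step. A minimal illustration of that idea, using an incrementally maintained Calinski-Harabasz index (my own sketch, not the authors' code; the class and method names are hypothetical):
```python
# Hypothetical illustration (not the iCVI-ARTMAP code): maintaining the
# Calinski-Harabasz index from running statistics so each update costs
# O(K*d) instead of recomputing over all N stored samples.
import numpy as np
from sklearn.metrics import calinski_harabasz_score  # batch reference


class IncrementalCH:
    def __init__(self, dim):
        self.n = 0
        self.grand_mean = np.zeros(dim)
        self.counts, self.means, self.wgss = {}, {}, {}  # per-cluster statistics

    def update(self, x, label):
        x = np.asarray(x, dtype=float)
        if label not in self.counts:
            self.counts[label], self.means[label], self.wgss[label] = 0, np.zeros_like(x), 0.0
        n_k, mu_k = self.counts[label], self.means[label]
        # Within-cluster scatter grows by n_k/(n_k+1) * ||x - mu_k||^2 (exact identity).
        self.wgss[label] += n_k / (n_k + 1.0) * np.sum((x - mu_k) ** 2)
        self.means[label] = mu_k + (x - mu_k) / (n_k + 1)
        self.counts[label] = n_k + 1
        self.grand_mean += (x - self.grand_mean) / (self.n + 1)
        self.n += 1

    def value(self):
        k = len(self.counts)
        if k < 2 or self.n <= k:
            return 0.0
        between = sum(c * np.sum((m - self.grand_mean) ** 2)
                      for c, m in zip(self.counts.values(), self.means.values()))
        within = sum(self.wgss.values())
        return (between / (k - 1)) / (within / (self.n - k))


rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (200, 8)), rng.normal(2, 0.2, (200, 8))])
y = np.array([0] * 200 + [1] * 200)
ich = IncrementalCH(dim=8)
for xi, yi in zip(X, y):
    ich.update(xi, yi)  # cheap per-sample update
print(round(ich.value(), 2), round(calinski_harabasz_score(X, y), 2))  # values agree
```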
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.