An Ensemble Semi-Supervised Adaptive Resonance Theory Model with
Explanation Capability for Pattern Classification
- URL: http://arxiv.org/abs/2305.14373v1
- Date: Fri, 19 May 2023 20:20:44 GMT
- Title: An Ensemble Semi-Supervised Adaptive Resonance Theory Model with
Explanation Capability for Pattern Classification
- Authors: Farhad Pourpanah and Chee Peng Lim and Ali Etemad and Q. M. Jonathan
Wu
- Abstract summary: This paper proposes a new interpretable SSL model using the supervised and unsupervised Adaptive Resonance Theory (ART) family of networks.
The main advantages of SSL-ART include the capability of performing online learning and reducing the number of redundant prototype nodes.
A weighted voting strategy is introduced to form an ensemble SSL-ART model, which is denoted as WESSL-ART.
- Score: 41.35711585943589
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most semi-supervised learning (SSL) models entail complex structures and
iterative training processes, and face difficulties in explaining their
predictions to users. To address these issues, this paper proposes a new
interpretable SSL model using the supervised and unsupervised Adaptive
Resonance Theory (ART) family of networks, which is denoted as SSL-ART.
Firstly, SSL-ART adopts an unsupervised fuzzy ART network to create a number of
prototype nodes using unlabeled samples. Then, it leverages a supervised fuzzy
ARTMAP structure to map the established prototype nodes to the target classes
using labeled samples. Specifically, a one-to-many (OtM) mapping scheme is
devised to associate a prototype node with more than one class label. The main
advantages of SSL-ART include the capability of: (i) performing online
learning, (ii) reducing the number of redundant prototype nodes through the OtM
mapping scheme and minimizing the effects of noisy samples, and (iii) providing
an explanation facility for users to interpret the predicted outcomes. In
addition, a weighted voting strategy is introduced to form an ensemble SSL-ART
model, which is denoted as WESSL-ART. Every ensemble member, i.e., SSL-ART,
assigns a different weight to each class based on its
performance pertaining to the corresponding class. The aim is to mitigate the
effects of training data sequences on all SSL-ART members and improve the
overall performance of WESSL-ART. The experimental results on eighteen
benchmark data sets, three artificially generated data sets, and a real-world
case study indicate the benefits of the proposed SSL-ART and WESSL-ART models
for tackling pattern classification problems.
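
To make the first stage concrete, the following is a minimal sketch of the fuzzy ART dynamics that SSL-ART builds on (complement coding, choice function, vigilance test, fast learning), with a simple one-to-many (OtM) label table attached to each prototype node. The parameter values and the otm_labels bookkeeping are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of fuzzy ART prototype learning with an OtM label table.
# Parameters (rho, alpha, beta) and the label bookkeeping are assumptions for
# illustration; consult the paper for the exact SSL-ART formulation.
import numpy as np

class FuzzyART:
    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta  # vigilance, choice, learning rate
        self.w = []           # prototype weight vectors (complement-coded)
        self.otm_labels = []  # set of class labels per prototype (OtM mapping)

    @staticmethod
    def _code(x):
        # Complement coding: x in [0, 1]^d becomes [x, 1 - x], so |I| = d.
        x = np.asarray(x, dtype=float)
        return np.concatenate([x, 1.0 - x])

    def learn(self, x, label=None):
        I = self._code(x)
        # Rank committed nodes by the choice function T_j = |I ^ w_j| / (alpha + |w_j|).
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(I, self.w[j]).sum()
                                      / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(I, self.w[j]).sum() / I.sum()
            if match >= self.rho:  # vigilance test passed: resonance
                self.w[j] = (self.beta * np.minimum(I, self.w[j])
                             + (1.0 - self.beta) * self.w[j])
                if label is not None:
                    self.otm_labels[j].add(label)  # a node may hold several labels
                return j
        # No existing node resonates: commit a new prototype.
        self.w.append(I.copy())
        self.otm_labels.append({label} if label is not None else set())
        return len(self.w) - 1
```

Unlabeled samples would call learn(x) to shape the prototypes; labeled samples call learn(x, label), so a node that resonates with samples from more than one class accumulates multiple labels, which is the essence of the OtM mapping.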
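The WESSL-ART voting rule can be sketched similarly. The snippet below assumes each member's weight for a class is its per-class validation recall, and that each member exposes a predict method returning a class index; the paper's exact weighting formula may differ.

```python
# Hypothetical per-class weighted voting in the spirit of WESSL-ART.
# The recall-based weights are an assumption, not the paper's exact rule.
import numpy as np

def per_class_weights(members, X_val, y_val, n_classes):
    # One weight per (member, class): the member's recall on that class.
    W = np.zeros((len(members), n_classes))
    for m, clf in enumerate(members):
        preds = np.array([clf.predict(x) for x in X_val])  # assumed interface
        for c in range(n_classes):
            mask = (y_val == c)
            W[m, c] = (preds[mask] == c).mean() if mask.any() else 0.0
    return W

def weighted_vote(members, W, x, n_classes):
    # Each member votes for one class; the vote counts with that member's
    # weight for the class it chose.
    scores = np.zeros(n_classes)
    for m, clf in enumerate(members):
        c = clf.predict(x)
        scores[c] += W[m, c]
    return int(np.argmax(scores))
```

Because each SSL-ART member sees the training data in a different order, its per-class reliability differs; weighting votes by that reliability is what the abstract describes as mitigating the effect of training data sequences.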
Related papers
- Few-Shot Inspired Generative Zero-Shot Learning [14.66239393852298]
Generative zero-shot learning (ZSL) methods typically synthesize visual features for unseen classes.
We propose FSIGenZ, a few-shot-inspired generative ZSL framework that reduces reliance on large-scale feature synthesis.
Experiments on SUN, AwA2, and CUB benchmarks demonstrate that FSIGenZ achieves competitive performance using far fewer synthetic features.
arXiv Detail & Related papers (2025-06-18T02:39:36Z) - SSLR: A Semi-Supervised Learning Method for Isolated Sign Language Recognition [2.409285779772107]
Sign language recognition (SLR) systems aim to recognize sign gestures and translate them into spoken language.
One of the main challenges in SLR is the scarcity of annotated datasets.
We propose a semi-supervised learning approach for SLR, employing a pseudo-label method to annotate unlabeled samples (a generic self-training sketch appears after this list).
arXiv Detail & Related papers (2025-04-23T11:59:52Z) - Unbiased Max-Min Embedding Classification for Transductive Few-Shot Learning: Clustering and Classification Are All You Need [83.10178754323955]
Few-shot learning enables models to generalize from only a few labeled examples.
We propose the Unbiased Max-Min Embedding Classification (UMMEC) Method, which addresses the key challenges in few-shot learning.
Our method significantly improves classification performance with minimal labeled data, advancing the state of the art in transductive few-shot learning.
arXiv Detail & Related papers (2025-03-28T07:23:07Z) - A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification [51.35500308126506]
Self-supervised learning (SSL) is a machine learning approach where the data itself provides supervision, eliminating the need for external labels.
We study how classification-based evaluation protocols for SSL correlate and how well they predict downstream performance on different dataset types.
arXiv Detail & Related papers (2024-07-16T23:17:36Z) - Towards Generic Semi-Supervised Framework for Volumetric Medical Image
Segmentation [19.09640071505051]
We develop a generic SSL framework to handle settings such as UDA and SemiDG.
We evaluate our proposed framework on four benchmark datasets for SSL, Class-imbalanced SSL, UDA and SemiDG.
The results showcase notable improvements compared to state-of-the-art methods across all four settings.
arXiv Detail & Related papers (2023-10-17T14:58:18Z) - Deciphering the Projection Head: Representation Evaluation
Self-supervised Learning [6.375931203397043]
Self-supervised learning (SSL) aims to learn intrinsic features without labels.
The projection head plays an important role in improving the performance of downstream tasks.
We propose a Representation Evaluation Design (RED) in SSL models in which a shortcut connection between the representation and the projection vectors is built.
arXiv Detail & Related papers (2023-01-28T13:13:53Z) - Self-Supervised PPG Representation Learning Shows High Inter-Subject
Variability [3.8036939971290007]
We propose a Self-Supervised Learning (SSL) method with a pretext task of signal reconstruction to learn an informative generalized PPG representation.
Results show that in a very limited label data setting (10 samples per class or less), using SSL is beneficial.
SSL may pave the way for the broader use of machine learning models on PPG data in label-scarce regimes.
arXiv Detail & Related papers (2022-12-07T19:02:45Z) - DUET: Cross-modal Semantic Grounding for Contrastive Zero-shot Learning [37.48292304239107]
We present a transformer-based end-to-end ZSL method named DUET.
We develop a cross-modal semantic grounding network to investigate the model's capability of disentangling semantic attributes from the images.
We find that DUET can often achieve state-of-the-art performance; its components are effective and its predictions are interpretable.
arXiv Detail & Related papers (2022-07-04T11:12:12Z) - A Strong Baseline for Semi-Supervised Incremental Few-Shot Learning [54.617688468341704]
Few-shot learning aims to learn models that generalize to novel classes with limited training samples.
We propose a novel paradigm containing two parts: (1) a well-designed meta-training algorithm for mitigating ambiguity between base and novel classes caused by unreliable pseudo labels and (2) a model adaptation mechanism to learn discriminative features for novel classes while preserving base knowledge using few labeled and all the unlabeled data.
arXiv Detail & Related papers (2021-10-21T13:25:52Z) - Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for
Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z) - Self-supervised Regularization for Text Classification [14.824073299035675]
In many real-world problems, the number of texts for training classification models is limited, which renders these models prone to overfitting.
We propose SSL-Reg, a data-dependent regularization approach based on self-supervised learning (SSL).
SSL is an unsupervised learning approach which defines auxiliary tasks on input data without using any human-provided labels.
arXiv Detail & Related papers (2021-03-09T05:35:52Z) - Self-Supervised Learning of Graph Neural Networks: A Unified Review [50.71341657322391]
Self-supervised learning is emerging as a new paradigm for making use of large amounts of unlabeled samples.
We provide a unified review of different ways of training graph neural networks (GNNs) using SSL.
Our treatment of SSL methods for GNNs sheds light on the similarities and differences of various methods, setting the stage for developing new methods and algorithms.
arXiv Detail & Related papers (2021-02-22T03:43:45Z) - SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning [58.26384597768118]
SemiNLL is a versatile framework that combines sample selection (SS) strategies and SSL models in an end-to-end manner.
Our framework can absorb various SS strategies and SSL backbones, utilizing their power to achieve promising performance.
arXiv Detail & Related papers (2020-12-02T01:49:47Z)
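
As flagged in the SSLR entry above, pseudo-labeling is the simplest self-training recipe: train on the labeled set, label the unlabeled samples the model is confident about, and retrain. The sketch below assumes a generic scikit-learn classifier and a fixed confidence threshold; neither is specified by SSLR.

```python
# Generic self-training loop (illustrative; not the SSLR implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unl, threshold=0.95, rounds=5):
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unl.copy()
    clf = None
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        keep = proba.max(axis=1) >= threshold  # adopt only confident pseudo-labels
        if not keep.any():
            break
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, proba[keep].argmax(axis=1)])
        pool = pool[~keep]                     # shrink the unlabeled pool
    return clf
```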