Evolving Multi-Label Fuzzy Classifier
- URL: http://arxiv.org/abs/2203.15318v1
- Date: Tue, 29 Mar 2022 08:01:03 GMT
- Title: Evolving Multi-Label Fuzzy Classifier
- Authors: Edwin Lughofer
- Abstract summary: Multi-label classification has attracted much attention in the machine learning community to address the problem of assigning single samples to more than one class at the same time.
We propose an evolving multi-label fuzzy classifier (EFC-ML) which is able to self-adapt and self-evolve its structure with new incoming multi-label samples in an incremental, single-pass manner.
- Score: 5.53329677986653
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-label classification has attracted much attention in the machine
learning community to address the problem of assigning single samples to more
than one class at the same time. We propose an evolving multi-label fuzzy
classifier (EFC-ML) which is able to self-adapt and self-evolve its structure
with new incoming multi-label samples in an incremental, single-pass manner. It
is based on a multi-output Takagi-Sugeno type architecture, where for each
class a separate consequent hyper-plane is defined. The learning procedure
embeds a locally weighted incremental correlation-based algorithm combined with
(conventional) recursive fuzzily weighted least squares and Lasso-based
regularization. The correlation-based part ensures that the interrelations
between class labels, a well-known property exploited in multi-label
classification to improve performance, are properly preserved; the Lasso-based
regularization mitigates curse-of-dimensionality effects when the number of
inputs is high. Antecedent learning is achieved by
product-space clustering and conducted for all class labels together, which
yields a single rule base, allowing a compact knowledge view. Furthermore, our
approach comes with an online active learning (AL) strategy that updates the
classifier on only a selected subset of samples, which in turn makes the
approach applicable to scarcely labelled data streams in applications where
the annotation effort is typically expensive. Our approach was evaluated on several
data sets from the MULAN repository and showed significantly improved
classification accuracy compared to (evolving) one-versus-rest or classifier
chaining concepts. A significant result was that, due to the online AL method,
a 90% reduction in the number of samples used for classifier updates had
little effect on the accumulated accuracy trend lines compared to a full
update in most of the data sets.
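To make the architecture above concrete, here is a minimal sketch, in Python with NumPy, of a multi-output Takagi-Sugeno classifier: normalized Gaussian rule activations in the product space, one consequent hyper-plane per class label and rule, and a simplified recursive weighted least-squares consequent update. All class names, shapes, and parameter values are illustrative assumptions; the locally weighted correlation-based learning, Lasso regularization, and rule evolution of EFC-ML are omitted.

```python
import numpy as np

class TSMultiLabel:
    """Illustrative multi-output Takagi-Sugeno multi-label classifier."""

    def __init__(self, centers, widths, consequents):
        # centers, widths: (R, d) Gaussian rule antecedents in product space
        # consequents: (R, L, d+1) one affine hyper-plane per rule and label
        self.centers = np.asarray(centers, dtype=float)
        self.widths = np.asarray(widths, dtype=float)
        self.consequents = np.asarray(consequents, dtype=float)

    def memberships(self, x):
        # product of per-dimension Gaussian memberships -> rule activations
        mu = np.exp(-0.5 * ((x - self.centers) / self.widths) ** 2).prod(axis=1)
        return mu / mu.sum()  # normalized firing degrees

    def predict_scores(self, x):
        psi = self.memberships(x)          # (R,) normalized activations
        x1 = np.append(x, 1.0)             # affine input [x, 1]
        y_rules = self.consequents @ x1    # (R, L) per-rule outputs
        return psi @ y_rules               # (L,) fuzzy-blended class scores

    def predict(self, x, thr=0.5):
        # multi-label decision: threshold each class score independently
        return (self.predict_scores(x) >= thr).astype(int)

    def rfwls_update(self, x, y, P):
        # One simplified recursive weighted least-squares step per rule,
        # with activation psi[r] as the local weight (the paper additionally
        # uses correlation-based weighting and Lasso regularization).
        psi = self.memberships(x)
        x1 = np.append(x, 1.0)
        for r in range(len(self.centers)):
            Pr, w = P[r], psi[r]
            Px = Pr @ x1
            k = (w * Px) / (1.0 + w * (x1 @ Px))           # gain vector
            err = y - self.consequents[r] @ x1             # (L,) residual
            self.consequents[r] += np.outer(err, k)
            P[r] = Pr - np.outer(k, Px)                    # covariance update

# Two illustrative rules, two class labels
m = TSMultiLabel(
    centers=[[0.0, 0.0], [1.0, 1.0]],
    widths=[[0.5, 0.5], [0.5, 0.5]],
    consequents=[[[0.0, 0.0, 1.0], [0.0, 0.0, 0.0]],
                 [[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]]],
)
print(m.predict(np.array([0.0, 0.0])))  # -> [1 0]
```

Because the per-rule updates only involve the local activation and a rank-one covariance correction, each sample is processed once and discarded, matching the single-pass spirit of the approach.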
Related papers
- Label Cluster Chains for Multi-Label Classification [2.072831155509228]
Multi-label classification is a type of supervised machine learning that can simultaneously assign multiple labels to an instance.
We propose a method to chain disjoint correlated label clusters obtained by applying a partition method in the label space.
Our proposal shows that learning and chaining disjoint correlated label clusters can better explore and learn label correlations.
arXiv Detail & Related papers (2024-11-01T11:16:37Z)
- Dynamic Correlation Learning and Regularization for Multi-Label Confidence Calibration [60.95748658638956]
This paper introduces the Multi-Label Confidence task, aiming to provide well-calibrated confidence scores in multi-label scenarios.
Existing single-label calibration methods fail to account for category correlations, which are crucial for addressing semantic confusion.
We propose the Dynamic Correlation Learning and Regularization algorithm, which leverages multi-grained semantic correlations to better model semantic confusion.
arXiv Detail & Related papers (2024-07-09T13:26:21Z)
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
arXiv Detail & Related papers (2023-05-04T12:52:18Z)
- Class-Incremental Lifelong Learning in Multi-Label Classification [3.711485819097916]
This paper studies Lifelong Multi-Label (LML) classification, which builds an online class-incremental classifier in a sequential multi-label classification data stream.
To solve the problem, the study proposes an Augmented Graph Convolutional Network (AGCN) with a built Augmented Correlation Matrix (ACM) across sequential partial-label tasks.
arXiv Detail & Related papers (2022-07-16T05:14:07Z)
- Self-Training: A Survey [5.772546394254112]
Semi-supervised algorithms aim to learn prediction functions from a small set of labeled observations and a large set of unlabeled observations.
Among existing techniques, self-training methods have attracted particular attention in recent years.
We present self-training methods for binary and multi-class classification, as well as their variants and two related approaches.
arXiv Detail & Related papers (2022-02-24T11:40:44Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even when the training set is class-imbalanced.
We test our method on the CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements over state-of-the-art methods.
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- Gated recurrent units and temporal convolutional network for multilabel classification [122.84638446560663]
This work proposes a new ensemble method for multilabel classification.
The core of the proposed approach combines a set of gated recurrent units and temporal convolutional neural networks trained with variants of the Adam optimizer.
arXiv Detail & Related papers (2021-10-09T00:00:16Z)
- Improved Multi-label Classification with Frequent Label-set Mining and Association [14.150518141172434]
A novel frequent label-set mining approach is proposed to extract correlated classes from the label sets of the data.
A concept of certain and uncertain scores is defined, where the proposed method aims to improve the uncertain scores with the help of the certain scores and their corresponding CP-CA rules.
arXiv Detail & Related papers (2021-09-22T15:36:46Z)
- PLM: Partial Label Masking for Imbalanced Multi-label Classification [59.68444804243782]
Neural networks trained on real-world datasets with long-tailed label distributions are biased towards frequent classes and perform poorly on infrequent classes.
We propose a method, Partial Label Masking (PLM), which utilizes the ratio between positive and negative samples of each class during training.
Our method achieves strong performance when compared to existing methods on both multi-label (MultiMNIST and MSCOCO) and single-label (imbalanced CIFAR-10 and CIFAR-100) image classification datasets.
arXiv Detail & Related papers (2021-05-22T18:07:56Z)
- Global Multiclass Classification and Dataset Construction via Heterogeneous Local Experts [37.27708297562079]
We show how to minimize the number of labelers while ensuring the reliability of the resulting dataset.
Experiments with the MNIST and CIFAR-10 datasets demonstrate the favorable accuracy of our aggregation scheme.
arXiv Detail & Related papers (2020-04-21T23:35:02Z)
- Federated Learning with Only Positive Labels [71.63836379169315]
We propose a generic framework for training with only positive labels, namely Federated Averaging with Spreadout (FedAwS).
We show, both theoretically and empirically, that FedAwS can almost match the performance of conventional learning where users have access to negative labels.
arXiv Detail & Related papers (2020-04-21T23:35:02Z)
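Several of the entries above, as well as the baselines compared against in the abstract, rely on the classifier chaining concept: earlier labels are fed as extra inputs to later per-label classifiers, so label correlations propagate along the chain. A minimal batch sketch in Python with NumPy, using closed-form ridge-regularized linear links as an assumed base learner (not any of the papers' actual implementations):

```python
import numpy as np

class ClassifierChain:
    """Illustrative classifier chain: one linear link per label, with
    earlier labels appended to the feature vector of later links."""

    def __init__(self, n_labels, ridge=1e-3):
        self.n_labels = n_labels
        self.ridge = ridge
        self.weights = []  # one weight vector per link

    def _fit_link(self, X, y):
        # closed-form ridge regression on {0, 1} targets
        Xb = np.hstack([X, np.ones((len(X), 1))])
        A = Xb.T @ Xb + self.ridge * np.eye(Xb.shape[1])
        return np.linalg.solve(A, Xb.T @ y)

    def fit(self, X, Y):
        Z = np.asarray(X, dtype=float)
        for j in range(self.n_labels):
            self.weights.append(self._fit_link(Z, Y[:, j]))
            # feed the true label j to subsequent links during training
            Z = np.hstack([Z, Y[:, j:j + 1].astype(float)])
        return self

    def predict(self, X):
        Z = np.asarray(X, dtype=float)
        preds = []
        for w in self.weights:
            Xb = np.hstack([Z, np.ones((len(Z), 1))])
            p = (Xb @ w >= 0.5).astype(float)
            preds.append(p)
            Z = np.hstack([Z, p[:, None]])  # propagate the prediction
        return np.stack(preds, axis=1).astype(int)

# Toy data with perfectly anti-correlated labels: label 1 = NOT label 0
X = np.array([[0.0], [0.2], [0.8], [1.0]])
Y = np.array([[0, 1], [0, 1], [1, 0], [1, 0]])
cc = ClassifierChain(n_labels=2).fit(X, Y)
```

Because each link sees the chain's earlier labels as features, the second link here can learn the anti-correlation directly instead of re-deriving it from the raw input, which is the motivation for chaining (and, in the first related paper above, for chaining whole label clusters).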
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.