An Online Learning Algorithm for a Neuro-Fuzzy Classifier with
Mixed-Attribute Data
- URL: http://arxiv.org/abs/2009.14670v1
- Date: Wed, 30 Sep 2020 13:45:36 GMT
- Title: An Online Learning Algorithm for a Neuro-Fuzzy Classifier with
Mixed-Attribute Data
- Authors: Thanh Tung Khuat and Bogdan Gabrys
- Abstract summary: The general fuzzy min-max neural network (GFMMNN) is an efficient neuro-fuzzy system for data classification.
This paper proposes an extended online learning algorithm for the GFMMNN.
The proposed method can handle datasets with both continuous and categorical features.
- Score: 9.061408029414455
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The general fuzzy min-max neural network (GFMMNN) is an efficient
neuro-fuzzy system for data classification. However, one downside of its
original learning algorithms is their inability to handle and learn from
mixed-attribute data. While categorical feature encoding methods can be used
with the GFMMNN learning algorithms, they exhibit many shortcomings. Other
approaches proposed in the literature are not suitable for online learning
because they require the entire training set to be available during the
learning phase. With the rapid change in the volume and velocity of streaming
data in many application areas, it is increasingly important that the
constructed models can learn from and adapt to continuous data changes in real
time, without full retraining or access to the historical data. This paper
proposes an extended online learning algorithm for the GFMMNN that can handle
datasets with both continuous and categorical features. Extensive experiments
confirmed the superior and stable classification performance of the proposed
approach in comparison to other relevant learning algorithms for the GFMM
model.
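To make the online, mixed-attribute setting concrete, the sketch below combines the standard GFMM hyperbox membership function for continuous features with a simple matching-ratio similarity for categorical features, and expands or creates hyperboxes one sample at a time. This is a minimal illustrative sketch, not the algorithm proposed in the paper: the categorical similarity, the mixing weight alpha, the expansion threshold theta, the helper names (ramp, membership_continuous, membership_categorical, online_update), and the omission of the overlap-test and contraction steps are all assumptions made for illustration.

```python
import numpy as np

def ramp(z, gamma=1.0):
    """Ramp threshold function used in the classical GFMM membership function."""
    return np.clip(z * gamma, 0.0, 1.0)

def membership_continuous(x, V, W, gamma=1.0):
    """Membership of the continuous part of sample x in hyperboxes with
    min points V and max points W (one row per hyperbox)."""
    over = 1.0 - ramp(x - W, gamma)   # penalty for exceeding the max point
    under = 1.0 - ramp(V - x, gamma)  # penalty for falling below the min point
    return np.minimum(over, under).min(axis=1)

def membership_categorical(x_cat, C):
    """Hypothetical similarity for categorical features: the fraction of
    categorical attributes equal to the hyperbox's stored categories."""
    return (C == x_cat).mean(axis=1)

def online_update(x_cont, x_cat, y, boxes, theta=0.3, alpha=0.5, gamma=1.0):
    """One incremental step: expand the best same-class hyperbox if it stays
    within the size limit theta, otherwise create a new point hyperbox.
    The GFMM overlap-test and contraction steps are omitted for brevity."""
    V, W, C, labels = boxes["V"], boxes["W"], boxes["C"], boxes["labels"]
    if len(labels) > 0:
        b = (alpha * membership_continuous(x_cont, V, W, gamma)
             + (1.0 - alpha) * membership_categorical(x_cat, C))
        b[labels != y] = -np.inf                 # consider same-class boxes only
        j = int(np.argmax(b))
        V_new = np.minimum(V[j], x_cont)
        W_new = np.maximum(W[j], x_cont)
        if b[j] > -np.inf and np.all(W_new - V_new <= theta):
            V[j], W[j] = V_new, W_new            # expand; categorical part kept as-is
            return boxes
    # No expandable same-class box: create a new hyperbox at the sample itself.
    first = len(labels) == 0
    boxes["V"] = x_cont[None] if first else np.vstack([V, x_cont])
    boxes["W"] = x_cont[None] if first else np.vstack([W, x_cont])
    boxes["C"] = x_cat[None] if first else np.vstack([C, x_cat])
    boxes["labels"] = np.append(labels, y)
    return boxes
```

Starting from an empty model such as boxes = {"V": None, "W": None, "C": None, "labels": np.empty(0, dtype=int)}, samples can be fed one at a time as the stream arrives; prediction would assign the class label of the hyperbox with the highest combined membership.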
Related papers
- SA-CNN: Application to text categorization issues using simulated
annealing-based convolutional neural network optimization [0.0]
Convolutional neural networks (CNNs) are a representative class of deep learning algorithms.
We introduce SA-CNN neural networks for text classification tasks, built on Text-CNN neural networks.
arXiv Detail & Related papers (2023-03-13T14:27:34Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z) - A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z) - A Novel Neural Network Training Framework with Data Assimilation [2.948167339160823]
A gradient-free training framework based on data assimilation is proposed to avoid the calculation of gradients.
The results show that the proposed training framework performed better than the gradient descent method.
arXiv Detail & Related papers (2020-10-06T11:12:23Z) - An in-depth comparison of methods handling mixed-attribute data for
general fuzzy min-max neural network [9.061408029414455]
We compare and assess three main methods of handling datasets with mixed features.
The experimental results showed that target and James-Stein encodings are appropriate categorical encoding methods for the learning algorithms of GFMM models.
The combination of GFMM neural networks and decision trees is a flexible way to enhance the classification performance of GFMM models on datasets with mixed features.
arXiv Detail & Related papers (2020-09-01T05:12:22Z) - Fast Learning of Graph Neural Networks with Guaranteed Generalizability:
One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z) - A Neural Network Approach for Online Nonlinear Neyman-Pearson
Classification [3.6144103736375857]
We propose a novel Neyman-Pearson (NP) classifier that is, for the first time in the literature, both online and nonlinear.
The proposed classifier operates on a binary labeled data stream in an online manner, and maximizes the detection power subject to a user-specified and controllable false positive rate.
Our algorithm is suitable for large-scale data applications and provides decent false-positive-rate controllability with real-time processing.
arXiv Detail & Related papers (2020-06-14T20:00:25Z)