An improved online learning algorithm for general fuzzy min-max neural
network
- URL: http://arxiv.org/abs/2001.02391v1
- Date: Wed, 8 Jan 2020 06:24:40 GMT
- Title: An improved online learning algorithm for general fuzzy min-max neural
network
- Authors: Thanh Tung Khuat, Fang Chen, Bogdan Gabrys
- Abstract summary: This paper proposes an improved version of the current online learning algorithm for a general fuzzy min-max neural network (GFMM).
The proposed approach does not use the contraction process for overlapping hyperboxes, which is more likely to increase the error rate.
To reduce the sensitivity of this new online learning algorithm to the order in which training samples are presented, a simple ensemble method is also proposed.
- Score: 11.631815277762257
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper proposes an improved version of the current online learning
algorithm for a general fuzzy min-max neural network (GFMM) to tackle existing
issues concerning expansion and contraction steps as well as the way of dealing
with unseen data located on decision boundaries. These drawbacks lower its
classification performance, so an improved algorithm is proposed in this study
to address the above limitations. The proposed approach does not use the
contraction process for overlapping hyperboxes, which is more likely to
increase the error rate as shown in the literature. The empirical results
indicated the improvement in the classification accuracy and stability of the
proposed method compared to the original version and other fuzzy min-max
classifiers. To reduce the sensitivity of this new online learning algorithm to the
order in which training samples are presented, a simple ensemble method is also
proposed.
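
For illustration, the sketch below (Python with NumPy) shows the general shape of such an online fuzzy min-max learner: each sample either expands a same-class hyperbox up to a maximum size theta or creates a new point-sized hyperbox, no contraction is applied when hyperboxes of different classes overlap, and a simple ensemble over shuffled presentation orders reduces order sensitivity by majority vote. The class name, parameter defaults, and the exact membership and expansion rules are illustrative assumptions and do not reproduce the paper's precise formulation.

import numpy as np

class FuzzyMinMaxSketch:
    """Illustrative online fuzzy min-max classifier (not the paper's exact algorithm).

    Each hyperbox j is a pair of min/max points (V[j], W[j]) in [0, 1]^d with a
    class label. A new sample either expands a same-class hyperbox (subject to
    the maximum size theta) or creates a new point-sized hyperbox. No
    contraction is performed when hyperboxes of different classes overlap.
    """

    def __init__(self, theta=0.3, gamma=4.0):
        self.theta = theta            # maximum allowed hyperbox size per dimension
        self.gamma = gamma            # slope of the ramp membership function
        self.V, self.W, self.labels = [], [], []

    def _membership(self, x):
        # Ramp-based membership of x in every stored hyperbox (vectorised).
        V, W = np.array(self.V), np.array(self.W)
        below = np.clip(1.0 - self.gamma * np.maximum(0.0, V - x), 0.0, 1.0)
        above = np.clip(1.0 - self.gamma * np.maximum(0.0, x - W), 0.0, 1.0)
        return np.minimum(below, above).min(axis=1)

    def partial_fit(self, x, label):
        x = np.asarray(x, dtype=float)
        if self.V:
            # Try to expand the best-matching hyperbox of the same class.
            for j in np.argsort(self._membership(x))[::-1]:
                if self.labels[j] != label:
                    continue
                v_new = np.minimum(self.V[j], x)
                w_new = np.maximum(self.W[j], x)
                if np.all(w_new - v_new <= self.theta):
                    self.V[j], self.W[j] = v_new, w_new
                    return self
        # Otherwise store the sample as a new point-sized hyperbox.
        self.V.append(x.copy())
        self.W.append(x.copy())
        self.labels.append(label)
        return self

    def predict(self, x):
        # Winner-takes-all prediction by maximum membership.
        m = self._membership(np.asarray(x, dtype=float))
        return self.labels[int(np.argmax(m))]


def fit_ensemble(X, y, n_members=5, seed=0, **kwargs):
    """Train several learners on shuffled presentation orders (illustrative)."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        clf = FuzzyMinMaxSketch(**kwargs)
        for i in rng.permutation(len(X)):
            clf.partial_fit(X[i], y[i])
        members.append(clf)
    return members


def predict_ensemble(members, x):
    """Majority vote over the ensemble members."""
    votes = [clf.predict(x) for clf in members]
    return max(set(votes), key=votes.count)

In this sketch, overlapping regions are left untouched during training and only influence prediction through the membership comparison, while the ensemble reduces order sensitivity by voting over models trained on different shuffles of the same data.
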
Related papers
- Learning by the F-adjoint [0.0]
In this work, we develop and investigate this theoretical framework to improve supervised learning algorithms for feed-forward neural networks.
Our main result is that, by introducing a neural dynamical model combined with the gradient descent algorithm, we derive an equilibrium F-adjoint process.
Experimental results on the MNIST and Fashion-MNIST datasets demonstrate that the proposed approach provides significant improvements over the standard back-propagation training procedure.
arXiv Detail & Related papers (2024-07-08T13:49:25Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Online Learning Under A Separable Stochastic Approximation Framework [20.26530917721778]
We propose an online learning algorithm for a class of machine learning models under a separable stochastic approximation framework.
We show that the proposed algorithm produces more robust and better test performance when compared to other popular learning algorithms.
arXiv Detail & Related papers (2023-05-12T13:53:03Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on back-propagation (BP) optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block and does not require the generation of additional negative samples.
In our framework, each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Large-Scale Sequential Learning for Recommender and Engineering Systems [91.3755431537592]
In this thesis, we focus on the design of automatic algorithms that provide personalized ranking by adapting to the current conditions.
For the former, we propose a novel algorithm called SAROS that takes into account both kinds of feedback for learning over the sequence of interactions.
The proposed idea of taking neighbouring lines into account shows statistically significant results compared with the initial approach to fault detection in power grids.
arXiv Detail & Related papers (2022-05-13T21:09:41Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
In theory, our method requires a much smaller number of communication rounds.
Our experiments on several datasets demonstrate the effectiveness of our method and also confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Accelerated learning algorithms of general fuzzy min-max neural network using a novel hyperbox selection rule [9.061408029414455]
The paper proposes a method to accelerate the training process of a general fuzzy min-max neural network.
The proposed approach is based on mathematical formulas that form a branch-and-bound solution.
The experimental results indicated a significant decrease in the training time of the proposed approach for both online and agglomerative learning algorithms.
arXiv Detail & Related papers (2020-03-25T11:26:18Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based optimization combined with nonconvexity renders learning sensitive to initialization.
We propose fusing neighboring layers of deeper networks that are trained with random initializations.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)