Accelerated learning algorithms of general fuzzy min-max neural network
using a novel hyperbox selection rule
- URL: http://arxiv.org/abs/2003.11333v2
- Date: Tue, 19 May 2020 10:22:19 GMT
- Title: Accelerated learning algorithms of general fuzzy min-max neural network
using a novel hyperbox selection rule
- Authors: Thanh Tung Khuat and Bogdan Gabrys
- Abstract summary: The paper proposes a method to accelerate the training process of a general fuzzy min-max neural network.
The proposed approach is based on mathematical formulas that form a branch-and-bound solution.
The experimental results indicate a significant decrease in training time with the proposed approach for both online and agglomerative learning algorithms.
- Score: 9.061408029414455
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper proposes a method to accelerate the training process of a
general fuzzy min-max neural network. The purpose is to reduce the number of
unsuitable hyperboxes selected as potential candidates for the expansion step
of existing hyperboxes to cover a new input pattern in the online learning
algorithms, or as candidates for the hyperbox aggregation process in the
agglomerative learning algorithms. Our proposed approach is based on
mathematical formulas that form a branch-and-bound solution, aiming to remove
the hyperboxes which are certain not to satisfy the expansion or aggregation
conditions and, in turn, to decrease the training time of the learning
algorithms. The efficiency of the proposed method is assessed on a number of
widely used data sets. The experimental results indicate a significant decrease
in training time with the proposed approach for both online and agglomerative
learning algorithms. Notably, the training time of the online learning
algorithms is reduced by a factor of 1.2 to 12 when using the proposed method,
while the agglomerative learning algorithms are accelerated by a factor of 7 to
37 on average.
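To make the idea concrete, below is a minimal Python/NumPy sketch of how a hyperbox selection rule can prune expansion candidates in an online learning step. It assumes point-valued inputs, min/max hyperbox matrices `V` and `W`, a single maximum-size parameter `theta`, and a simplified membership function; the filter is a stand-in for the paper's bound-based formulas and omits class labels, overlap testing, and contraction, so it illustrates the mechanism rather than the authors' implementation.

```python
import numpy as np

def membership(x, V, W, gamma=1.0):
    """Fuzzy membership of point x in every hyperbox (rows of V = min points,
    rows of W = max points), in the spirit of the GFMM membership function."""
    # penalise how far x falls below each box minimum / above each box maximum
    below = np.clip(1.0 - gamma * np.maximum(0.0, V - x), 0.0, 1.0)
    above = np.clip(1.0 - gamma * np.maximum(0.0, x - W), 0.0, 1.0)
    return np.minimum(below, above).min(axis=1)

def select_expansion_candidates(x, V, W, theta):
    """Simplified hyperbox selection rule: keep only hyperboxes that could
    still satisfy max(W_ji, x_i) - min(V_ji, x_i) <= theta in every dimension.
    Hyperboxes that certainly cannot absorb x within the size limit are
    discarded before any membership values are computed."""
    size_after = np.maximum(W, x) - np.minimum(V, x)
    return np.flatnonzero((size_after <= theta).all(axis=1))

def online_step(x, V, W, theta=0.3, gamma=1.0):
    """One online-learning step: expand the best surviving candidate,
    otherwise create a new point-sized hyperbox."""
    if len(V) > 0:
        cand = select_expansion_candidates(x, V, W, theta)
        if cand.size > 0:
            best = cand[np.argmax(membership(x, V[cand], W[cand], gamma))]
            V[best] = np.minimum(V[best], x)
            W[best] = np.maximum(W[best], x)
            return V, W
    V = np.vstack([V, x[None, :]]) if len(V) else x[None, :].copy()
    W = np.vstack([W, x[None, :]]) if len(W) else x[None, :].copy()
    return V, W

# Example usage on a small stream of 2-D points in [0, 1]^2
rng = np.random.default_rng(0)
V, W = np.empty((0, 2)), np.empty((0, 2))
for point in rng.random((100, 2)):
    V, W = online_step(point, V, W, theta=0.3)
```

The point of such a rule is that membership values are evaluated only for hyperboxes that can actually absorb the new pattern within the size limit, which is where the reported training-time savings come from.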
Related papers
- Learning Rate Optimization for Deep Neural Networks Using Lipschitz Bandits [9.361762652324968]
A properly tuned learning rate leads to faster training and higher test accuracy.
We propose a Lipschitz bandit-driven approach for tuning the learning rate of neural networks.
arXiv Detail & Related papers (2024-09-15T16:21:55Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP) optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z) - Gradient-only line searches to automatically determine learning rates
for a variety of stochastic training algorithms [0.0]
We study the application of the Gradient-Only Line Search that is Inexact (GOLS-I) to determine the learning rate schedule for a selection of popular neural network training algorithms.
GOLS-I's learning rate schedules are competitive with manually tuned learning rates, over seven optimization algorithms, three types of neural network architecture, 23 datasets and two loss functions.
arXiv Detail & Related papers (2020-06-29T08:59:31Z) - Enhancing accuracy of deep learning algorithms by training with
low-discrepancy sequences [15.2292571922932]
We propose a deep supervised learning algorithm based on low-discrepancy sequences as the training set.
We demonstrate that the proposed algorithm significantly outperforms standard deep learning algorithms for problems in moderately high dimensions.
arXiv Detail & Related papers (2020-05-26T08:14:00Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study a distributed stochastic algorithm for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds in theory.
Our experiments on several datasets demonstrate the effectiveness of the method and confirm the theoretical results.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Subset Sampling For Progressive Neural Network Learning [106.12874293597754]
Progressive Neural Network Learning is a class of algorithms that incrementally construct the network's topology and optimize its parameters based on the training data.
We propose to speed up this process by exploiting subsets of training data at each incremental training step.
Experimental results in object, scene and face recognition problems demonstrate that the proposed approach speeds up the optimization procedure considerably.
arXiv Detail & Related papers (2020-02-17T18:57:33Z) - An improved online learning algorithm for general fuzzy min-max neural
network [11.631815277762257]
This paper proposes an improved version of the current online learning algorithm for the general fuzzy min-max neural network (GFMM).
The proposed approach does not use the contraction process for overlapping hyperboxes, a step that tends to increase the error rate.
To reduce the new online learning algorithm's sensitivity to the presentation order of the training samples, a simple ensemble method is also proposed (see the sketch after this list).
arXiv Detail & Related papers (2020-01-08T06:24:40Z)