Identification of Nonlinear Dynamic Systems Using Type-2 Fuzzy Neural
Networks -- A Novel Learning Algorithm and a Comparative Study
- URL: http://arxiv.org/abs/2104.01713v1
- Date: Sun, 4 Apr 2021 23:44:59 GMT
- Title: Identification of Nonlinear Dynamic Systems Using Type-2 Fuzzy Neural
Networks -- A Novel Learning Algorithm and a Comparative Study
- Authors: Erkan Kayacan, Erdal Kayacan and Mojtaba Ahmadieh Khanesar
- Abstract summary: A sliding mode theory-based learning algorithm has been proposed to tune both the premise and consequent parts of type-2 fuzzy neural networks.
The stability of the proposed learning algorithm has been proved by using an appropriate Lyapunov function.
Several comparisons have been carried out, showing that the proposed algorithm converges faster than existing methods.
- Score: 12.77304082363491
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In order to achieve faster and more robust convergence, especially
under noisy working environments, a sliding mode theory-based learning algorithm
has been proposed to tune both the premise and consequent parts of type-2 fuzzy
neural networks in this paper. Unlike recent studies, in which sliding mode
control theory-based rules are proposed only for the consequent part of the
network, the developed algorithm applies fully sliding mode parameter update
rules to both the premise and consequent parts of the type-2 fuzzy neural
networks. In addition, the parameter responsible for sharing the contributions
of the lower and upper parts of the type-2 fuzzy membership functions is also
tuned. Moreover, the learning rate of the network is updated during the online
training. The stability of the proposed learning algorithm has been proved by
using an appropriate Lyapunov function. Several comparisons have been carried
out, showing that the proposed algorithm converges faster than existing methods
such as gradient-based and swarm intelligence-based methods. Moreover, the
proposed learning algorithm has a closed form, and it is easier to implement
than the other existing methods.
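
As a rough illustration of the mechanism described in the abstract, the following minimal Python sketch updates both premise and consequent parameters of a toy interval type-2 fuzzy network with sign-based (sliding-mode-style) rules, tunes the lower/upper sharing parameter, and adapts the learning rate online. The class name, membership-function shapes, and the simplified update laws are illustrative assumptions, not the authors' exact algorithm.

```python
# Toy interval type-2 fuzzy neural network trained with sliding-mode-style updates.
# Illustrative only: the update laws below are simplified assumptions.
import numpy as np

rng = np.random.default_rng(0)

class SlidingModeT2FNN:
    def __init__(self, n_inputs, n_rules, alpha=0.05):
        self.c = rng.uniform(-1.0, 1.0, (n_rules, n_inputs))  # premise centers
        self.sigma_lo = np.full((n_rules, n_inputs), 0.6)     # lower-MF widths
        self.sigma_up = np.full((n_rules, n_inputs), 1.0)     # upper-MF widths
        self.f = np.zeros(n_rules)                            # consequent weights
        self.q = 0.5                                          # lower/upper sharing parameter
        self.alpha = alpha                                    # learning rate (adapted online)

    def _firing(self, x):
        d2 = (x - self.c) ** 2
        mu_lo = np.exp(-d2 / (2.0 * self.sigma_lo ** 2)).prod(axis=1)
        mu_up = np.exp(-d2 / (2.0 * self.sigma_up ** 2)).prod(axis=1)
        return mu_lo / (mu_lo.sum() + 1e-12), mu_up / (mu_up.sum() + 1e-12)

    def predict(self, x):
        w_lo, w_up = self._firing(x)
        return self.q * w_lo @ self.f + (1.0 - self.q) * w_up @ self.f

    def update(self, x, y_target):
        e = self.predict(x) - y_target                # identification error
        w_lo, w_up = self._firing(x)
        s = np.sign(e)                                # sliding-mode switching term
        # Consequent weights, sharing parameter and premise centers all move against sign(e).
        self.f -= self.alpha * s * (self.q * w_lo + (1.0 - self.q) * w_up)
        self.q = float(np.clip(self.q - self.alpha * s * (w_lo @ self.f - w_up @ self.f), 0.0, 1.0))
        self.c -= self.alpha * s * (x - self.c)
        self.alpha = max(1e-4, self.alpha + 0.01 * (abs(e) - self.alpha))  # crude online adaptation
        return e

# Toy identification of y = sin(u) from streaming samples.
net = SlidingModeT2FNN(n_inputs=1, n_rules=7)
for _ in range(2000):
    u = rng.uniform(-np.pi, np.pi, 1)
    net.update(u, np.sin(u[0]))
```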
Related papers
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP) for optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block and does not require the generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
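
A minimal sketch of the block-local training idea described above, assuming simple fully connected blocks: each block has its own classification head, is trained independently, and features are detached before being passed on, so no gradients flow between blocks. The layer sizes, optimizer, and data are placeholders, not the CaFo authors' configuration.

```python
# Blockwise independent training with a per-block label head (illustrative sketch).
import torch
import torch.nn as nn

x = torch.randn(256, 32)                 # toy inputs
y = torch.randint(0, 10, (256,))         # toy labels

blocks, heads = [], []
dims = [32, 64, 64, 64]
for d_in, d_out in zip(dims[:-1], dims[1:]):
    blocks.append(nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU()))
    heads.append(nn.Linear(d_out, 10))   # per-block label distribution

h = x
for block, head in zip(blocks, heads):
    opt = torch.optim.Adam(list(block.parameters()) + list(head.parameters()), lr=1e-3)
    for _ in range(200):                 # train this block only
        opt.zero_grad()
        loss = nn.functional.cross_entropy(head(block(h)), y)
        loss.backward()
        opt.step()
    h = block(h).detach()                # features passed on, gradients cut here
```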
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- An Adaptive and Stability-Promoting Layerwise Training Approach for Sparse Deep Neural Network Architecture [0.0]
This work presents a two-stage adaptive framework for developing deep neural network (DNN) architectures that generalize well for a given training data set.
In the first stage, a layerwise training approach is adopted where a new layer is added each time and trained independently by freezing parameters in the previous layers.
We introduce an epsilon-delta stability-promoting concept as a desirable property for a learning algorithm and show that employing manifold regularization yields an epsilon-delta stability-promoting algorithm.
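
A brief sketch of the layerwise strategy summarized above, under the assumption of a simple regression network: one new layer is appended per stage and trained (together with a fresh output head) while all previously added layers stay frozen. The stage count, sizes, and optimizer settings are illustrative, not the paper's.

```python
# Layerwise growing/freezing training (illustrative sketch).
import torch
import torch.nn as nn

x, y = torch.randn(512, 16), torch.randn(512, 1)    # toy regression data
trunk = nn.Sequential()                              # grows by one layer per stage

for stage in range(3):
    in_dim = 16 if stage == 0 else 32
    new_layer = nn.Sequential(nn.Linear(in_dim, 32), nn.Tanh())
    head = nn.Linear(32, 1)                          # fresh output head each stage
    for p in trunk.parameters():                     # freeze everything trained so far
        p.requires_grad_(False)
    trunk.add_module(f"layer{stage}", new_layer)
    opt = torch.optim.Adam(list(new_layer.parameters()) + list(head.parameters()), lr=1e-3)
    for _ in range(300):                             # train only the new layer and head
        opt.zero_grad()
        loss = nn.functional.mse_loss(head(trunk(x)), y)
        loss.backward()
        opt.step()
```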
arXiv Detail & Related papers (2022-11-13T09:51:16Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics based on minimizing the population loss that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Lifted Bregman Training of Neural Networks [28.03724379169264]
We introduce a novel mathematical formulation for the training of feed-forward neural networks with (potentially non-smooth) proximal maps as activation functions.
This formulation is based on Bregman distances, and a key advantage is that its partial derivatives with respect to the network's parameters do not require the computation of derivatives of the network's activation functions.
We present several numerical results demonstrating that these training approaches can be equally well or even better suited for the training of neural network-based classifiers and (denoising) autoencoders with sparse coding.
arXiv Detail & Related papers (2022-08-18T11:12:52Z)
- On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network [55.56019538079826]
Bilevel optimization has been applied to a wide variety of machine learning models.
Most existing algorithms are restricted to the single-machine setting, so they are incapable of handling distributed data.
We develop novel decentralized bilevel optimization algorithms based on a gradient tracking communication mechanism and two different gradients.
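
To make the gradient-tracking communication mechanism concrete, here is a hedged sketch on a single-level decentralized least-squares problem (the paper applies the mechanism to bilevel problems; only the tracking update is illustrated). The mixing matrix, step size, and data are assumptions.

```python
# Decentralized optimization with gradient tracking (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim, lr = 4, 3, 0.05

# Each node i holds local data for f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.normal(size=(10, dim)) for _ in range(n_nodes)]
b = [rng.normal(size=10) for _ in range(n_nodes)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

W = np.full((n_nodes, n_nodes), 1.0 / n_nodes)          # doubly-stochastic mixing matrix

x = np.zeros((n_nodes, dim))                            # local iterates
y = np.stack([grad(i, x[i]) for i in range(n_nodes)])   # gradient trackers
g_prev = y.copy()

for _ in range(200):
    x = W @ x - lr * y                                   # consensus step + descent on tracked gradient
    g_new = np.stack([grad(i, x[i]) for i in range(n_nodes)])
    y = W @ y + g_new - g_prev                           # gradient-tracking update
    g_prev = g_new
```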
arXiv Detail & Related papers (2022-06-30T05:29:52Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that partitions the verification problem in an iterative manner and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
- Accelerated learning algorithms of general fuzzy min-max neural network using a novel hyperbox selection rule [9.061408029414455]
The paper proposes a method to accelerate the training process of a general fuzzy min-max neural network.
The proposed approach is based on mathematical formulas that form a branch-and-bound solution.
The experimental results indicated a significant decrease in the training time of the proposed approach for both online and agglomerative learning algorithms.
arXiv Detail & Related papers (2020-03-25T11:26:18Z)
- An improved online learning algorithm for general fuzzy min-max neural network [11.631815277762257]
This paper proposes an improved version of the current online learning algorithm for a general fuzzy min-max neural network (GFMM).
The proposed approach does not use the contraction process for overlapping hyperboxes, a process that is likely to increase the error rate.
To reduce the sensitivity of this new online learning algorithm to the presentation order of the training samples, a simple ensemble method is also proposed.
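
As a rough illustration of hyperbox-based online learning without a contraction step, in the spirit of this entry, the sketch below expands the best-matching same-class hyperbox when a size bound permits and otherwise creates a new box. The membership function and the bound `theta` are simplified assumptions, not the GFMM algorithm's exact rules.

```python
# Online hyperbox learning without a contraction step (illustrative sketch).
import numpy as np

theta = 0.3                      # maximum allowed hyperbox size per dimension (assumed)
boxes = []                       # each box: dict(v=min corner, w=max corner, label=class)

def membership(x, box):
    # Simplified min-max membership: 1 inside the box, decaying linearly outside.
    below = np.clip(box["v"] - x, 0.0, None)
    above = np.clip(x - box["w"], 0.0, None)
    return float(np.clip(1.0 - (below + above), 0.0, 1.0).min())

def learn(x, label):
    # Try to expand the best-matching hyperbox of the same class; no contraction afterwards.
    candidates = sorted((b for b in boxes if b["label"] == label),
                        key=lambda b: membership(x, b), reverse=True)
    for box in candidates:
        v_new, w_new = np.minimum(box["v"], x), np.maximum(box["w"], x)
        if np.all(w_new - v_new <= theta):        # expansion stays within the size bound
            box["v"], box["w"] = v_new, w_new
            return
    boxes.append({"v": x.copy(), "w": x.copy(), "label": label})  # otherwise create a new box

rng = np.random.default_rng(2)
for _ in range(100):
    x = rng.uniform(0, 1, 2)
    learn(x, int(x.sum() > 1.0))
```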
arXiv Detail & Related papers (2020-01-08T06:24:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.