A New Self-organizing Interval Type-2 Fuzzy Neural Network for Multi-Step Time Series Prediction
- URL: http://arxiv.org/abs/2407.08010v1
- Date: Wed, 10 Jul 2024 19:35:44 GMT
- Title: A New Self-organizing Interval Type-2 Fuzzy Neural Network for Multi-Step Time Series Prediction
- Authors: Fulong Yao, Wanqing Zhao, Matthew Forshaw, Yang Song
- Abstract summary: This paper proposes a new self-organizing interval type-2 fuzzy neural network with multiple outputs (SOIT2FNN-MO) for multi-step time series prediction.
A nine-layer network is developed to improve prediction accuracy, uncertainty handling and model interpretability.
- Score: 9.546043411729206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a new self-organizing interval type-2 fuzzy neural network with multiple outputs (SOIT2FNN-MO) for multi-step time series prediction. Differing from the traditional six-layer IT2FNN, a nine-layer network is developed to improve prediction accuracy, uncertainty handling and model interpretability. First, a new co-antecedent layer and a modified consequent layer are devised to improve the interpretability of the fuzzy model for multi-step predictions. Second, a new transformation layer is designed to address the potential vanishing of rule firing strengths caused by high-dimensional inputs. Third, a new link layer is proposed to build temporal connections between multi-step predictions. Furthermore, a two-stage self-organizing mechanism is developed to automatically generate the fuzzy rules, in which the first stage creates the rule base from scratch and performs the initial optimization, while the second stage fine-tunes all network parameters. Finally, various simulations are carried out on chaotic and microgrid time series prediction problems, demonstrating the superiority of our approach in terms of prediction accuracy, uncertainty handling and model interpretability.
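As a rough, self-contained illustration of the interval type-2 machinery described above, the Python sketch below computes the upper and lower firing strengths of one fuzzy rule using Gaussian membership functions with an uncertain mean, and applies a simple geometric-mean rescaling of the kind a transformation layer might use so that firing strengths do not vanish as the input dimension grows. The membership function, the rescaling and all names are illustrative assumptions; this is not the paper's nine-layer SOIT2FNN-MO formulation.

```python
import numpy as np

def it2_gaussian_membership(x, m1, m2, sigma):
    """Upper/lower membership of a scalar x for an interval type-2 Gaussian
    fuzzy set with uncertain mean in [m1, m2] and fixed width sigma
    (an illustrative choice, not necessarily the paper's)."""
    # Upper membership: 1 inside the uncertain-mean interval, Gaussian tails outside.
    if x < m1:
        upper = np.exp(-0.5 * ((x - m1) / sigma) ** 2)
    elif x > m2:
        upper = np.exp(-0.5 * ((x - m2) / sigma) ** 2)
    else:
        upper = 1.0
    # Lower membership: governed by the farther of the two mean endpoints.
    if x <= 0.5 * (m1 + m2):
        lower = np.exp(-0.5 * ((x - m2) / sigma) ** 2)
    else:
        lower = np.exp(-0.5 * ((x - m1) / sigma) ** 2)
    return lower, upper

def rule_firing_strength(x, m1, m2, sigma):
    """Interval firing strength [f_lower, f_upper] of one rule (product t-norm).
    The raw product of sub-unit memberships shrinks towards zero as the number
    of inputs grows, so a geometric-mean rescaling (one plausible form of a
    'transformation' step) is returned alongside it."""
    lowers, uppers = zip(*(it2_gaussian_membership(xi, a, b, s)
                           for xi, a, b, s in zip(x, m1, m2, sigma)))
    f_lower, f_upper = float(np.prod(lowers)), float(np.prod(uppers))
    n = len(x)
    return (f_lower, f_upper), (f_lower ** (1.0 / n), f_upper ** (1.0 / n))

# Example: a 10-dimensional input evaluated against one rule whose antecedent
# centres sit around 0.5, so every per-dimension membership is below 1.
n = 10
x = np.linspace(0.0, 1.0, n)
m1, m2 = np.full(n, 0.45), np.full(n, 0.55)   # uncertain mean interval
sigma = np.full(n, 0.25)
raw, rescaled = rule_firing_strength(x, m1, m2, sigma)
print(raw, rescaled)  # raw product shrinks with dimension; rescaled interval stays usable
```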
Related papers
- In-Context Convergence of Transformers [63.04956160537308]
We study the learning dynamics of a one-layer transformer with softmax attention trained via gradient descent.
For data with imbalanced features, we show that the learning dynamics take a stage-wise convergence process.
arXiv Detail & Related papers (2023-10-08T17:55:33Z)
- TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series [57.4208255711412]
Building on copula theory, we propose a simplified objective for the recently introduced transformer-based attentional copulas (TACTiS).
We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks.
arXiv Detail & Related papers (2023-10-02T16:45:19Z)
- Variational Density Propagation Continual Learning [0.0]
Deep Neural Networks (DNNs) deployed to the real world are regularly subject to out-of-distribution (OoD) data.
This paper proposes a framework for adapting to data distribution drift modeled by benchmark Continual Learning datasets.
arXiv Detail & Related papers (2023-08-22T21:51:39Z)
- Contextually Enhanced ES-dRNN with Dynamic Attention for Short-Term Load Forecasting [1.1602089225841632]
The proposed model is composed of two simultaneously trained tracks: the context track and the main track.
The RNN architecture consists of multiple recurrent layers stacked with hierarchical dilations and equipped with recently proposed attentive recurrent cells.
The model produces both point forecasts and predictive intervals.
arXiv Detail & Related papers (2022-12-18T07:42:48Z)
- An Adaptive and Stability-Promoting Layerwise Training Approach for Sparse Deep Neural Network Architecture [0.0]
This work presents a two-stage adaptive framework for developing deep neural network (DNN) architectures that generalize well for a given training data set.
In the first stage, a layerwise training approach is adopted where a new layer is added each time and trained independently by freezing the parameters in the previous layers (a generic freeze-and-grow sketch of this idea appears after this list).
We introduce an epsilon-delta stability-promoting concept as a desirable property for a learning algorithm and show that employing manifold regularization yields an epsilon-delta stability-promoting algorithm.
arXiv Detail & Related papers (2022-11-13T09:51:16Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Semi-supervised Impedance Inversion by Bayesian Neural Network Based on 2-d CNN Pre-training [0.966840768820136]
We improve semi-supervised learning in two respects.
First, replacing the 1-d convolutional layers in the deep learning structure with 2-d CNN layers and 2-d max-pooling layers improves prediction accuracy.
Second, prediction uncertainty can also be estimated by embedding the network into a Bayesian inference framework.
arXiv Detail & Related papers (2021-11-20T14:12:05Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose LocalDrop, a new approach to regularizing neural networks via the local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Predicting Temporal Sets with Deep Neural Networks [50.53727580527024]
We propose an integrated solution based on the deep neural networks for temporal sets prediction.
A unique perspective is to learn element relationships by constructing a set-level co-occurrence graph.
We design an attention-based module to adaptively learn the temporal dependency of elements and sets.
arXiv Detail & Related papers (2020-06-20T03:29:02Z)
- Link Prediction for Temporally Consistent Networks [6.981204218036187]
Link prediction estimates the next relationship in dynamic networks.
The use of an adjacency matrix to represent dynamically evolving networks limits the ability to learn analytically from heterogeneous, sparse, or forming networks.
We propose a new method of canonically representing heterogeneous time-evolving activities as a temporally parameterized network model.
arXiv Detail & Related papers (2020-06-06T07:28:03Z)
- Learn to Predict Sets Using Feed-Forward Neural Networks [63.91494644881925]
This paper addresses the task of set prediction using deep feed-forward neural networks.
We present a novel approach for learning to predict sets with unknown permutation and cardinality.
We demonstrate the validity of our set formulations on relevant vision problems.
arXiv Detail & Related papers (2020-01-30T01:52:07Z)
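Most of the entries above are only short abstract snippets; as one concrete illustration, the Python sketch below shows the freeze-and-grow layerwise training idea described in the adaptive layerwise training entry: a new hidden block is appended and trained while the parameters of earlier blocks stay frozen, and growth stops once the loss stops improving. The network sizes, optimiser, loss and stopping rule are arbitrary assumptions for illustration, not the authors' actual two-stage algorithm.

```python
import torch
import torch.nn as nn

def train_current(model, data, targets, epochs=50, lr=1e-3):
    """Train only the parameters that still require gradients."""
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(data), targets)
        loss.backward()
        opt.step()
    return loss.item()

def grow_layerwise(data, targets, in_dim, hidden, out_dim, max_layers=4, tol=1e-3):
    """Freeze-and-grow layerwise training: append one hidden block at a time,
    train it (plus a fresh output head) with all earlier blocks frozen, and
    stop once the loss improvement falls below a tolerance."""
    blocks = nn.ModuleList([nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())])
    head = nn.Linear(hidden, out_dim)
    prev_loss = float("inf")
    for step in range(max_layers):
        if step > 0:
            # Freeze everything trained so far, then add a new trainable block and head.
            for p in blocks.parameters():
                p.requires_grad_(False)
            blocks.append(nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU()))
            head = nn.Linear(hidden, out_dim)
        model = nn.Sequential(*blocks, head)
        loss = train_current(model, data, targets)
        if prev_loss - loss < tol:
            break
        prev_loss = loss
    return nn.Sequential(*blocks, head)

# Usage on toy regression data (shapes are arbitrary).
x = torch.randn(256, 8)
y = torch.randn(256, 1)
net = grow_layerwise(x, y, in_dim=8, hidden=32, out_dim=1)
```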