Properties and Potential Applications of Random Functional-Linked Types
of Neural Networks
- URL: http://arxiv.org/abs/2304.00957v1
- Date: Mon, 3 Apr 2023 13:25:22 GMT
- Title: Properties and Potential Applications of Random Functional-Linked Types
of Neural Networks
- Authors: Guang-Yong Chen, Yong-Hang Yu, Min Gan, C. L. Philip Chen, Wenzhong
Guo
- Abstract summary: Random functional-linked neural networks (RFLNNs) offer an alternative way of learning in deep structure.
This paper gives some insights into the properties of RFLNNs from the viewpoint of the frequency domain.
We propose a method to generate a BLS network with better performance, and design an efficient algorithm for solving Poisson's equation.
- Score: 81.56822938033119
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Random functional-linked types of neural networks (RFLNNs), e.g., the extreme
learning machine (ELM) and broad learning system (BLS), which avoid suffering
from a time-consuming training process, offer an alternative way of learning in
deep structure. The RFLNNs have achieved excellent performance in various
classification and regression tasks; however, the properties and explanations
of these networks have largely been ignored in previous research. This paper gives
some insights into the properties of RFLNNs from the viewpoint of the frequency
domain, and discovers the presence of a frequency principle in these networks:
they preferentially capture low-frequency components quickly and then fit the
high-frequency components during the training process. These findings are valuable
for understanding the RFLNNs and expanding their applications. Guided by the
frequency principle, we propose a method to generate a BLS network with better
performance, and design an efficient algorithm for solving Poisson's equation in
view of the different frequency principles exhibited by the Jacobi iterative
method and the BLS network.
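To make the setting concrete, below is a minimal sketch (not the authors' implementation) of an ELM-style random functional-linked network on a toy regression problem. The hidden features come from fixed random weights and only the linear readout is solved in closed form, which is what lets RFLNNs avoid an iterative, time-consuming training process; the function names, network width, and ridge parameter are illustrative assumptions.

```python
import numpy as np

def elm_fit(X, y, n_hidden=200, reg=1e-6, rng=None):
    """Fit an ELM-style random functional-linked network.

    Hidden weights are drawn once at random and never trained;
    only the linear readout is obtained by regularized least squares.
    """
    rng = np.random.default_rng(rng)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input-to-hidden weights
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # random nonlinear feature map
    # Closed-form ridge readout: beta = (H^T H + reg*I)^{-1} H^T y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy 1-D regression: a low-frequency plus a small high-frequency component.
X = np.linspace(0.0, 1.0, 400).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0]) + 0.2 * np.sin(20 * np.pi * X[:, 0])
W, b, beta = elm_fit(X, y, rng=0)
print("train MSE:", np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Under the frequency principle described above, a network like this is expected to capture the smooth sin(2πx) component before the oscillatory 0.2·sin(20πx) term; inspecting the FFT of the residual at different hidden-layer widths is one simple way to observe that behaviour. The Jacobi iteration for Poisson's equation behaves in the opposite way, damping high-frequency error first, which is the complementarity the paper exploits.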
Related papers
- Efficient Training of Deep Neural Operator Networks via Randomized Sampling [0.0]
Deep operator network (DeepONet) has demonstrated success in the real-time prediction of complex dynamics across various scientific and engineering applications.
We introduce a random sampling technique to be adopted in the training of DeepONet, aimed at improving the generalization ability of the model while significantly reducing computational time.
Our results indicate that incorporating randomization in the trunk network inputs during training enhances the efficiency and robustness of DeepONet, offering a promising avenue for improving the framework's performance in modeling complex physical systems.
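The summary above does not give implementation details, so the following is only a rough sketch of the central idea: drawing a fresh random subset of trunk-network input coordinates at every training step instead of evaluating the loss on the full output grid. The forward pass is stubbed out, and every name here (deeponet_forward, n_sample, the data shapes) is a placeholder assumption rather than the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n_func input functions sampled at m sensor points (branch inputs),
# with targets given on a dense grid of n_grid output coordinates (trunk inputs).
n_func, m, n_grid = 64, 50, 1000
u_sensors = rng.normal(size=(n_func, m))
y_coords = np.linspace(0.0, 1.0, n_grid)[:, None]
targets = rng.normal(size=(n_func, n_grid))

def deeponet_forward(u, y):
    # Placeholder for branch(u) . trunk(y); a real model would be trained here.
    return np.zeros((u.shape[0], y.shape[0]))

n_sample = 100  # far fewer coordinates than n_grid are used per step
for step in range(3):
    idx = rng.choice(n_grid, size=n_sample, replace=False)  # fresh random subset
    pred = deeponet_forward(u_sensors, y_coords[idx])
    loss = np.mean((pred - targets[:, idx]) ** 2)  # loss only on sampled coordinates
    print(f"step {step}: loss on {n_sample}/{n_grid} coordinates = {loss:.4f}")
```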
arXiv Detail & Related papers (2024-09-20T07:18:31Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Deep Q-network using reservoir computing with multi-layered readout [0.0]
Recurrent neural network (RNN) based reinforcement learning (RL) is used for learning context-dependent tasks.
An approach has been proposed that introduces reservoir computing into the replay-memory framework, training an agent without backpropagation through time (BPTT).
This paper shows that the performance of this method improves by using a multi-layered neural network for the readout layer.
arXiv Detail & Related papers (2022-03-03T00:32:55Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the Π-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of higher-frequency components.
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
- Online learning of windmill time series using Long Short-term Cognitive Networks [58.675240242609064]
The amount of data generated on windmill farms makes online learning the most viable strategy to follow.
We use Long Short-term Cognitive Networks (LSTCNs) to forecast windmill time series in online settings.
Our approach reported the lowest forecasting errors compared with a simple RNN, a Long Short-term Memory network, a Gated Recurrent Unit, and a Hidden Markov Model.
arXiv Detail & Related papers (2021-07-01T13:13:24Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations while the lower-frequency part is assigned cheap operations to relieve the computational burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
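The abstract states the routing principle but not how the split is computed; below is a minimal sketch of one plausible realization, assuming the decision is made per patch from the share of DCT energy in the low-frequency corner. The patch size, the 2x2 low-frequency block, and the 0.9 threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.fft import dctn

def split_by_frequency(patches, low_freq_size=2, threshold=0.9):
    """Route image patches by their DCT energy distribution.

    Patches whose energy sits almost entirely in the low-frequency DCT
    corner go to the cheap branch; the rest go to the expensive branch.
    """
    cheap, expensive = [], []
    for p in patches:
        c = dctn(p, norm="ortho")
        total = np.sum(c ** 2) + 1e-12
        low = np.sum(c[:low_freq_size, :low_freq_size] ** 2)
        (cheap if low / total >= threshold else expensive).append(p)
    return cheap, expensive

# Toy patches: a smooth gradient vs. a checkerboard (high-frequency) pattern.
smooth = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
cheap, expensive = split_by_frequency([smooth, checker])
print(len(cheap), "patch(es) routed to cheap ops,", len(expensive), "to expensive ops")
```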
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that trained networks by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- SRDCNN: Strongly Regularized Deep Convolution Neural Network Architecture for Time-series Sensor Signal Classification Tasks [4.950427992960756]
We present SRDCNN: a Strongly Regularized Deep Convolution Neural Network (DCNN) based deep architecture to perform time-series classification tasks.
The novelty of the proposed approach is that the network weights are regularized by both L1 and L2 norm penalties.
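Only the regularization scheme is described, so here is a generic sketch of a combined L1-plus-L2 weight penalty added to a data term, in the spirit of the regularization above; the mean-squared-error data term, the coefficient values, and the toy weight shapes are assumptions for illustration, not SRDCNN's actual loss.

```python
import numpy as np

def regularized_loss(y_true, y_pred, weights, l1=1e-4, l2=1e-3):
    """Toy data term (MSE) plus L1 and L2 penalties over all weight tensors."""
    data_term = np.mean((y_true - y_pred) ** 2)
    l1_term = l1 * sum(np.abs(w).sum() for w in weights)   # sparsity-inducing term
    l2_term = l2 * sum((w ** 2).sum() for w in weights)    # weight-decay term
    return data_term + l1_term + l2_term

# Toy usage with two random weight tensors standing in for network layers.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(16, 8)), rng.normal(size=(8, 4))]
y_true = rng.normal(size=32)
y_pred = y_true + 0.1 * rng.normal(size=32)
print("penalized loss:", regularized_loss(y_true, y_pred, weights))
```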
arXiv Detail & Related papers (2020-07-14T08:42:39Z)
- R-FORCE: Robust Learning for Random Recurrent Neural Networks [6.285241353736006]
We propose a robust training method to enhance the robustness of RRNNs.
The FORCE learning approach was shown to be applicable even to the challenging task of target-learning.
Our experiments indicate that R-FORCE facilitates significantly more stable and accurate target-learning for a wide class of RRNN.
arXiv Detail & Related papers (2020-03-25T22:08:03Z)