Deep Neural Networks and Neuro-Fuzzy Networks for Intellectual Analysis
of Economic Systems
- URL: http://arxiv.org/abs/2011.05588v1
- Date: Wed, 11 Nov 2020 06:21:08 GMT
- Title: Deep Neural Networks and Neuro-Fuzzy Networks for Intellectual Analysis
of Economic Systems
- Authors: Alexey Averkin and Sergey Yarushev
- Abstract summary: We consider approaches for time series forecasting based on deep neural networks and neuro-fuzzy nets.
This paper also presents an overview of approaches for incorporating rule-based methodology into deep learning neural networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper we consider approaches to time series forecasting based on deep
neural networks and neuro-fuzzy nets. We also give a short review of research on
forecasting with various ANFIS models. Deep Learning has proven to be an effective
method for making highly accurate predictions from complex data sources. We also
propose our own deep learning and neuro-fuzzy network models for this task. Finally,
we show the possibility of using these models for data science tasks. This paper also
presents an overview of approaches for incorporating rule-based methodology into
deep learning neural networks.
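To make the neuro-fuzzy side of the abstract concrete, the sketch below implements a minimal first-order Takagi-Sugeno (ANFIS-style) forecaster on a synthetic series. It is not the authors' model: the synthetic data, the fixed Gaussian membership functions, the choice of 4 lags and 5 rules, and the least-squares fit of the consequent parameters (the LSE half of the classic ANFIS hybrid learning rule) are all illustrative assumptions.

```python
# Minimal ANFIS-style (first-order Takagi-Sugeno) one-step-ahead forecaster.
# Everything here is an illustrative sketch, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series with trend, seasonality and noise (stand-in for an
# economic indicator).
t = np.arange(400)
series = 0.01 * t + np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(t.size)

def lagged(series, n_lags):
    """Supervised framing: each row of X holds n_lags past values, y the next one."""
    X = np.column_stack([series[i:-(n_lags - i)] for i in range(n_lags)])
    return X, series[n_lags:]

n_lags, n_rules = 4, 5
X, y = lagged(series, n_lags)

# Premise layer: one Gaussian membership function per rule, centers spread
# over the observed value range (fixed here; a full ANFIS would tune them).
centers = np.linspace(series.min(), series.max(), n_rules)
sigma = (series.max() - series.min()) / n_rules

def firing_strengths(X):
    """Normalized rule firing strengths, shape (n_samples, n_rules)."""
    mu = np.exp(-(X[:, :, None] - centers[None, None, :]) ** 2 / (2 * sigma ** 2))
    w = mu.prod(axis=1)                  # AND over the lags (product t-norm)
    return w / w.sum(axis=1, keepdims=True)

# Consequent layer: each rule r has a linear model f_r(x) = p_r . x + b_r.
# The network output sum_r wbar_r * f_r(x) is linear in (p_r, b_r), so the
# consequents can be fitted in closed form by least squares.
W = firing_strengths(X)                                          # (n, n_rules)
X1 = np.hstack([X, np.ones((len(X), 1))])                        # lags + bias
design = np.einsum('nr,nd->nrd', W, X1).reshape(len(X), -1)
theta, *_ = np.linalg.lstsq(design, y, rcond=None)

pred = design @ theta                    # one-step-ahead in-sample forecast
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

A full ANFIS would also tune the membership-function centers and widths by backpropagation, which is where the connection to deep learning discussed in the paper comes in.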
Related papers
- Convergence Analysis for Deep Sparse Coding via Convolutional Neural Networks [7.956678963695681]
We introduce a novel class of Deep Sparse Coding (DSC) models.
We derive convergence rates for CNNs in their ability to extract sparse features.
Inspired by the strong connection between sparse coding and CNNs, we explore training strategies to encourage neural networks to learn more sparse features.
arXiv Detail & Related papers (2024-08-10T12:43:55Z)
- A Survey on Statistical Theory of Deep Learning: Approximation, Training Dynamics, and Generative Models [13.283281356356161]
We review the literature on statistical theories of neural networks from three perspectives.
Results on excess risks for neural networks are reviewed.
Papers that attempt to answer "how the neural network finds the solution that can generalize well on unseen data" are reviewed.
arXiv Detail & Related papers (2024-01-14T02:30:19Z)
- Rule-Extraction Methods From Feedforward Neural Networks: A Systematic Literature Review [1.1510009152620668]
Rules offer a transparent and intuitive means of explaining neural networks.
The study specifically addresses feedforward networks with supervised learning and crisp rules.
Future work can extend to other network types, machine learning methods, and fuzzy rule extraction.
arXiv Detail & Related papers (2023-12-20T09:40:07Z)
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective that promotes superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural Networks and Its Mapping Relationship to Deep Neural Networks [7.840247953745616]
Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability.
This paper establishes a precise mathematical mapping between the biological parameters of the Linear Leaky-Integrate-and-Fire (LIF) model/SNNs and the parameters of ReLU-AN/Deep Neural Networks (DNNs); an illustrative rate-coding sketch of this correspondence appears after the related-papers list below.
arXiv Detail & Related papers (2022-05-31T17:02:26Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called the Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- SOCRATES: Towards a Unified Platform for Neural Network Analysis [7.318255652722096]
We aim to build a unified framework for developing techniques to analyze neural networks.
We develop a platform called SOCRATES which supports a standardized format for a variety of neural network models.
Experiment results show that our platform can handle a wide range of network models and properties.
arXiv Detail & Related papers (2020-07-22T05:18:57Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
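The LIF-to-ReLU correspondence noted for the spiking-network entry above can be illustrated with a minimal rate-coding simulation. This is only a sketch of the general idea, not the mapping derived in that paper: the neuron model, all parameter values, and the comparison against a plain ReLU are illustrative assumptions.

```python
# Minimal rate-coding sketch (illustrative assumptions, not the paper's exact
# mapping): a leaky integrate-and-fire (LIF) neuron driven by a constant input
# is silent below threshold and fires at a roughly input-proportional rate
# above it, i.e. its rate curve has the threshold-linear shape of a ReLU.

def lif_firing_rate(i_input, v_th=1.0, tau=0.02, dt=1e-4, t_sim=1.0):
    """Simulate an LIF neuron (reset to 0 after each spike); return spikes/s."""
    v, spikes = 0.0, 0
    for _ in range(int(t_sim / dt)):
        v += (dt / tau) * (i_input - v)   # leaky integration of the input
        if v >= v_th:                     # threshold crossing -> spike, reset
            spikes += 1
            v = 0.0
    return spikes / t_sim

for i_in in (0.0, 0.5, 1.5, 2.0, 4.0, 8.0):
    rate = lif_firing_rate(i_in)
    relu = max(0.0, i_in - 1.0)           # ReLU with the same threshold
    print(f"input={i_in:4.1f}  LIF rate={rate:7.1f} Hz  ReLU(input - v_th)={relu:.1f}")
```

The cited paper derives the exact parameter relationships; this simulation only shows numerically why a rate-coded LIF neuron behaves, to first order, like a thresholded linear unit.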