An improved LogNNet classifier for IoT application
- URL: http://arxiv.org/abs/2105.14412v1
- Date: Sun, 30 May 2021 02:12:45 GMT
- Title: An improved LogNNet classifier for IoT application
- Authors: Hanif Heidari and Andrei Velichko
- Abstract summary: This paper proposes a feed-forward LogNNet neural network that uses a semi-linear Henon-type discrete chaotic map to classify the MNIST-10 dataset.
It is shown that a direct relation exists between the entropy value and the classification accuracy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Internet of Things (IoT) devices suffer from low memory while good
accuracy is needed, so designing suitable algorithms is vital in this subject.
This paper proposes a feed-forward LogNNet neural network that uses a
semi-linear Henon-type discrete chaotic map to classify the MNIST-10 dataset.
The model is composed of a reservoir part and a trainable classifier. The
reservoir part transforms the inputs so as to maximize the classification
accuracy, using a special matrix filling method and a time series generated by
the chaotic map. The parameters of the chaotic map are optimized using particle
swarm optimization with random immigrants. The results show that the proposed
LogNNet/Henon classifier achieves higher accuracy with the same RAM saving as
the original version of LogNNet, and has broad prospects for implementation in
IoT devices. In addition, the relation between entropy and classification
accuracy is investigated: a direct relation exists between the entropy value
and the classification accuracy.
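The reservoir construction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the classical Henon map (a = 1.4, b = 0.3) stands in for the paper's semi-linear variant, and the matrix filling order, layer sizes, and PSO-tuned parameter values are assumptions.

```python
import numpy as np

def henon_series(n, a=1.4, b=0.3, x0=0.1, y0=0.1):
    """Generate n values of the classical Henon map
    x_{k+1} = 1 - a*x_k^2 + y_k,  y_{k+1} = b*x_k."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs[i] = x
    return xs

# One illustrative "matrix filling method": pour the chaotic time
# series into the fixed reservoir weight matrix row by row.
n_features, n_inputs = 25, 785          # flattened 28x28 MNIST image + bias
W = henon_series(n_features * n_inputs).reshape(n_features, n_inputs)

# Reservoir transform of one input; only the small classifier that
# consumes h is trained, the reservoir weights stay fixed.
x = np.random.rand(n_inputs)            # stand-in for a normalized image
h = np.tanh(W @ x)
print(h.shape)                          # (25,)
```

In the paper, the map parameters (here a, b, x0, y0) are the quantities tuned by particle swarm optimization with random immigrants.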
Related papers
- Generalizing Backpropagation for Gradient-Based Interpretability [103.2998254573497]
We show that the gradient of a model is a special case of a more general formulation using semirings.
This observation allows us to generalize the backpropagation algorithm to efficiently compute other interpretable statistics.
arXiv Detail & Related papers (2023-07-06T15:19:53Z)
- Optimality of Message-Passing Architectures for Sparse Graphs [13.96547777184641]
We study the node classification problem on feature-decorated graphs in the sparse setting, i.e., when the expected degree of a node is $O(1)$ in the number of nodes.
We introduce a notion of Bayes optimality for node classification tasks, called local Bayes optimality.
We show that the optimal message-passing architecture interpolates between a standard MLP in the regime of low graph signal and a typical convolution in the regime of high graph signal.
arXiv Detail & Related papers (2023-05-17T17:31:20Z)
- SreaMRAK a Streaming Multi-Resolution Adaptive Kernel Algorithm [60.61943386819384]
Existing implementations of KRR require that all the data is stored in the main memory.
We propose StreaMRAK - a streaming version of KRR.
We present a showcase study on two synthetic problems and the prediction of the trajectory of a double pendulum.
arXiv Detail & Related papers (2021-08-23T21:03:09Z)
- Motor Imagery Classification based on CNN-GRU Network with Spatio-Temporal Feature Representation [22.488536453952964]
Recently, various deep neural networks have been applied to electroencephalogram (EEG) signals.
EEG is a brain signal that can be acquired in a non-invasive way and has a high temporal resolution.
As the EEG signal has a high dimension of classification feature space, appropriate feature extraction methods are needed to improve performance.
arXiv Detail & Related papers (2021-07-15T01:05:38Z)
- Scalable Optimal Transport in High Dimensions for Graph Distances, Embedding Alignment, and More [7.484063729015126]
We propose two effective log-linear time approximations of the cost matrix for optimal transport.
These approximations enable general log-linear time algorithms for entropy-regularized OT that perform well even in complex, high-dimensional spaces.
For graph distance regression we propose the graph transport network (GTN), which combines graph neural networks (GNNs) with enhanced Sinkhorn.
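The entropy-regularized OT solver that such cost-matrix approximations plug into can be sketched with plain Sinkhorn iterations. Here the cost matrix is formed exactly; the paper's log-linear approximations and the GTN architecture are not reproduced, so this only shows the baseline solver they accelerate.

```python
import numpy as np

def sinkhorn(C, r, c, eps=0.1, n_iter=200):
    """Sinkhorn iterations for entropy-regularized optimal transport.

    C: cost matrix; r, c: source/target marginals (each sums to 1).
    Returns a transport plan whose marginals approach r and c.
    """
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(r)
    for _ in range(n_iter):
        v = c / (K.T @ u)                 # scale to match column marginals
        u = r / (K @ v)                   # scale to match row marginals
    return u[:, None] * K * v[None, :]    # transport plan

n, m = 5, 7
rng = np.random.default_rng(0)
C = rng.random((n, m))
r = np.full(n, 1.0 / n)
c = np.full(m, 1.0 / m)
P = sinkhorn(C, r, c)
print(P.sum())                            # ≈ 1.0: P couples r and c
```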
arXiv Detail & Related papers (2021-07-14T17:40:08Z)
- Learning Optical Flow from a Few Matches [67.83633948984954]
We show that the dense correlation volume representation is redundant and accurate flow estimation can be achieved with only a fraction of elements in it.
Experiments show that our method can reduce computational cost and memory use significantly, while maintaining high accuracy.
arXiv Detail & Related papers (2021-04-05T21:44:00Z)
- Improving predictions of Bayesian neural nets via local linearization [79.21517734364093]
We argue that the Gauss-Newton approximation should be understood as a local linearization of the underlying Bayesian neural network (BNN)
Because we use this linearized model for posterior inference, we should also predict using this modified model instead of the original one.
We refer to this modified predictive as "GLM predictive" and show that it effectively resolves common underfitting problems of the Laplace approximation.
arXiv Detail & Related papers (2020-08-19T12:35:55Z)
- A Neural Network Approach for Online Nonlinear Neyman-Pearson Classification [3.6144103736375857]
We propose a novel Neyman-Pearson (NP) classifier that is both online and nonlinear, the first of its kind in the literature.
The proposed classifier operates on a binary labeled data stream in an online manner, and maximizes the detection power subject to a user-specified and controllable false positive rate.
Our algorithm is suited to large-scale data applications and provides decent false-positive-rate controllability with real-time processing.
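A generic stochastic-threshold sketch of the online NP idea, not the authors' actual algorithm: on each negative sample, nudge the decision threshold so the empirical false positive rate tracks a target alpha. The score distributions and step size below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.1            # target false positive rate
tau, eta = 0.0, 0.05   # decision threshold and its step size
fp, negs = 0, 0

# Hypothetical stream: negative scores ~ N(0,1), positive scores ~ N(2,1).
for _ in range(20000):
    is_pos = rng.random() < 0.5
    score = rng.normal(2.0 if is_pos else 0.0, 1.0)
    pred = score > tau                  # online prediction
    if not is_pos:
        negs += 1
        fp += int(pred)
        # Stochastic update: raise tau when the empirical FPR runs
        # above alpha, lower it otherwise.
        tau += eta * (float(pred) - alpha)

print(fp / negs)                        # hovers near the target alpha
```

The update drives the fraction of negatives scored above tau toward alpha, which is the false-positive-rate control that the NP criterion requires.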
arXiv Detail & Related papers (2020-06-14T20:00:25Z)
- Neural Network for Low-Memory IoT Devices and MNIST Image Recognition Using Kernels Based on Logistic Map [0.0]
This study presents a neural network which uses filters based on logistic mapping (LogNNet)
LogNNet has a feedforward network structure, but possesses the properties of reservoir neural networks.
The proposed neural network can be used in implementations of artificial intelligence based on constrained devices with limited memory.
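The memory saving in this scheme comes from never storing the filter weights: they can be regenerated from the logistic map on demand. A minimal sketch of that idea, with illustrative (not tuned) parameter values r and s0:

```python
def logistic_features(x, n_filters, r=3.9, s0=0.4):
    """Compute reservoir features WITHOUT materializing the weight
    matrix: every weight is regenerated on the fly from the logistic
    map s_{k+1} = r * s_k * (1 - s_k), so only (r, s0) and the small
    trainable output layer need to live in device memory."""
    feats = []
    s = s0
    for _ in range(n_filters):
        acc = 0.0
        for xi in x:
            s = r * s * (1.0 - s)   # next chaotic weight
            acc += s * xi
        feats.append(acc)
    return feats

image = [0.5] * 785                 # stand-in for a flattened, normalized image
feats = logistic_features(image, 25)
print(len(feats))                   # 25
```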
arXiv Detail & Related papers (2020-06-04T12:55:17Z)
- RAIN: A Simple Approach for Robust and Accurate Image Classification Networks [156.09526491791772]
It has been shown that the majority of existing adversarial defense methods achieve robustness at the cost of sacrificing prediction accuracy.
This paper proposes a novel preprocessing framework, which we term Robust and Accurate Image classificatioN (RAIN)
RAIN applies randomization over inputs to break the ties between the model forward prediction path and the backward gradient path, thus improving the model robustness.
We conduct extensive experiments on the STL10 and ImageNet datasets to verify the effectiveness of RAIN against various types of adversarial attacks.
arXiv Detail & Related papers (2020-04-24T02:03:56Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework to such non-trivial ERGs that result in dyadic independence (i.e., edge independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.