Neural Network for Low-Memory IoT Devices and MNIST Image Recognition Using Kernels Based on Logistic Map
- URL: http://arxiv.org/abs/2006.02824v2
- Date: Fri, 4 Sep 2020 03:42:46 GMT
- Title: Neural Network for Low-Memory IoT Devices and MNIST Image Recognition Using Kernels Based on Logistic Map
- Authors: Andrei Velichko
- Abstract summary: This study presents a neural network that uses filters based on the logistic map (LogNNet).
LogNNet has a feedforward network structure but possesses the properties of reservoir neural networks.
The proposed network can be used in artificial-intelligence implementations on constrained devices with limited memory.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study presents a neural network that uses filters based on the logistic map (LogNNet). LogNNet has a feedforward network structure but possesses the properties of reservoir neural networks. The input weight matrix, generated by recurrent iteration of the logistic map, forms the kernels that transform the input space into a higher-dimensional feature space. Recognition of handwritten MNIST-10 digits is most effective when the logistic map operates in its chaotic regime, and a correlation between classification accuracy and the value of the Lyapunov exponent was obtained. An advantage of implementing LogNNet on IoT devices is the significant memory savings. At the same time, LogNNet has a simple algorithm and performance indicators comparable to those of the best resource-efficient algorithms currently available. The presented network architecture uses an array of weights with a total memory size from 1 to 29 kB and achieves a classification accuracy of 80.3-96.3%. Memory is saved because the processor sequentially computes the required weight coefficients during network operation from the analytical equation of the logistic map, x_{n+1} = r * x_n * (1 - x_n), so the full weight matrix never has to be stored. The proposed neural network can be used in artificial-intelligence implementations on constrained devices with limited memory, which are integral building blocks of ambient intelligence in modern IoT environments. From a research perspective, LogNNet can contribute to understanding the fundamental question of how chaos influences the behavior of reservoir-type neural networks.
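Since the key trick is regenerating weights from the logistic map rather than storing them, a minimal Python sketch of that idea follows. The function names, the feature count of 100, r = 3.9, the (0,1) to (-1,1) rescaling, and the tanh nonlinearity are illustrative assumptions, not the author's exact implementation.

```python
import numpy as np

def logistic_project(x_in, n_features, r=3.9, x0=0.1):
    """Compute h = W_in @ x_in while regenerating each weight of W_in
    on the fly from the logistic map x_{n+1} = r * x_n * (1 - x_n).
    Only the scalar map state x lives in memory, never the full
    matrix, so the weight array need not be stored."""
    x = x0
    h = np.zeros(n_features)
    for i in range(n_features):
        s = 0.0
        for v in x_in:
            x = r * x * (1 - x)         # next weight from the map
            s += (2.0 * x - 1.0) * v    # rescale (0,1) -> (-1,1), accumulate
        h[i] = s
    return np.tanh(h)                   # reservoir-style nonlinear features

def lyapunov_exponent(r, x0=0.1, n=10000, burn_in=100):
    """Estimate the Lyapunov exponent of the logistic map,
    lambda = <ln |r * (1 - 2x)|>; a positive value marks the chaotic
    regime in which the paper reports the best MNIST accuracy."""
    x = x0
    for _ in range(burn_in):            # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1 - x)
    return total / n

image = np.random.rand(28 * 28)         # stand-in for a flattened MNIST digit
features = logistic_project(image, n_features=100)
print(features.shape, lyapunov_exponent(3.9) > 0)   # (100,) True
```

The inner loop is the memory-saving mechanism described in the abstract: each weight exists only for the single multiply-accumulate in which it is used.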
Related papers
- EvSegSNN: Neuromorphic Semantic Segmentation for Event Data [0.6138671548064356]
We introduce an end-to-end, biologically inspired semantic segmentation approach that combines Spiking Neural Networks with event cameras.
EvSegSNN is a biologically plausible encoder-decoder U-shaped architecture built on Parametric Leaky Integrate-and-Fire (PLIF) neurons; a minimal LIF update is sketched below.
Experiments conducted on DDD17 demonstrate that EvSegSNN outperforms the closest state-of-the-art model in terms of MIoU.
arXiv Detail & Related papers (2024-06-20T10:36:24Z)
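For readers unfamiliar with the PLIF neurons named above, here is a hedged sketch of a discrete-time leaky integrate-and-fire update; in PLIF the time constant tau is learnable, while here it is a plain argument, and the hard reset and threshold values are illustrative, not the paper's exact formulation.

```python
import numpy as np

def lif_step(v, x, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """One discrete-time Leaky Integrate-and-Fire update for a layer
    of neurons: leak the membrane toward v_reset, integrate input x,
    emit spikes where the threshold is crossed, then reset."""
    v = v + (x - (v - v_reset)) / tau               # leaky integration
    spikes = (v >= v_threshold).astype(np.float32)  # fire on crossing
    v = np.where(spikes > 0, v_reset, v)            # hard reset of fired units
    return spikes, v

# Run a small layer of 4 neurons over 5 time steps of random input.
v = np.zeros(4)
for t in range(5):
    out, v = lif_step(v, np.random.rand(4))
    print(t, out)
```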
- Heterogeneous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without huge computational overhead; a minimal sketch follows below.
We demonstrate our approach on various image- and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
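As a hedged illustration of the memory-token idea in the entry above, the sketch below appends learnable tokens to the keys and values of single-head attention so every query can also read from a shared memory; the shapes, names, and single-head setup are assumptions for illustration only.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attend_with_memory(queries, keys, values, mem_k, mem_v):
    """Single-head attention where memory tokens (mem_k, mem_v) are
    concatenated to the keys/values; in a trained model these tokens
    would be learnable parameters. Shapes: queries (n, d), mem_k (m, d)."""
    k = np.concatenate([keys, mem_k], axis=0)
    v = np.concatenate([values, mem_v], axis=0)
    scores = queries @ k.T / np.sqrt(queries.shape[-1])
    return softmax(scores) @ v

n, m, d = 6, 3, 8
q = np.random.randn(n, d)
out = attend_with_memory(q, q, q, np.random.randn(m, d), np.random.randn(m, d))
print(out.shape)   # (6, 8): each output mixes sequence and memory tokens
```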
- Set-based Neural Network Encoding Without Weight Tying [91.37161634310819]
We propose a neural network weight encoding method for network property prediction.
Our approach can encode neural networks drawn from a model zoo of mixed architectures.
We introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture prediction.
arXiv Detail & Related papers (2023-05-26T04:34:28Z)
- CondenseNeXt: An Ultra-Efficient Deep Neural Network for Embedded Systems [0.0]
A Convolutional Neural Network (CNN) is a class of Deep Neural Network (DNN) widely used in the analysis of visual images captured by an image sensor.
In this paper, we propose a new variant of the deep convolutional neural network architecture that improves the performance of existing CNN architectures for real-time inference on embedded systems.
arXiv Detail & Related papers (2021-12-01T18:20:52Z)
- Logsig-RNN: a novel network for robust and efficient skeleton-based action recognition [3.775860173040509]
We propose a novel module, namely Logsig-RNN, which combines a log-signature layer with recurrent-type neural networks (RNNs); a minimal log-signature computation is sketched below.
In particular, we achieve state-of-the-art accuracy on the Chalearn2013 gesture data by combining simple path transformation layers with the Logsig-RNN.
arXiv Detail & Related papers (2021-10-25T14:47:15Z)
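For context on the log-signature layer named above, here is a hand-rolled depth-2 log-signature (path increments plus Lévy area) for a piecewise-linear path; the actual Logsig-RNN layer likely uses a dedicated signature library and higher truncation depths, so treat this as an illustrative sketch only.

```python
import numpy as np

def logsig_level2(path):
    """Depth-2 log-signature of a piecewise-linear path, shape (T, d).
    Level 1 is the total displacement; level 2 is the Levy area
    A_ij = 0.5 * integral(x_i dx_j - x_j dx_i)."""
    dx = np.diff(path, axis=0)            # segment increments, (T-1, d)
    level1 = dx.sum(axis=0)               # total displacement, (d,)
    mid = 0.5 * (path[:-1] + path[1:])    # segment midpoints
    # For a linear segment, integral of x_i dx_j equals mid_i * dx_j.
    S = np.einsum('ti,tj->ij', mid, dx)
    levy_area = 0.5 * (S - S.T)           # antisymmetric part
    return level1, levy_area

path = np.cumsum(np.random.randn(50, 3), axis=0)  # a random 3-D path
disp, area = logsig_level2(path)
print(disp.shape, area.shape)                     # (3,) (3, 3)
```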
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme that uses {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks; a minimal decomposition is sketched below.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
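To make the {-1, +1} decomposition concrete, the sketch below writes an odd-integer quantized weight matrix as a weighted sum of binary {-1, +1} matrices, each of which could run as a cheap binary branch; the paper's exact encoding may differ, and the odd-level quantizer is an assumption of this illustration.

```python
import numpy as np

def decompose_pm1(W_q, bits):
    """Decompose an odd-integer quantized weight matrix W_q (values in
    {-(2**bits - 1), ..., -1, +1, ..., 2**bits - 1}) into `bits` binary
    {-1, +1} matrices B_i with W_q = sum_i 2**i * B_i."""
    W = W_q.astype(np.int64).copy()
    branches = []
    for i in reversed(range(bits)):
        B = np.where(W > 0, 1, -1)        # greedy sign of the remainder
        W = W - (2 ** i) * B              # remainder stays odd and shrinks
        branches.append((2 ** i, B))
    return branches

# 2-bit example: levels {-3, -1, +1, +3} (zero is not representable).
rng = np.random.default_rng(0)
W_q = rng.choice([-3, -1, 1, 3], size=(4, 4))
branches = decompose_pm1(W_q, bits=2)
W_rec = sum(c * B for c, B in branches)
assert np.array_equal(W_rec, W_q)         # exact multi-branch reconstruction
```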
- An improved LogNNet classifier for IoT application [0.0]
This paper proposes a feedforward LogNNet neural network that uses a semi-linear Henon-type discrete chaotic map to classify the MNIST-10 dataset.
It is shown that a direct relation exists between the entropy of the map and the classification accuracy; the Henon map itself is sketched below.
arXiv Detail & Related papers (2021-05-30T02:12:45Z)
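The entry above swaps the logistic map for a Henon-type map; as a hedged illustration, the sketch below fills a weight matrix from the classic Henon map (the paper uses a modified "semi-linear" variant, and the parameters a = 1.4, b = 0.3 are the textbook chaotic values, not necessarily the paper's).

```python
import numpy as np

def henon_weights(rows, cols, a=1.4, b=0.3, x0=0.1, y0=0.1):
    """Fill a weight matrix from the classic Henon map
        x_{n+1} = 1 - a * x_n**2 + y_n,   y_{n+1} = b * x_n.
    As with the logistic-map kernels, only the two scalars (x, y)
    persist between weights, so the matrix can be regenerated on demand."""
    x, y = x0, y0
    W = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            x, y = 1 - a * x * x + y, b * x   # both updates use the old x
            W[i, j] = x
    return W

W = henon_weights(8, 8)
print(W.min(), W.max())   # bounded chaotic values, roughly |x| < 1.3
```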
- Recognition of handwritten MNIST digits on low-memory 2 Kb RAM Arduino board using LogNNet reservoir neural network [0.0]
The presented algorithm for recognizing handwritten digits of the MNIST database, built on the LogNNet reservoir neural network, reaches a recognition accuracy of 82%.
The simple structure of the algorithm, with appropriate training, can be adapted for wide practical use, for example in mobile biosensors for early diagnosis of adverse events in medicine.
arXiv Detail & Related papers (2021-04-20T18:16:23Z)
- Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the gradient flow of the loss function.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks; the simplest sign-based strategy is sketched below.
We show that, through careful design of the models and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
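As a minimal sketch of one binarization strategy for the entry above, the code below applies XNOR-Net-style sign binarization with a per-layer scaling factor to a GCN-style propagation; the paper evaluates several strategies, and the layer form, scaling choice, and toy graph here are illustrative assumptions.

```python
import numpy as np

def binarize(W):
    """XNOR-Net-style binarization: replace W by alpha * sign(W),
    where alpha = mean(|W|) minimizes the L2 reconstruction error."""
    alpha = np.abs(W).mean()
    return alpha, np.where(W >= 0, 1.0, -1.0)

def binary_gcn_layer(A_hat, H, W):
    """One GCN-style propagation H' = relu(A_hat @ H @ (alpha * B)),
    with the dense weight W replaced by its binary surrogate B."""
    alpha, B = binarize(W)
    return np.maximum(A_hat @ H @ B * alpha, 0.0)

# Toy graph: 5 nodes, 4 input features, 3 output features.
rng = np.random.default_rng(1)
A_hat = np.eye(5) + rng.random((5, 5)) * 0.1   # stand-in normalized adjacency
H = rng.standard_normal((5, 4))
W = rng.standard_normal((4, 3))
print(binary_gcn_layer(A_hat, H, W).shape)     # (5, 3)
```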
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.