Recognition of handwritten MNIST digits on low-memory 2 Kb RAM Arduino
board using LogNNet reservoir neural network
- URL: http://arxiv.org/abs/2105.02953v1
- Date: Tue, 20 Apr 2021 18:16:23 GMT
- Title: Recognition of handwritten MNIST digits on low-memory 2 Kb RAM Arduino
board using LogNNet reservoir neural network
- Authors: Y. A. Izotov, A. A. Velichko, A. A. Ivshin and R. E. Novitskiy
- Abstract summary: The presented algorithm for recognizing handwritten digits of the MNIST database, built on the LogNNet reservoir neural network, achieves a recognition accuracy of 82%.
The simple structure of the algorithm, with appropriate training, can be adapted for a wide range of practical applications, for example, creating mobile biosensors for early diagnosis of adverse events in medicine.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The presented compact algorithm for recognizing handwritten digits
of the MNIST database, built on the LogNNet reservoir neural network, achieves
a recognition accuracy of 82%. The algorithm was tested on a low-memory
Arduino board with a low-power microcontroller and 2 KB of static RAM. The
dependence of recognition accuracy and recognition time on the number of
neurons in the reservoir was investigated. Memory profiling shows that the
algorithm stores all the necessary information in RAM, without additional
data storage, and operates on original images without preliminary processing.
The simple structure of the algorithm, with appropriate training, can be
adapted for a wide range of practical applications, for example, creating
mobile biosensors for early diagnosis of adverse events in medicine. The
results are important for implementing artificial intelligence on
resource-constrained peripheral IoT devices and for edge computing.
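The key memory trick in LogNNet is that the reservoir weight matrix is never
stored: its entries are regenerated on the fly from the chaotic logistic map
x_{n+1} = r * x_n * (1 - x_n), so only the input image, the reservoir
activations, and the small trained output layer need to live in RAM. Below is
a minimal C++ sketch of this inference scheme; the reservoir size N = 16, the
map parameters r = 3.9 and x_0 = 0.1, and the zero-initialized output weights
W_out are illustrative placeholders, not the trained values from the paper.

```cpp
#include <cstdint>
#include <cmath>

constexpr int IN  = 28 * 28; // MNIST image fed as a flat vector
constexpr int RES = 16;      // reservoir size (illustrative)
constexpr int OUT = 10;      // digit classes

// Trained output weights would normally be kept in flash (PROGMEM on AVR);
// zero-initialized placeholders here.
float W_out[OUT][RES + 1] = {};

// Logistic map used as a deterministic pseudo-random weight generator:
// restarting it from the same seed reproduces the same "matrix" every call.
struct LogisticMap {
    float x, r;
    float next() { x = r * x * (1.0f - x); return 2.0f * x - 1.0f; } // scale to [-1, 1]
};

int predict(const uint8_t img[IN]) {
    float h[RES + 1];            // reservoir activations plus bias
    LogisticMap map{0.1f, 3.9f}; // seed and r are illustrative
    for (int i = 0; i < RES; ++i) {
        float s = 0.0f;
        for (int j = 0; j < IN; ++j)
            s += map.next() * (img[j] / 255.0f); // weight generated on the fly
        h[i] = tanhf(s / IN);    // squashing keeps activations bounded
    }
    h[RES] = 1.0f;               // bias input to the output layer
    int best = 0;
    float bestScore = -1e30f;
    for (int k = 0; k < OUT; ++k) {
        float s = 0.0f;
        for (int i = 0; i <= RES; ++i) s += W_out[k][i] * h[i];
        if (s > bestScore) { bestScore = s; best = k; }
    }
    return best;
}
```

With these illustrative sizes the working set is roughly 784 B for the image,
68 B for the activations, and 680 B for the output weights, about 1.5 KB in
total, which is consistent with the paper's claim that everything fits in
2 KB of SRAM.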
Related papers
- Enhancing Length Extrapolation in Sequential Models with Pointer-Augmented Neural Memory [66.88278207591294]
We propose Pointer-Augmented Neural Memory (PANM) to help neural networks understand and apply symbol processing to new, longer sequences of data.
PANM integrates an external neural memory that uses novel physical addresses and pointer manipulation techniques to mimic human and computer symbol processing abilities.
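A generic picture of pointer-style memory access, as opposed to purely
content-based attention: slots have explicit integer addresses and the
controller manipulates the address rather than the content, the way a program
walks an array. The sketch below shows this generic idea only; PANM's actual
addressing is learned and differentiable.

```cpp
#include <array>
#include <cstddef>

// Memory with explicit physical addresses and a manipulable pointer.
// Symbol processing such as copying or reversing a sequence reduces to
// pointer arithmetic plus dereferencing, independent of sequence length.
template <std::size_t SLOTS, std::size_t DIM>
struct PointerMemory {
    std::array<std::array<float, DIM>, SLOTS> slot{};
    std::size_t ptr = 0;                        // current physical address

    const std::array<float, DIM>& deref() const { return slot[ptr]; }
    void advance() { ptr = (ptr + 1) % SLOTS; } // step to the next address
    void jump(std::size_t addr) { ptr = addr % SLOTS; }
};
```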
arXiv Detail & Related papers (2024-04-18T03:03:46Z)
- Pruning random resistive memory for optimizing analogue AI [54.21621702814583]
AI models present unprecedented challenges to energy consumption and environmental sustainability.
One promising solution is to revisit analogue computing, a technique that predates digital computing.
Here, we report a universal solution, software-hardware co-design using structural plasticity-inspired edge pruning.
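Structural pruning can be pictured as keeping the useful edges of a weight
matrix the hardware already provides at random and masking off the rest. The
toy sketch below uses a plain magnitude score and a global keep fraction;
both are illustrative stand-ins, not the paper's plasticity-inspired rule.

```cpp
#include <cstdint>
#include <vector>
#include <algorithm>
#include <cmath>

// Build a binary mask keeping the top keepFrac of edges by |weight|.
std::vector<uint8_t> pruneMask(const std::vector<float>& w, float keepFrac) {
    if (w.empty()) return {};
    std::vector<float> mag(w.size());
    for (std::size_t i = 0; i < w.size(); ++i) mag[i] = std::fabs(w[i]);
    std::vector<float> sorted = mag;
    std::sort(sorted.begin(), sorted.end());
    std::size_t cut = static_cast<std::size_t>((1.0f - keepFrac) * sorted.size());
    float thr = sorted[std::min(cut, sorted.size() - 1)];
    std::vector<uint8_t> mask(w.size());
    for (std::size_t i = 0; i < w.size(); ++i) mask[i] = mag[i] >= thr;
    return mask;
}

// Masked dot product: pruned edges contribute nothing, so on an analogue
// substrate they draw no read current.
float dotMasked(const float* w, const uint8_t* m, const float* x, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i) if (m[i]) s += w[i] * x[i];
    return s;
}
```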
arXiv Detail & Related papers (2023-11-13T08:59:01Z)
- Pex: Memory-efficient Microcontroller Deep Learning through Partial Execution [11.336229510791481]
We discuss a novel execution paradigm for microcontroller deep learning.
It modifies the execution of neural networks to avoid materialising full buffers in memory.
This is achieved by exploiting the properties of operators, which can consume/produce a fraction of their input/output at a time.
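Concretely, if each operator can run on a slice of its input, a chain of
operators can be executed chunk by chunk so the full intermediate buffer
never exists. A hedged sketch for a two-operator 1-D chain is below; the
operators and chunk size are invented for illustration.

```cpp
#include <algorithm>
#include <cmath>

constexpr int CHUNK = 32;

// y = relu(scale * x), executed partially: peak intermediate storage is
// one CHUNK-sized buffer instead of a full n-element tensor.
void chainStreamed(const float* x, float* y, int n, float scale) {
    float buf[CHUNK]; // the only intermediate ever materialised
    for (int off = 0; off < n; off += CHUNK) {
        int m = std::min(CHUNK, n - off);
        for (int i = 0; i < m; ++i) buf[i] = scale * x[off + i];          // op 1, partial
        for (int i = 0; i < m; ++i) y[off + i] = std::fmax(0.0f, buf[i]); // op 2, partial
    }
}
```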
arXiv Detail & Related papers (2022-11-30T18:47:30Z)
- Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy on such tasks, but their implementation on conventional embedded solutions remains computationally expensive and energy intensive.
We propose a new benchmark for computing tactile pattern recognition at the edge through letters reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then we deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy efficient.
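For context, the basic unit of such recurrent SNNs is the leaky
integrate-and-fire (LIF) neuron; one discrete-time update looks roughly like
the sketch below, with an arbitrary illustrative leak factor and threshold
rather than the benchmark's trained parameters.

```cpp
#include <cstdint>

// One time step of a leaky integrate-and-fire neuron: the membrane
// potential decays, integrates the input current, and emits a binary
// spike (with reset) when it crosses the threshold.
struct LifNeuron {
    float v = 0.0f;                          // membrane potential
    static constexpr float decay = 0.9f;     // illustrative leak factor
    static constexpr float threshold = 1.0f; // illustrative firing threshold

    std::uint8_t step(float inputCurrent) {
        v = decay * v + inputCurrent;
        if (v >= threshold) { v = 0.0f; return 1; } // spike and reset
        return 0;
    }
};
```

During training, the hard threshold is replaced by a smooth surrogate
gradient so that back-propagation through time can flow through the spikes,
which is the method the summary refers to.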
arXiv Detail & Related papers (2022-05-30T14:30:45Z)
- Experimentally realized memristive memory augmented neural network [0.0]
Lifelong on-device learning is a key challenge for machine intelligence.
Memory-augmented neural networks have been proposed to achieve this goal, but the memory module must be stored in off-chip memory.
We implement the entire memory augmented neural network architecture in a fully integrated memristive crossbar platform.
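The reason a crossbar helps is that an entire matrix-vector product happens
in one parallel analogue read: voltages applied to the rows produce column
currents that are, by Ohm's and Kirchhoff's laws, dot products with the
stored conductances. An idealized software model of that read (noise and
device non-idealities omitted) is sketched below.

```cpp
#include <cstddef>
#include <vector>

// Idealized memristive crossbar read: I_c = sum_r G[r][c] * V[r].
// In hardware all columns settle simultaneously; this loop is the
// sequential software equivalent.
std::vector<float> crossbarRead(const std::vector<std::vector<float>>& G,
                                const std::vector<float>& v) {
    std::vector<float> i(G.empty() ? 0 : G[0].size(), 0.0f);
    for (std::size_t r = 0; r < G.size(); ++r)
        for (std::size_t c = 0; c < G[r].size(); ++c)
            i[c] += G[r][c] * v[r];
    return i;
}
```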
arXiv Detail & Related papers (2022-04-15T11:52:30Z)
- MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning [72.80896338009579]
We find that the memory bottleneck is due to the imbalanced memory distribution in convolutional neural network (CNN) designs.
We propose a generic patch-by-patch inference scheduling, which significantly cuts down the peak memory.
We automate the process with neural architecture search to jointly optimize the neural architecture and inference scheduling, leading to MCUNetV2.
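Patch-by-patch scheduling keeps only one spatial tile of the activation map
alive at a time in the memory-heavy early stages. The toy version below does
this for a single 3x3 valid convolution; the tile size and single-layer
setting are simplifications (MCUNetV2 runs several layers per patch and
recomputes the overlapping halo pixels).

```cpp
#include <algorithm>

constexpr int TILE = 8;

// 3x3 valid convolution computed one output tile at a time, so downstream
// stages could consume each tile before the next is produced, bounding
// peak activation memory by the tile size rather than the full map.
void conv3x3Patched(const float* in, int w, int h,
                    const float k[3][3], float* out /* (w-2) x (h-2) */) {
    int ow = w - 2, oh = h - 2;
    for (int ty = 0; ty < oh; ty += TILE)
        for (int tx = 0; tx < ow; tx += TILE) {
            int yEnd = std::min(ty + TILE, oh), xEnd = std::min(tx + TILE, ow);
            for (int y = ty; y < yEnd; ++y)
                for (int x = tx; x < xEnd; ++x) {
                    float s = 0.0f;
                    for (int dy = 0; dy < 3; ++dy)
                        for (int dx = 0; dx < 3; ++dx)
                            s += k[dy][dx] * in[(y + dy) * w + (x + dx)];
                    out[y * ow + x] = s;
                }
        }
}
```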
arXiv Detail & Related papers (2021-10-28T17:58:45Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
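One way to read the scheme: an M-bit weight quantized to odd integer levels
can be written as w = sum_k 2^k * b_k with every digit b_k in {-1, +1}, so a
full dot product becomes a power-of-two-weighted sum of purely binary dot
products that bitwise hardware can accelerate. The self-contained sketch
below is my reading of that decomposition, not the paper's exact formulation.

```cpp
// Decompose an odd integer w in [-(2^M - 1), 2^M - 1] into M digits
// b[k] in {-1, +1} with w = sum_k 2^k * b[k].
// Example, M = 3: 5 = 4 + 2 - 1  ->  b = {-1, +1, +1}.
template <int M>
void decompose(int w, int b[M]) {
    for (int k = M - 1; k >= 0; --k) {
        b[k] = (w >= 0) ? 1 : -1; // greedily move the remainder toward 0
        w -= b[k] * (1 << k);
    }
}

// Dot product <w, x> evaluated as M binary {-1, +1} branches, each scaled
// by its power of two: the multi-branch structure the summary describes.
template <int M, int N>
float quantizedDot(const int w[N], const float x[N]) {
    int b[N][M];
    for (int j = 0; j < N; ++j) decompose<M>(w[j], b[j]);
    float total = 0.0f;
    for (int k = 0; k < M; ++k) {
        float branch = 0.0f;
        for (int j = 0; j < N; ++j) branch += b[j][k] * x[j];
        total += static_cast<float>(1 << k) * branch;
    }
    return total;
}
```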
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- Binarized Neural Networks for Resource-Constrained On-Device Gait Identification [1.933681537640272]
We show that binarized neural networks can act as robust discriminators, maintaining an acceptable level of accuracy while dramatically decreasing memory requirements.
We propose BiPedalNet, a compact CNN that nearly matches the state-of-the-art on the Padova gait dataset, with only 1/32 of the memory overhead.
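The 1/32 factor comes from storing one bit per weight; with {-1, +1}
activations packed the same way, a 64-wide multiply-accumulate collapses to
an XNOR (equivalently XOR) and a popcount. The snippet below is a generic
illustration of that identity, not BiPedalNet's code (C++20 for
std::popcount).

```cpp
#include <bit>
#include <cstdint>

// Dot product of two 64-element {-1, +1} vectors packed one bit per lane
// (bit 1 encodes +1, bit 0 encodes -1):
//   dot = 64 - 2 * popcount(a XOR b)
// since XOR marks exactly the lanes whose signs disagree.
int binaryDot64(std::uint64_t a, std::uint64_t b) {
    return 64 - 2 * std::popcount(a ^ b);
}
```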
arXiv Detail & Related papers (2021-03-30T18:29:23Z)
- Robust High-dimensional Memory-augmented Neural Networks [13.82206983716435]
Memory-augmented neural networks extend neural networks with an explicit memory to overcome the limitations of conventional networks.
Access to this explicit memory occurs via soft read and write operations involving every individual memory entry.
We propose a robust architecture that employs a computational memory unit as the explicit memory performing analog in-memory computation on high-dimensional (HD) vectors.
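A soft read means every memory row contributes to the result, weighted by a
softmax over query-key similarities; it is this all-rows access pattern that
the in-memory hardware parallelizes. A minimal software model is below, with
the dimensions and temperature as illustrative choices.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Soft read over an explicit memory: attention = softmax(M q / T),
// output = attention-weighted sum of all memory rows.
std::vector<float> softRead(const std::vector<std::vector<float>>& memory,
                            const std::vector<float>& query,
                            float temperature = 1.0f) {
    const std::size_t rows = memory.size(), dim = query.size();
    std::vector<float> score(rows);
    float maxS = -1e30f;
    for (std::size_t r = 0; r < rows; ++r) {
        float s = 0.0f; // dot-product similarity (cosine if rows are normalized)
        for (std::size_t d = 0; d < dim; ++d) s += memory[r][d] * query[d];
        score[r] = s / temperature;
        maxS = std::fmax(maxS, score[r]);
    }
    float z = 0.0f;
    for (std::size_t r = 0; r < rows; ++r) { score[r] = std::exp(score[r] - maxS); z += score[r]; }
    std::vector<float> out(dim, 0.0f);
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t d = 0; d < dim; ++d) out[d] += (score[r] / z) * memory[r][d];
    return out;
}
```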
arXiv Detail & Related papers (2020-10-05T12:01:56Z)
- Neural Network for Low-Memory IoT Devices and MNIST Image Recognition Using Kernels Based on Logistic Map [0.0]
This study presents a neural network which uses filters based on the logistic map (LogNNet).
LogNNet has a feedforward network structure, but possesses the properties of reservoir neural networks.
The proposed neural network can be used in implementations of artificial intelligence based on constrained devices with limited memory.
arXiv Detail & Related papers (2020-06-04T12:55:17Z) - One-step regression and classification with crosspoint resistive memory
arrays [62.997667081978825]
High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated by simulations of predicting Boston house prices and of training a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
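In software terms, "one computational step" corresponds to solving the
linear system directly rather than iterating gradient updates; the crosspoint
array performs the equivalent of a normal-equations solve in analogue
circuitry. The sketch below is a conventional digital counterpart for
least-squares regression, offered only as a reference point for what the
circuit computes, not as a model of the circuit itself.

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Direct least-squares: form the normal equations (X^T X) w = X^T y and
// solve them by Gauss-Jordan elimination with partial pivoting.
std::vector<float> leastSquares(const std::vector<std::vector<float>>& X,
                                const std::vector<float>& y) {
    const std::size_t n = X.size(), d = X[0].size();
    std::vector<std::vector<float>> A(d, std::vector<float>(d + 1, 0.0f));
    for (std::size_t i = 0; i < d; ++i) {
        for (std::size_t j = 0; j < d; ++j)
            for (std::size_t k = 0; k < n; ++k) A[i][j] += X[k][i] * X[k][j];
        for (std::size_t k = 0; k < n; ++k) A[i][d] += X[k][i] * y[k];
    }
    for (std::size_t c = 0; c < d; ++c) {
        std::size_t p = c; // pivot row
        for (std::size_t r = c + 1; r < d; ++r)
            if (std::fabs(A[r][c]) > std::fabs(A[p][c])) p = r;
        std::swap(A[c], A[p]);
        if (A[c][c] == 0.0f) continue; // singular column, skip
        for (std::size_t r = 0; r < d; ++r) {
            if (r == c) continue;
            float f = A[r][c] / A[c][c];
            for (std::size_t col = c; col <= d; ++col) A[r][col] -= f * A[c][col];
        }
    }
    std::vector<float> w(d);
    for (std::size_t i = 0; i < d; ++i) w[i] = A[i][i] != 0.0f ? A[i][d] / A[i][i] : 0.0f;
    return w;
}
```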
arXiv Detail & Related papers (2020-05-05T08:00:07Z)