Integrate-and-Fire Neurons for Low-Powered Pattern Recognition
- URL: http://arxiv.org/abs/2106.14596v1
- Date: Mon, 28 Jun 2021 12:08:00 GMT
- Title: Integrate-and-Fire Neurons for Low-Powered Pattern Recognition
- Authors: Florian Bacho and Dominique Chu
- Abstract summary: We introduce a low-powered neuron model, called Integrate-and-Fire, that exploits the charge and discharge properties of a capacitor.
Using parallel and series RC circuits, we developed a trainable neuron model that can be expressed in a recurrent form.
This paper is the full text of the research, presented at the 20th International Conference on Artificial Intelligence and Soft Computing (ICAISC 2021).
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Embedded systems acquire information about the real world from sensors and
process it to make decisions and/or for transmission. In some situations, the
relationship between the data and the decision is complex and/or the amount of
data to transmit is large (e.g. in biologgers). Artificial Neural Networks
(ANNs) can efficiently detect patterns in the input data, which makes them
suitable for decision making or for compressing information before
transmission. However, ANNs require a substantial amount of energy, which
reduces the lifetime of battery-powered devices. Spiking Neural Networks (SNNs)
can therefore improve such systems by providing a way to process sensory data
efficiently without excessive energy consumption. In this work, we introduce a
low-powered neuron model, called Integrate-and-Fire, that exploits the charge
and discharge properties of a capacitor. Using parallel and series RC circuits,
we developed a trainable neuron model that can be expressed in a recurrent
form. Finally, we trained a simulation of the model on an artificially
generated dataset of dog postures and implemented it in hardware, which showed
promising energy characteristics. This paper is the full text of the research,
presented at the 20th International Conference on Artificial Intelligence and
Soft Computing (ICAISC 2021).
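To make the recurrent formulation above concrete, here is a minimal discrete-time sketch of an integrate-and-fire neuron whose decay factor comes from the capacitor's RC discharge, exp(-dt/RC). It illustrates the general technique only, not the authors' implementation, and the parameter values (R, C, threshold, time step) are assumed for the example.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, R=1e6, C=1e-8, v_thresh=1.0, v_reset=0.0):
    """Minimal discrete-time (recurrent) leaky integrate-and-fire neuron.

    The membrane potential decays with the RC time constant of the capacitor
    and integrates the input between spikes:
        v[t] = alpha * v[t-1] + (1 - alpha) * R * i[t],  alpha = exp(-dt / (R * C))
    A spike is emitted and the potential is reset when v crosses v_thresh.
    All parameter values here are illustrative, not taken from the paper.
    """
    alpha = np.exp(-dt / (R * C))        # capacitor discharge factor per step
    v = 0.0
    potentials, spikes = [], []
    for i_t in input_current:
        v = alpha * v + (1.0 - alpha) * R * i_t   # charge/discharge update
        if v >= v_thresh:                          # threshold crossing -> spike
            spikes.append(1)
            v = v_reset                            # reset after firing
        else:
            spikes.append(0)
        potentials.append(v)
    return np.array(potentials), np.array(spikes)

# Example: a constant 2 uA input drives the neuron to fire periodically.
t_steps = 200
current = np.full(t_steps, 2e-6)
v_trace, spike_train = simulate_lif(current)
print("number of spikes:", spike_train.sum())
```

Because the update is a plain recurrence, such a neuron can be unrolled over time like a recurrent network for gradient-based training; the training procedure actually used in the paper may differ.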
Related papers
- Data-Driven Fire Modeling: Learning First Arrival Times and Model Parameters with Neural Networks [12.416949154231714]
We investigate the ability of neural networks to parameterize dynamics in fire science.
In particular, we investigate neural networks that map five key parameters in fire spread to the first arrival time.
For the inverse problem, we quantify the network's sensitivity in estimating each of the key parameters.
arXiv Detail & Related papers (2024-08-16T19:54:41Z)
- Automatic AI Model Selection for Wireless Systems: Online Learning via Digital Twinning [50.332027356848094]
AI-based applications are deployed at intelligent controllers to carry out functionalities like scheduling or power control.
The mapping between context and AI model parameters is ideally done in a zero-shot fashion.
This paper introduces a general methodology for the online optimization of AMS (AI model selection) mappings.
arXiv Detail & Related papers (2024-06-22T11:17:50Z)
- Neuromorphic Split Computing with Wake-Up Radios: Architecture and Design via Digital Twinning [97.99077847606624]
This work proposes a novel architecture that integrates a wake-up radio mechanism within a split computing system consisting of remote, wirelessly connected NPUs.
A key challenge in the design of a wake-up radio-based neuromorphic split computing system is the selection of thresholds for sensing, wake-up signal detection, and decision making.
arXiv Detail & Related papers (2024-04-02T10:19:04Z)
- A Plug-in Tiny AI Module for Intelligent and Selective Sensor Data Transmission [10.174575604689391]
We propose a novel sensing module to equip sensing frameworks with intelligent data transmission capabilities.
We integrate a highly efficient machine learning model placed near the sensor.
This model provides prompt feedback for the sensing system to transmit only valuable data while discarding irrelevant information.
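As a rough sketch of this selective-transmission pattern (generic, not the authors' module), the example below scores each sensor window with a tiny logistic model and forwards only the windows whose score exceeds a threshold; the window size, weights, and threshold are hypothetical.

```python
import numpy as np

def relevance_score(window, weights, bias):
    """Tiny logistic scorer standing in for the near-sensor ML model.
    The weights/bias are hypothetical; a real module would be trained offline."""
    z = float(np.dot(weights, window) + bias)
    return 1.0 / (1.0 + np.exp(-z))

def filter_windows(windows, weights, bias, threshold=0.5):
    """Return only the sensor windows judged valuable enough to transmit."""
    return [w for w in windows if relevance_score(w, weights, bias) >= threshold]

# Example with random data: 10 windows of 8 samples each.
rng = np.random.default_rng(0)
windows = [rng.normal(size=8) for _ in range(10)]
weights, bias = rng.normal(size=8), 0.0
to_transmit = filter_windows(windows, weights, bias)
print(f"transmitting {len(to_transmit)} of {len(windows)} windows")
```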
arXiv Detail & Related papers (2024-02-03T05:41:39Z)
- WaLiN-GUI: a graphical and auditory tool for neuron-based encoding [73.88751967207419]
Neuromorphic computing relies on spike-based, energy-efficient communication.
We develop a tool to identify suitable configurations for neuron-based encoding of sample-based data into spike trains.
The WaLiN-GUI is provided as open source with documentation.
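A minimal illustration of what neuron-based encoding of a sample can look like (generic, not the WaLiN-GUI API): if a sample value drives a leaky integrate-and-fire neuron as a constant current, the membrane follows an RC charging curve, and solving for the threshold crossing gives a time-to-first-spike code. The gain, time constant, and threshold below are assumed values.

```python
import numpy as np

def first_spike_latency(x, tau=20e-3, gain=5.0, v_thresh=1.0):
    """Time-to-first-spike encoding of a scalar sample x.

    With a constant drive, the membrane follows the RC charging curve
    v(t) = gain * x * (1 - exp(-t / tau)); solving v(t) = v_thresh gives the
    first spike time. Inputs too weak to reach threshold never spike (inf).
    Parameters are illustrative, not the tool's defaults.
    """
    drive = gain * np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        latency = -tau * np.log(1.0 - v_thresh / drive)
    return np.where(drive > v_thresh, latency, np.inf)

# Stronger inputs spike earlier; sub-threshold inputs never spike.
print(first_spike_latency([0.1, 0.5, 1.0]))
```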
arXiv Detail & Related papers (2023-10-25T20:34:08Z)
- DYNAP-SE2: a scalable multi-core dynamic neuromorphic asynchronous spiking neural network processor [2.9175555050594975]
We present a brain-inspired platform for prototyping real-time event-based Spiking Neural Networks (SNNs).
The system proposed supports the direct emulation of dynamic and realistic neural processing phenomena such as short-term plasticity, NMDA gating, AMPA diffusion, homeostasis, spike frequency adaptation, conductance-based dendritic compartments and spike transmission delays.
The flexibility to emulate different biologically plausible neural networks, and the chip's ability to monitor both population and single-neuron signals in real time, allow complex models of neural processing to be developed and validated for both basic research and edge-computing applications.
arXiv Detail & Related papers (2023-10-01T03:48:16Z)
- A Spiking Neural Network based on Neural Manifold for Augmenting Intracortical Brain-Computer Interface Data [5.039813366558306]
Brain-computer interfaces (BCIs) transform neural signals in the brain into instructions to control external devices.
With the advent of advanced machine learning methods, the capability of brain-computer interfaces has been enhanced like never before.
Here, we use spiking neural networks (SNNs) as data generators.
arXiv Detail & Related papers (2022-03-26T15:32:31Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- A reconfigurable neural network ASIC for detector front-end data compression at the HL-LHC [0.40690419770123604]
A neural network autoencoder model can be implemented in a radiation tolerant ASIC to perform lossy data compression.
This is the first radiation tolerant on-detector ASIC implementation of a neural network that has been designed for particle physics applications.
arXiv Detail & Related papers (2021-05-04T18:06:23Z)
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is supported by simulations of the prediction of the cost of a house in Boston and the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the McCulloch-Pitts (MP) model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons (see the sketch below).
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
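For contrast, here is a minimal sketch of the MP-style neuron described above, i.e. an activation function applied to a real-valued weighted sum of incoming signals. The weights and inputs are arbitrary example values, and the FT model's transmitter mechanism itself is not reproduced here.

```python
import numpy as np

def mp_neuron(x, w, b, activation=np.tanh):
    """Classical MP-style neuron: an activation function applied to the
    real-valued weighted sum of inputs. The FT model replaces this scalar
    aggregation with a more flexible transmitter mechanism (not shown)."""
    return activation(np.dot(w, x) + b)

# Arbitrary example inputs and weights.
x = np.array([0.2, -0.5, 1.0])
w = np.array([0.4, 0.3, -0.1])
print(mp_neuron(x, w, b=0.1))
```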
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.