Floating-Point Multiplication Using Neuromorphic Computing
- URL: http://arxiv.org/abs/2008.13245v1
- Date: Sun, 30 Aug 2020 19:07:14 GMT
- Title: Floating-Point Multiplication Using Neuromorphic Computing
- Authors: Karn Dubey and Urja Kothari and Shrisha Rao
- Abstract summary: We describe a neuromorphic system that performs IEEE 754-compliant floating-point multiplication.
We study the effect of the number of neurons per bit on accuracy and bit error rate, and estimate the optimal number of neurons needed for each component.
- Score: 3.5450828190071655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neuromorphic computing describes the use of VLSI systems to mimic
neuro-biological architectures and is regarded as a promising alternative
to the traditional von Neumann architecture. Any new computing architecture
would need a system that can perform floating-point arithmetic. In this paper,
we describe a neuromorphic system that performs IEEE 754-compliant
floating-point multiplication. Multiplication is divided into smaller sub-tasks
performed by four components: an Exponent Adder, a Bias Subtractor, a Mantissa
Multiplier, and a Sign OF/UF (sign and overflow/underflow) unit. We study the effect of the number of
neurons per bit on accuracy and bit error rate, and estimate the optimal number
of neurons needed for each component.
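To make the decomposition concrete, here is a minimal conventional (non-spiking) Python sketch of IEEE 754 single-precision multiplication broken into the same sub-tasks the paper names; rounding and subnormals are omitted for brevity, and the helper names are our own, not the paper's.
```python
import struct

def fp32_multiply(a_bits: int, b_bits: int) -> int:
    """Multiply two IEEE 754 single-precision numbers given as raw 32-bit
    patterns, mirroring the paper's decomposition into sub-tasks
    (ordinary integer arithmetic here, not spiking neurons)."""
    # Unpack sign, biased exponent, and mantissa fields.
    sign_a, exp_a, man_a = a_bits >> 31, (a_bits >> 23) & 0xFF, a_bits & 0x7FFFFF
    sign_b, exp_b, man_b = b_bits >> 31, (b_bits >> 23) & 0xFF, b_bits & 0x7FFFFF

    # Sign unit: XOR of the operand signs.
    sign = sign_a ^ sign_b

    # Exponent Adder + Bias Subtractor: e = e_a + e_b - 127.
    exp = exp_a + exp_b - 127

    # Mantissa Multiplier: multiply the significands with the implicit
    # leading 1 restored (24 x 24 -> 48-bit product).
    prod = ((1 << 23) | man_a) * ((1 << 23) | man_b)

    # Normalize: the product of two values in [1, 2) lies in [1, 4).
    if prod & (1 << 47):            # product >= 2: shift right once
        prod >>= 1
        exp += 1
    man = (prod >> 23) & 0x7FFFFF   # truncate back to 23 bits (no rounding)

    # OF/UF check: clamp to infinity / flush to zero on range overflow.
    if exp >= 255:
        return (sign << 31) | (0xFF << 23)   # +/- infinity
    if exp <= 0:
        return sign << 31                    # +/- zero (subnormals ignored)
    return (sign << 31) | (exp << 23) | man

def f2b(x: float) -> int:
    return struct.unpack("<I", struct.pack("<f", x))[0]

def b2f(b: int) -> float:
    return struct.unpack("<f", struct.pack("<I", b))[0]

print(b2f(fp32_multiply(f2b(3.5), f2b(-2.0))))  # -> -7.0
```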
Related papers
- Expressive Power of ReLU and Step Networks under Floating-Point Operations [11.29958155597398]
We show that neural networks using a binary threshold unit or ReLU can memorize any finite set of input/output pairs.
We also show similar results on memorization and universal approximation when floating-point operations use finite bits for both significand and exponent.
arXiv Detail & Related papers (2024-01-26T05:59:40Z)
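As an illustration of this kind of result, the classic one-hidden-layer threshold construction below memorizes any finite set of distinct scalar pairs; it is a textbook sketch, not the paper's construction, and it ignores floating-point rounding effects.
```python
import numpy as np

def step(z):
    # Binary threshold unit: 1 if z >= 0 else 0.
    return (z >= 0).astype(float)

def memorize_1d(xs, ys):
    """One-hidden-layer step network memorizing n distinct scalar pairs,
    with thresholds placed at midpoints between sorted inputs."""
    order = np.argsort(xs)
    xs, ys = np.asarray(xs, float)[order], np.asarray(ys, float)[order]
    thresholds = (xs[:-1] + xs[1:]) / 2   # midpoints between samples
    jumps = np.diff(ys)                   # y_{i+1} - y_i
    def f(x):
        return ys[0] + jumps @ step(np.subtract.outer(x, thresholds)).T
    return f

f = memorize_1d([0.0, 1.0, 2.0], [5.0, -3.0, 7.0])
print(f(np.array([0.0, 1.0, 2.0])))  # [ 5. -3.  7.]
```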
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
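A loose sketch of the leaky-memory idea (our own simplification, not the authors' exact equations): memory units decay with per-unit rates and are written through a nonlinearity of the current memory and input.
```python
import numpy as np

def elm_step(m, x, lam, W_in, W_rec, b):
    """One update of a simplified leaky-memory neuron: memory units decay
    with per-unit timescales lam and are written by a nonlinear function
    of memory and input (a sketch of the ELM idea, not the paper's model)."""
    drive = np.tanh(W_rec @ m + W_in @ x + b)   # nonlinear synaptic drive
    return lam * m + (1.0 - lam) * drive        # leaky integration

rng = np.random.default_rng(0)
d_mem, d_in = 16, 4
lam = rng.uniform(0.8, 0.999, d_mem)            # per-unit decay rates
W_in = rng.normal(size=(d_mem, d_in))
W_rec = rng.normal(size=(d_mem, d_mem)) / 4
b = np.zeros(d_mem)

m = np.zeros(d_mem)
for t in range(100):
    m = elm_step(m, rng.normal(size=d_in), lam, W_in, W_rec, b)
print(m[:4])
```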
- Splitting physics-informed neural networks for inferring the dynamics of integer- and fractional-order neuron models [0.0]
We introduce a new approach for solving forward systems of differential equations using a combination of splitting methods and physics-informed neural networks (PINNs).
The proposed method, splitting PINN, effectively addresses the challenge of applying PINNs to forward dynamical systems.
arXiv Detail & Related papers (2023-04-26T00:11:00Z)
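For orientation, a minimal PINN for a single leaky-neuron ODE dv/dt = -v + I(t) is sketched below in PyTorch; the paper's splitting scheme and fractional-order models go well beyond this toy residual loss.
```python
import torch
import torch.nn as nn

# Small network approximating v(t); architecture is illustrative.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def pinn_loss(t, v0=0.0, I=torch.sin):
    t = t.requires_grad_(True)
    v = net(t)
    # dv/dt via autograd, kept differentiable for training.
    dv = torch.autograd.grad(v, t, torch.ones_like(v), create_graph=True)[0]
    residual = dv - (-v + I(t))                # enforce the ODE
    ic = (net(torch.zeros(1, 1)) - v0) ** 2    # enforce v(0) = v0
    return residual.pow(2).mean() + ic.mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t_colloc = torch.rand(256, 1) * 5.0            # collocation points on [0, 5]
for step in range(2000):
    opt.zero_grad()
    loss = pinn_loss(t_colloc)
    loss.backward()
    opt.step()
```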
- Encoding Integers and Rationals on Neuromorphic Computers using Virtual Neuron [0.0]
We present the virtual neuron as an encoding mechanism for integers and rational numbers.
We show that it can perform an addition operation consuming 23 nJ of energy on average on a mixed-signal memristor-based neuromorphic processor.
arXiv Detail & Related papers (2022-08-15T23:18:26Z)
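A rough sketch of place-value encoding in the spirit of the virtual neuron (our own illustration, not the authors' circuit): each neuron carries one binary place value, and addition reduces to per-place summation with carries.
```python
import numpy as np

def encode(n, bits=8):
    """Encode a non-negative integer across 'bits' neurons,
    one neuron per binary place value."""
    return np.array([(n >> i) & 1 for i in range(bits)])

def add_encoded(a_spikes, b_spikes):
    # Sum spike counts per place value, then propagate carries, mimicking
    # how downstream neurons with threshold 2 would fire.
    total, carry, out = a_spikes + b_spikes, 0, []
    for s in total:
        s += carry
        out.append(s & 1)
        carry = s >> 1
    return np.array(out + [carry])

a, b = 13, 29
result = add_encoded(encode(a), encode(b))
print(int(sum(v << i for i, v in enumerate(result))))  # 42
```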
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in a 180 nm process technology with a two-level population hierarchy.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
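The quadratic integrate-and-fire update is easy to state in integer arithmetic; the sketch below uses illustrative constants (the processor's actual fixed-point format and parameters are design-specific).
```python
def qif_step(v, i_in, v_rest=0, v_crit=64, v_reset=0, v_thresh=127, a=1, shift=6):
    """One integer quadratic integrate-and-fire update:
    dv ~ a * (v - v_rest) * (v - v_crit) + I, scaled by a right shift.
    Below v_crit the quadratic term pulls v down; above it, v runs away
    to threshold and the neuron spikes."""
    dv = (a * (v - v_rest) * (v - v_crit)) >> shift
    v = v + dv + i_in
    if v >= v_thresh:          # spike and reset
        return v_reset, True
    return v, False

v, spikes = 10, []
for t in range(50):
    v, fired = qif_step(v, i_in=20)
    spikes.append(fired)
print(sum(spikes), "spikes in 50 steps")
```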
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer while using fewer parameters, and transfer to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs, with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
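Schematically, adversarial moment-based estimation can be sketched as below, with a learner f and a critic g trained on a toy instrumental-variable problem; the exact objective and regularization in the paper differ from this simplification.
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
f = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # structural function
g = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # adversarial critic
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

# Toy instrumental-variable data: Z instrument, U confounder, X endogenous.
n = 1024
Z = torch.randn(n, 1)
U = torch.randn(n, 1)
X = Z + 0.5 * U
Y = 2.0 * X + U            # true structural effect of X on Y is 2.0

def objective():
    # E[(Y - f(X)) g(Z)] - 0.5 E[g(Z)^2]: maximized over g, this is zero
    # iff the conditional moment E[Y - f(X) | Z] = 0 (up to critic capacity).
    gz = g(Z)
    return ((Y - f(X)) * gz).mean() - 0.5 * gz.pow(2).mean()

for step in range(2000):
    opt_g.zero_grad(); (-objective()).backward(); opt_g.step()  # critic ascends
    opt_f.zero_grad(); objective().backward(); opt_f.step()     # learner descends
```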
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm that our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Separation of Memory and Processing in Dual Recurrent Neural Networks [0.0]
We explore a neural network architecture that stacks a recurrent layer and a feedforward layer that is also connected to the input.
When noise is introduced into the activation function of the recurrent units, these neurons are forced into a binary activation regime that makes the networks behave much as finite automata.
arXiv Detail & Related papers (2020-05-17T11:38:42Z)
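A quick numerical illustration of the mechanism (our own, not the paper's experiment): with Gaussian noise added to the pre-activation, only saturated sigmoid units respond reliably, so training pushes activations toward {0, 1}.
```python
import numpy as np

rng = np.random.default_rng(1)

def output_std(z, sigma, n=100_000):
    # Std of a sigmoid unit's output when Gaussian noise of scale sigma
    # is added to its pre-activation z.
    y = 1.0 / (1.0 + np.exp(-(z + sigma * rng.normal(size=n))))
    return y.std()

for z in (0.0, 2.0, 6.0):
    print(f"z = {z:3.1f}: output std = {output_std(z, sigma=2.0):.3f}")
# Saturated units (|z| large) barely jitter, so reliable computation
# requires near-binary activations.
```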
- Neural Arithmetic Units [84.65228064780744]
Neural networks can approximate complex functions, but they struggle to perform exact arithmetic operations over real numbers.
We present two new neural network components: the Neural Addition Unit (NAU), which can learn exact addition and subtraction, and the Neural Multiplication Unit (NMU), which can multiply subsets of a vector.
Compared with previous neural units, the NAU and NMU converge more consistently, have fewer parameters, learn faster, converge for larger hidden sizes, obtain sparse and meaningful weights, and extrapolate to negative and small values.
arXiv Detail & Related papers (2020-01-14T19:35:04Z)
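The NMU's defining computation is a gated product, y_j = prod_i (W_ij x_i + 1 - W_ij) with W_ij in [0, 1], so each output can multiply an arbitrary subset of its inputs. A small NumPy sketch with hand-set weights for illustration (in practice the weights are learned):
```python
import numpy as np

def nmu(x, W):
    """Neural Multiplication Unit: y_j = prod_i (W_ij * x_i + 1 - W_ij).
    W_ij = 1 includes input i in output product j; W_ij = 0 excludes it."""
    # x: (d_in,), W: (d_in, d_out)
    return np.prod(W * x[:, None] + 1.0 - W, axis=0)

x = np.array([3.0, 4.0, 5.0])
W = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
print(nmu(x, W))  # [12. 15.] -> products of the selected subsets
```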
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.