Analog Neural Computing with Super-resolution Memristor Crossbars
- URL: http://arxiv.org/abs/2105.04614v1
- Date: Mon, 10 May 2021 18:52:44 GMT
- Title: Analog Neural Computing with Super-resolution Memristor Crossbars
- Authors: A. P. James, L. O. Chua
- Abstract summary: Memristor crossbar arrays are used in a wide range of in-memory and neuromorphic computing applications.
This paper presents a technique to improve resolution by building a super-resolution memristor crossbar whose nodes contain multiple memristors.
The wider the range and number of conductance values, the higher the crossbar's resolution.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Memristor crossbar arrays are used in a wide range of in-memory and neuromorphic computing applications. However, memristor devices suffer from non-idealities that cause variability in their conductive states, making it extremely difficult to program them to a desired analog conductance value as the device ages. In theory, a memristor is a nonlinear programmable analog resistor with memory that can take on an infinite number of resistive states. In practice, such memristors are hard to fabricate, and in a crossbar they are confined to a limited set of stable conductance values. The number of conductance levels available to a node in the crossbar is defined as the crossbar's resolution. This paper presents a technique to improve the resolution by building a super-resolution memristor crossbar whose nodes contain multiple memristors, generating an r-simplicial sequence of unique conductance values. The wider the range and the greater the number of conductance values, the higher the crossbar's resolution. This is particularly useful for building analog neural network (ANN) layers, one of the go-to approaches for implementing neuromorphic computations.
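As a rough illustration of the multi-memristor node idea, the sketch below enumerates the distinct conductance values reachable by a node built from r memristors connected in parallel (parallel conductances add, so each node value is a sum of r per-device levels). The per-device level set and the values of r are made-up examples, not parameters from the paper.

```python
from itertools import combinations_with_replacement

def node_conductances(base_levels, r):
    """Distinct node conductances for a node of r parallel memristors.

    Parallel conductances add, so each node value is a sum of r levels
    chosen (with repetition) from the per-device set `base_levels`.
    """
    return sorted({sum(c) for c in combinations_with_replacement(base_levels, r)})

# Hypothetical per-device conductance levels (in siemens); the paper's
# devices and level sets differ.
levels = [0.0, 1e-6, 2e-6, 4e-6]

for r in (1, 2, 3):
    vals = node_conductances(levels, r)
    print(f"r={r}: {len(vals)} distinct node levels")
```

For n base levels, the number of size-r multisets is C(n+r-1, r), the r-simplicial numbers; the number of distinct sums can be smaller whenever two combinations happen to add to the same conductance, which is why the choice of base levels matters.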
Related papers
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions realized by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
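As a minimal, hedged illustration of the objects studied here, the snippet below builds a two-layer ReLU network and computes its path norm, one common bounded norm for such networks; the paper's exact norm and function space may differ.

```python
import numpy as np

# A two-layer ReLU network: f(x) = sum_j a_j * relu(w_j . x).
# The "path norm" sum_j |a_j| * ||w_j|| is one norm used to constrain
# such networks (this framing is my gloss, not necessarily the paper's).
rng = np.random.default_rng(0)
m, d = 50, 4                    # width and input dimension (made up)
W = rng.normal(size=(m, d))     # inner weights w_j
a = rng.normal(size=m) / m      # outer weights a_j

def f(x):
    return a @ np.maximum(W @ x, 0.0)

path_norm = np.sum(np.abs(a) * np.linalg.norm(W, axis=1))
print(f(np.ones(d)), path_norm)
```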
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Resistive Memory-based Neural Differential Equation Solver for Score-based Diffusion Model [55.116403765330084]
Current AIGC methods, such as score-based diffusion, still fall short in speed and efficiency.
We propose a time-continuous and analog in-memory neural differential equation solver for score-based diffusion.
We experimentally validate our solution with 180 nm resistive memory in-memory computing macros.
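For context, a time-continuous view of score-based diffusion integrates the probability-flow ODE; the sketch below does this digitally with plain Euler steps and an analytic Gaussian score standing in for a trained network. The paper's contribution is to carry out such integration in analog resistive memory, which this toy does not model; all constants are illustrative.

```python
import numpy as np

SIGMA_D2 = 0.25   # variance of the (synthetic) data distribution
BETA = 1.0        # constant noise schedule, for simplicity

def marginal_var(t):
    # Variance of x_t under a VP diffusion started from N(0, SIGMA_D2).
    return SIGMA_D2 * np.exp(-BETA * t) + 1.0 - np.exp(-BETA * t)

def score(x, t):
    # Analytic score of the Gaussian marginal; a trained network (or an
    # analog in-memory solver) would supply this in general.
    return -x / marginal_var(t)

def probability_flow_euler(x, t0=1.0, t1=1e-3, steps=1000):
    """Euler integration of the VP probability-flow ODE from t0 down to t1."""
    ts = np.linspace(t0, t1, steps)
    dt = ts[1] - ts[0]                      # negative: integrating backwards
    for t in ts:
        drift = -0.5 * BETA * (x + score(x, t))
        x = x + drift * dt
    return x

x1 = np.random.randn(100_000) * np.sqrt(marginal_var(1.0))  # noisy start
x0 = probability_flow_euler(x1)
print(x0.std())   # approx sqrt(0.25) = 0.5, the data scale
```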
arXiv Detail & Related papers (2024-04-08T16:34:35Z)
- Locality-Aware Generalizable Implicit Neural Representation [54.93702310461174]
Generalizable implicit neural representation (INR) enables a single continuous function to represent multiple data instances.
We propose a novel framework for generalizable INR that combines a transformer encoder with a locality-aware INR decoder.
Our framework significantly outperforms previous generalizable INRs and validates the usefulness of the locality-aware latents for downstream tasks.
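As a loose sketch of a locality-aware INR decoder, the toy below conditions each query coordinate on its nearest latent from a small latent grid rather than on a single global latent. A nearest-neighbour lookup stands in for the paper's transformer encoder and attention-based decoder, and all sizes and weights are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a grid of latents produced by some encoder; the
# decoder conditions each coordinate on its *nearest* latent (locality)
# rather than on one global latent vector.
n_latents, latent_dim, hidden = 8, 16, 32
latent_grid = rng.normal(size=(n_latents, latent_dim))   # from an encoder
latent_pos = np.linspace(0.0, 1.0, n_latents)            # latent locations
W1 = rng.normal(size=(1 + latent_dim, hidden)) * 0.1
W2 = rng.normal(size=(hidden, 1)) * 0.1

def decode(x):
    """Decode the signal value at coordinate x in [0, 1]."""
    z = latent_grid[np.argmin(np.abs(latent_pos - x))]  # locality-aware pick
    h = np.tanh(np.concatenate(([x], z)) @ W1)
    return float(h @ W2)

print([round(decode(x), 3) for x in (0.1, 0.5, 0.9)])
```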
arXiv Detail & Related papers (2023-10-09T11:26:58Z) - Scaling Limits of Memristor-Based Routers for Asynchronous Neuromorphic
Systems [2.5264231114078353]
Multi-core neuromorphic systems typically use on-chip routers to transmit spikes among cores.
A promising alternative is to exploit the features of memristive crossbar arrays and use them as programmable switch-matrices that route spikes.
We study the challenges of using memristive crossbar arrays as routing channels to transmit spikes in asynchronous Spiking Neural Network (SNN) hardware.
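A minimal sketch of the switch-matrix idea, assuming binary memristor states and ideal devices: a programmed conductance at (i, j) forwards a spike on input line i to output line j. The paper's scaling analysis concerns precisely the analog non-idealities (parasitics, sneak paths) that this toy ignores.

```python
import numpy as np

def route_spikes(switch_matrix, in_spikes):
    """Forward a binary spike vector through a binary switch matrix."""
    return (in_spikes @ switch_matrix) > 0

# A memristive crossbar as a programmable switch matrix: R[i, j] = 1
# connects input line i to output line j. Sizes and routes are made up.
R = np.zeros((4, 4), dtype=int)
R[0, 2] = R[1, 0] = R[3, 1] = 1      # program three routes
spikes = np.array([1, 0, 0, 1])      # spikes on input lines 0 and 3
print(route_spikes(R, spikes).astype(int))  # -> [0 1 1 0]
```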
arXiv Detail & Related papers (2023-07-16T17:50:24Z) - Capturing the Diffusive Behavior of the Multiscale Linear Transport
Equations by Asymptotic-Preserving Convolutional DeepONets [31.88833218777623]
We introduce two types of novel Asymptotic-Preserving Convolutional Deep Operator Networks (APCONs), which employ multiple local convolution operations instead of a global heat kernel.
Our APCON methods possess a parameter count that is independent of the grid size and are capable of capturing the diffusive behavior of the linear transport problem.
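The grid-size-independent parameter count follows from using local convolutions: the same small stencil is applied at every grid point, so refining the grid adds no parameters. A minimal 1-D sketch with an arbitrary 3-tap kernel (not the paper's architecture):

```python
import numpy as np

# The same small kernel is applied at every grid point, so the parameter
# count (3 kernel weights) does not grow with the grid resolution.
kernel = np.array([0.25, 0.5, 0.25])  # 3 parameters at any grid size

def local_conv(u):
    """Apply the local stencil to a 1-D field with edge padding."""
    padded = np.pad(u, 1, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

for n in (32, 64, 128):               # finer grids, same 3 parameters
    u = np.sin(np.linspace(0, np.pi, n))
    print(n, local_conv(u).shape)
```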
arXiv Detail & Related papers (2023-06-28T03:16:45Z) - Sequence Modeling with Multiresolution Convolutional Memory [27.218134279968062]
We introduce a new building block for sequence modeling called the MultiresLayer.
The key component of our model is the multiresolution convolution, capturing multiscale trends in the input sequence.
Our model yields state-of-the-art performance on a number of sequence classification and autoregressive density estimation tasks.
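One common way to capture multiscale trends with convolutions is to apply the same short filter at several dilations and combine the responses across scales; the sketch below does this in plain NumPy as a stand-in for the MultiresLayer, whose actual construction may differ.

```python
import numpy as np

def dilated_conv(x, w, dilation):
    """Causal 1-D convolution with the given dilation (zero padding)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return sum(w[i] * xp[pad - i * dilation : pad - i * dilation + len(x)]
               for i in range(k))

# One multiresolution block: the same short filter at several dilations,
# summing responses across scales. Filter weights are arbitrary here.
x = np.sin(np.arange(64) / 4.0) + 0.1 * np.random.default_rng(0).normal(size=64)
w = np.array([0.5, 0.3, 0.2])
y = sum(dilated_conv(x, w, d) for d in (1, 2, 4, 8))
print(y.shape)
```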
arXiv Detail & Related papers (2023-05-02T17:50:54Z) - Reliability-Aware Deployment of DNNs on In-Memory Analog Computing
Architectures [0.0]
In-Memory Analog Computing (IMAC) circuits remove the need for signal converters by realizing both MVM and NLV operations in the analog domain.
We introduce a practical approach to deploy large matrices in deep neural networks (DNNs) onto multiple smaller IMAC subarrays to alleviate the impacts of noise and parasitics.
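The partitioning idea can be sketched as ordinary tiled matrix-vector multiplication: each subarray produces a partial product in the analog domain, and partials along the input dimension are accumulated digitally. Tile and matrix sizes below are illustrative, and noise and parasitics are not modeled.

```python
import numpy as np

def tiled_mvm(W, x, tile=4):
    """Matrix-vector multiply split across tile x tile subarrays.

    Each subarray computes a partial product; partial results along the
    input dimension are summed. The tile size is a made-up example.
    """
    m, n = W.shape
    y = np.zeros(m)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            y[i:i+tile] += W[i:i+tile, j:j+tile] @ x[j:j+tile]
    return y

W = np.random.default_rng(1).normal(size=(8, 8))
x = np.ones(8)
assert np.allclose(tiled_mvm(W, x), W @ x)
print("tiled result matches dense MVM")
```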
arXiv Detail & Related papers (2022-10-02T01:43:35Z) - Error Correction Code Transformer [92.10654749898927]
We extend, for the first time, the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths.
Each channel output is embedded into a high-dimensional space for a better representation of the bit information, which is then processed separately.
The proposed approach demonstrates the extreme power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins at a fraction of their time complexity.
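A hedged sketch of the embedding step: each soft channel output (one per code bit) scales its own learned high-dimensional vector, yielding one token per bit for a Transformer to process. The dimensions and the embedding scheme here are my assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, d_model = 7, 16                        # e.g. a length-7 code
bit_embed = rng.normal(size=(n_bits, d_model))  # one vector per bit position

def embed(channel_values):
    """Scale each position's embedding by its soft channel output."""
    return channel_values[:, None] * bit_embed  # (n_bits, d_model) tokens

llrs = rng.normal(size=n_bits)                 # stand-in channel LLRs
tokens = embed(llrs)
print(tokens.shape)                            # -> (7, 16), one token per bit
```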
arXiv Detail & Related papers (2022-03-27T15:25:58Z) - Endurance-Aware Mapping of Spiking Neural Networks to Neuromorphic
Hardware [4.234079120512533]
Neuromorphic computing systems are embracing memristors to implement high density and low power synaptic storage as crossbar arrays in hardware.
Long bitlines and wordlines in a memristive crossbar are a major source of parasitic voltage drops, which create current asymmetry.
We propose eSpine, a technique to improve lifetime by accounting for the endurance variation within each crossbar when mapping machine learning workloads.
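In the spirit of eSpine (my own simplification, not the paper's algorithm), an endurance-aware mapping can pair the most frequently written synapses with the cells that have the highest remaining endurance:

```python
def endurance_aware_map(write_counts, cell_endurance):
    """Return a synapse -> cell assignment (inputs are equal-length lists).

    Synapses are sorted by write frequency, cells by endurance, and the
    busiest synapse is placed on the toughest cell, and so on down.
    """
    syn_order = sorted(range(len(write_counts)),
                       key=lambda s: write_counts[s], reverse=True)
    cell_order = sorted(range(len(cell_endurance)),
                        key=lambda c: cell_endurance[c], reverse=True)
    return dict(zip(syn_order, cell_order))

writes = [120, 5, 300, 40]        # per-synapse write frequency (made up)
endur = [1e6, 5e6, 2e6, 9e6]      # per-cell endurance cycles (made up)
print(endurance_aware_map(writes, endur))  # busiest synapse -> toughest cell
```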
arXiv Detail & Related papers (2021-03-09T20:43:28Z) - Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
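A minimal sketch of the two-phase equilibrium propagation update, using a toy Hopfield-style energy and numerical relaxation (the paper's networks are physical resistive circuits; everything below is an idealized stand-in): the state first settles freely, then settles again with the output weakly nudged toward the target, and the weight update is the scaled difference of correlations between the two phases.

```python
import numpy as np

rho = np.tanh  # activation

def settle(x, y, W, beta, steps=200, lr=0.1):
    """Relax the state s to an energy minimum by gradient descent.

    Toy energy (my choice, not the paper's circuit energy):
      E = 0.5*||s||^2 - rho(x) @ W @ rho(s) + 0.5*beta*||s - y||^2
    """
    s = np.zeros(W.shape[1])
    for _ in range(steps):
        grad = s - (rho(x) @ W) * (1 - rho(s) ** 2) + beta * (s - y)
        s -= lr * grad
    return s

def eqprop_update(x, y, W, beta=0.1, eta=0.05):
    s_free = settle(x, y, W, beta=0.0)    # free phase
    s_nudge = settle(x, y, W, beta=beta)  # weakly nudged phase
    # Contrastive rule: difference of correlations across the two phases.
    dW = (np.outer(rho(x), rho(s_nudge)) - np.outer(rho(x), rho(s_free))) / beta
    return W + eta * dW

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2)) * 0.1
W = eqprop_update(np.array([0.5, -0.2, 0.1]), np.array([1.0, 0.0]), W)
print(W)
```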
arXiv Detail & Related papers (2020-06-02T23:38:35Z) - DHP: Differentiable Meta Pruning via HyperNetworks [158.69345612783198]
This paper introduces a differentiable pruning method via hypernetworks for automatic network pruning.
Latent vectors control the output channels of the convolutional layers in the backbone network and act as a handle for the pruning of the layers.
Experiments are conducted on various networks for image classification, single image super-resolution, and denoising.
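A hedged sketch of the latent-gating idea: one latent scalar per output channel scales that channel's filters, and channels whose latents are driven near zero by sparsity regularization can be removed. DHP's real latents feed hypernetworks that generate the weights; this simplification only illustrates the handle-for-pruning role.

```python
import numpy as np

rng = np.random.default_rng(0)
out_ch, in_ch, k = 8, 3, 3                  # illustrative layer sizes
weights = rng.normal(size=(out_ch, in_ch, k, k))
latent = rng.normal(size=out_ch)            # one latent scalar per channel

gated = latent[:, None, None, None] * weights   # latent controls channels

keep = np.abs(latent) > 0.5                 # prune near-zero latents
pruned = gated[keep]                        # remaining channels only
print(f"kept {keep.sum()} of {out_ch} channels, pruned shape {pruned.shape}")
```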
arXiv Detail & Related papers (2020-03-30T17:59:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.