A Quarter of a Century of Neuromorphic Architectures on FPGAs -- an Overview
- URL: http://arxiv.org/abs/2502.20415v3
- Date: Wed, 23 Apr 2025 12:51:32 GMT
- Title: A Quarter of a Century of Neuromorphic Architectures on FPGAs -- an Overview
- Authors: Wiktor J. Szczerek, Artur Podobas
- Abstract summary: Neuromorphic computing is a new discipline of computer science in which the principles of the biological brain's computation and memory are used to create a new way of processing information. Field Programmable Gate Arrays (FPGAs) are a frequent implementation choice due to their inherent flexibility, allowing researchers to easily design hardware neuromorphic architectures (NMAs). This paper presents an overview of digital NMAs implemented on FPGAs, with the goal of providing useful references to researchers interested in digital neuromorphic systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neuromorphic computing is a relatively new discipline of computer science in which the principles of the biological brain's computation and memory are used to create a new way of processing information, based on networks of spiking neurons. Those networks can be realized in both analog and digital hardware; for the latter, Field Programmable Gate Arrays (FPGAs) are a frequent choice due to their inherent flexibility, which allows researchers to easily design hardware neuromorphic architectures (NMAs). Moreover, digital NMAs show good promise for simulating various spiking neural networks because of their inherent accuracy and resilience to noise, as opposed to analog implementations. This paper presents an overview of digital NMAs implemented on FPGAs, with the goal of providing researchers interested in digital neuromorphic systems with useful references to various architectural design choices. We present a taxonomy of NMAs that highlights groups of distinct architectural features and their advantages and disadvantages, and we identify trends and predictions for the future of those architectures.
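The networks of spiking neurons mentioned in the abstract are typically built from simple stateful units such as the leaky integrate-and-fire (LIF) neuron. The following is a minimal illustrative sketch of LIF dynamics (not taken from the surveyed paper; all parameter values are assumptions chosen for demonstration):

```python
def lif_step(v, i_in, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    The membrane potential v leaks toward v_rest and integrates the
    input current i_in; crossing v_thresh emits a spike and resets v.
    """
    v = v + (dt / tau) * ((v_rest - v) + i_in)
    if v >= v_thresh:
        return v_rest, True   # spike fired, reset to resting potential
    return v, False

# Drive the neuron with a constant suprathreshold input and count spikes.
v, spikes = 0.0, 0
for _ in range(200):
    v, fired = lif_step(v, i_in=1.5)
    spikes += fired
```

On a digital NMA this update would typically be realized in fixed-point arithmetic, with the division by `tau` replaced by a bit shift.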
Related papers
- Graph Foundation Models for Recommendation: A Comprehensive Survey [55.70529188101446]
Graph neural networks (GNNs) excel at modeling graph-structured data, while large language models (LLMs) are designed to process and comprehend natural language, making both approaches highly effective and widely adopted. Recent research has focused on graph foundation models (GFMs), which integrate the strengths of GNNs and LLMs to model complex recommender-system problems more efficiently by leveraging the graph-based structure of user-item relationships alongside textual understanding.
arXiv Detail & Related papers (2025-02-12T12:13:51Z) - A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a software spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software, once deployed in hardware.
arXiv Detail & Related papers (2024-09-23T11:16:46Z) - Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - GPU-RANC: A CUDA Accelerated Simulation Framework for Neuromorphic Architectures [1.3401966602181168]
We introduce a GPU-based implementation of the Reconfigurable Architecture for Neuromorphic Computing (RANC).
We demonstrate up to a 780-times speedup compared to the serial version of the RANC simulator, based on a 512-neuromorphic-core MNIST inference application.
arXiv Detail & Related papers (2024-04-24T21:08:21Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Harnessing FPGA Technology for Enhanced Biomedical Computation [0.0]
This research delves into sophisticated neural network frameworks such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), and Deep Belief Networks (DBNs).
By evaluating performance indicators like latency and throughput, we showcase the efficacy of FPGAs in advanced biomedical computing.
arXiv Detail & Related papers (2023-11-21T08:51:58Z) - Equivariant Matrix Function Neural Networks [1.8717045355288808]
We introduce Matrix Function Neural Networks (MFNs), a novel architecture that parameterizes non-local interactions through analytic matrix equivariant functions.
MFNs are able to capture intricate non-local interactions in quantum systems, paving the way to new state-of-the-art force fields.
arXiv Detail & Related papers (2023-10-16T14:17:00Z) - A Survey of Spiking Neural Network Accelerator on FPGA [0.0]
We collect recent, widely used spiking neuron models, network structures, and signal encoding formats, followed by an enumeration of related hardware design schemes for FPGA-based SNN implementations.
Based on that, we discuss the actual acceleration potential of implementing SNNs on FPGAs.
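One of the signal encoding formats such surveys typically cover is rate coding, where an analog value becomes a spike train whose firing frequency is proportional to the value. A minimal sketch (illustrative only; the parameters and Bernoulli-sampling scheme are assumptions, not details from the surveyed paper):

```python
import random

def rate_encode(x, n_steps=1000, seed=0):
    """Bernoulli rate coding: a value x in [0, 1] becomes a binary
    spike train whose per-step firing probability equals x."""
    rng = random.Random(seed)
    return [1 if rng.random() < x else 0 for _ in range(n_steps)]

# Encode an intensity of 0.8 and recover it as the empirical firing rate.
train = rate_encode(0.8)
rate = sum(train) / len(train)   # approaches 0.8 as n_steps grows
```

In an FPGA implementation, the pseudo-random comparison is commonly done with a linear-feedback shift register rather than a software RNG.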
arXiv Detail & Related papers (2023-07-08T06:02:12Z) - NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems [50.076028127394366]
We present NeuroBench: a benchmark framework for neuromorphic computing algorithms and systems. NeuroBench is a collaboratively-designed effort from an open community of researchers across industry and academia.
arXiv Detail & Related papers (2023-04-10T15:12:09Z) - Composing Task Knowledge with Modular Successor Feature Approximators [60.431769158952626]
We present a novel neural network architecture, "Modular Successor Feature Approximators" (MSFA)
MSFA is able to better generalize compared to baseline architectures for learning SFs and modular architectures.
arXiv Detail & Related papers (2023-01-28T23:04:07Z) - A Compositional Approach to Creating Architecture Frameworks with an
Application to Distributed AI Systems [16.690434072032176]
We show how compositional thinking can provide rules for the creation and management of architectural frameworks for complex systems.
The aim of the paper is not to provide viewpoints or architecture models specific to AI systems, but instead to provide guidelines on how a consistent framework can be built up with existing, or newly created, viewpoints.
arXiv Detail & Related papers (2022-12-27T18:05:02Z) - Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z) - POPPINS : A Population-Based Digital Spiking Neuromorphic Processor with
Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in a 180nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
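The integer quadratic integrate-and-fire neuron named in the title replaces floating-point dynamics with integer arithmetic, which maps well onto digital datapaths. The following is a hedged sketch of one possible integer QIF update rule; the constants, the shift-based scaling, and the reset scheme are assumptions for illustration, not the actual POPPINS design:

```python
def int_qif_step(v, i_in, v_rest=0, v_crit=64, v_thresh=256, shift=7):
    """One integer-only quadratic integrate-and-fire update.

    The quadratic term (v - v_rest)(v - v_crit) is scaled by a right
    shift, so the whole update needs only one multiplier and an adder.
    Crossing v_thresh emits a spike and resets v to v_rest.
    """
    dv = ((v - v_rest) * (v - v_crit)) >> shift
    v = v + dv + i_in
    if v >= v_thresh:
        return v_rest, True   # spike fired, reset
    return v, False

# A sufficiently large constant input drives v past the unstable
# point v_crit, after which the quadratic term accelerates it to threshold.
v, spikes = 0, 0
for _ in range(100):
    v, fired = int_qif_step(v, i_in=12)
    spikes += fired
```

Below `v_crit` the quadratic term opposes the input (subthreshold behavior); above it, the term turns positive and produces the characteristic spike upswing.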
arXiv Detail & Related papers (2022-01-19T09:26:34Z) - Bottom-up and top-down approaches for the design of neuromorphic
processing systems: Tradeoffs and synergies between natural and artificial
intelligence [3.874729481138221]
While Moore's law has driven exponential expectations of computing power, its nearing end calls for new avenues for improving overall system performance.
One of these avenues is the exploration of alternative brain-inspired computing architectures that aim at achieving the flexibility and computational efficiency of biological neural processing systems.
We provide a comprehensive overview of the field, highlighting the different levels of granularity at which this paradigm shift is realized.
arXiv Detail & Related papers (2021-06-02T16:51:45Z) - NeuroXplorer 1.0: An Extensible Framework for Architectural Exploration
with Spiking Neural Networks [3.9121275263540087]
We present NeuroXplorer, a framework that is based on a generalized template for modeling a neuromorphic architecture.
NeuroXplorer can perform both low-level cycle-accurate architectural simulations and high-level analysis with data-flow abstractions.
We demonstrate the architectural exploration capabilities of NeuroXplorer through case studies with many state-of-the-art machine learning models.
arXiv Detail & Related papers (2021-05-04T23:31:11Z) - A deep learning theory for neural networks grounded in physics [2.132096006921048]
We argue that building large, fast and efficient neural networks on neuromorphic architectures requires rethinking the algorithms to implement and train them.
Our framework applies to a very broad class of models, namely systems whose state or dynamics are described by variational equations.
arXiv Detail & Related papers (2021-03-18T02:12:48Z) - RANC: Reconfigurable Architecture for Neuromorphic Computing [1.1534748916340396]
We present RANC: a Reconfigurable Architecture for Neuromorphic Computing.
RANC enables rapid experimentation with neuromorphic architectures in both software via C++ simulation and hardware via FPGA emulation.
We show the utility of the RANC ecosystem by demonstrating its ability to recreate the behavior of IBM's TrueNorth.
We demonstrate a neuromorphic architecture that scales to emulating 259K distinct neurons and 73.3M distinct synapses.
arXiv Detail & Related papers (2020-11-01T20:29:52Z) - Benchmarking Graph Neural Networks [75.42159546060509]
Graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs.
For any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress.
The project's GitHub repository has reached 1,800 stars and 339 forks, demonstrating the utility of the proposed open-source framework.
arXiv Detail & Related papers (2020-03-02T15:58:46Z) - Is my Neural Network Neuromorphic? Taxonomy, Recent Trends and Future
Directions in Neuromorphic Engineering [2.179313476241343]
We see that there is no clear consensus, but each system exhibits one or more of a common set of features.
We show brain-machine interfaces as a potential task that fulfils all the criteria of such benchmarks.
arXiv Detail & Related papers (2020-02-27T07:10:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.