A Novel Implementation Methodology for Error Correction Codes on a
Neuromorphic Architecture
- URL: http://arxiv.org/abs/2306.04010v1
- Date: Tue, 6 Jun 2023 20:49:10 GMT
- Title: A Novel Implementation Methodology for Error Correction Codes on a
Neuromorphic Architecture
- Authors: Sahil Hassan, Parker Dattilo, Ali Akoglu
- Abstract summary: We propose a methodology to map the hard-decision class of decoder algorithms on a neuromorphic architecture.
We present the implementation of the Gallager B decoding algorithm on a TrueNorth-inspired architecture that is emulated on the Xilinx Zynq ZCU102 MPSoC.
- Score: 0.8021197489470758
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Internet of Things infrastructure connects a massive number of edge
devices with an increasing demand for intelligent sensing and inferencing
capability. Such data-sensitive functions necessitate energy-efficient and
programmable implementations of Error Correction Codes (ECC) and decoders. The
algorithmic flow of ECCs, with their concurrent accumulation and comparison
operations, is innately exploitable by neuromorphic architectures for
energy-efficient execution -- an area that remains relatively unexplored outside of machine
learning applications. For the first time, we propose a methodology to map the
hard-decision class of decoder algorithms on a neuromorphic architecture. We
present the implementation of the Gallager B (GaB) decoding algorithm on a
TrueNorth-inspired architecture that is emulated on the Xilinx Zynq ZCU102
MPSoC. Building on this reference implementation, we propose architectural
modifications at the neuron block level that result in a reduction of energy
consumption by 31% with a negligible increase in resource usage while achieving
the same error correction performance.
Related papers
- Architectures for Heterogeneous Quantum Error Correction Codes [13.488578754808676]
Heterogeneous architectures provide a clear path to universal logical computation.
We propose integrating the surface code and gross code using an ancilla bus for inter-code data movement.
We demonstrate physical qubit reductions of up to 6.42x when executing an algorithm at a target logical error rate.
arXiv Detail & Related papers (2024-11-05T15:49:02Z)
- Accelerating Error Correction Code Transformers [56.75773430667148]
We introduce a novel acceleration method for transformer-based decoders.
We achieve a 90% compression ratio and reduce the energy consumption of arithmetic operations by a factor of at least 224 on modern hardware.
arXiv Detail & Related papers (2024-10-08T11:07:55Z)
- Adaptive Error-Bounded Hierarchical Matrices for Efficient Neural Network Compression [0.0]
This paper introduces a dynamic, error-bounded hierarchical matrix (H-matrix) compression method tailored for Physics-Informed Neural Networks (PINNs).
The proposed approach reduces the computational complexity and memory demands of large-scale physics-based models while preserving the essential properties of the Neural Tangent Kernel (NTK).
Empirical results demonstrate that this technique outperforms traditional compression methods, such as Singular Value Decomposition (SVD), pruning, and quantization, by maintaining high accuracy and improving generalization capabilities.
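The paper's PINN-specific construction is not given in this summary; purely to make the general idea concrete, here is a generic sketch of error-bounded hierarchical compression: a block is replaced by a truncated SVD whenever that meets a relative Frobenius error bound and is cheaper than dense storage, and is split recursively otherwise. The class name, leaf size, and splitting policy are assumptions, not the authors' method.

```python
import numpy as np

class HNode:
    """A matrix block: dense leaf, low-rank factors, or 2x2 children."""
    def __init__(self, shape, dense=None, lowrank=None, kids=None, split=None):
        self.shape, self.dense, self.lowrank = shape, dense, lowrank
        self.kids, self.split = kids, split

def h_build(A, eps=1e-4, leaf=32):
    m, n = A.shape
    if min(m, n) <= leaf:
        return HNode((m, n), dense=A.copy())
    # Smallest rank whose truncated SVD meets the relative error bound:
    # err2[r] is the squared Frobenius error of the rank-r approximation.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    err2 = np.concatenate([np.cumsum((s ** 2)[::-1])[::-1], [0.0]])
    r = int(np.argmax(err2 <= (eps ** 2) * err2[0]))
    if r * (m + n) < m * n:  # factors are cheaper than the dense block
        return HNode((m, n), lowrank=(U[:, :r] * s[:r], Vt[:r]))
    i, j = m // 2, n // 2    # no savings here: recurse on the four sub-blocks
    kids = [h_build(B, eps, leaf) for B in
            (A[:i, :j], A[:i, j:], A[i:, :j], A[i:, j:])]
    return HNode((m, n), kids=kids, split=(i, j))

def h_matvec(node, x):
    """Multiply the compressed matrix by a vector."""
    if node.dense is not None:
        return node.dense @ x
    if node.lowrank is not None:
        U, Vt = node.lowrank
        return U @ (Vt @ x)
    i, j = node.split
    tl, tr, bl, br = node.kids
    top = h_matvec(tl, x[:j]) + h_matvec(tr, x[j:])
    bot = h_matvec(bl, x[:j]) + h_matvec(br, x[j:])
    return np.concatenate([top, bot])
```

A matrix-vector product then touches only the factors and dense leaves, which is where the memory and compute savings come from.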
arXiv Detail & Related papers (2024-09-11T05:55:51Z)
- Deep Quantum Error Correction [73.54643419792453]
Quantum error correction codes (QECC) are a key component for realizing the potential of quantum computing.
In this work, we efficiently train novel end-to-end deep quantum error decoders.
The proposed method demonstrates the power of neural decoders for QECC by achieving state-of-the-art accuracy.
arXiv Detail & Related papers (2023-01-27T08:16:26Z)
- Biologically Plausible Learning on Neuromorphic Hardware Architectures [27.138481022472]
Neuromorphic computing is an emerging paradigm that confronts this imbalance by performing computations directly in analog memories.
This work is the first to compare the impact of different learning algorithms on Compute-In-Memory-based hardware and, conversely, of the hardware on the algorithms.
arXiv Detail & Related papers (2022-12-29T15:10:59Z)
- Reconfigurable co-processor architecture with limited numerical precision to accelerate deep convolutional neural networks [0.38848561367220275]
Convolutional Neural Networks (CNNs) are widely used in deep learning applications, e.g., visual systems and robotics.
Here, we present a model-independent reconfigurable co-processing architecture to accelerate CNNs.
In contrast to existing solutions, we introduce limited-precision 32-bit Q-format fixed-point quantization for arithmetic representations and operations.
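For readers unfamiliar with the representation, the sketch below shows generic 32-bit Q-format fixed-point arithmetic. The summary does not specify the integer/fraction split, so the Q15.16 layout (1 sign bit, 15 integer bits, 16 fractional bits) and the helper names are assumptions.

```python
import numpy as np

Q_FRAC = 16  # fractional bits: assumed Q15.16 layout in a 32-bit word

def to_q(x):
    """Quantize floats to 32-bit Q-format integers (round, then saturate)."""
    scaled = np.round(np.asarray(x, dtype=np.float64) * (1 << Q_FRAC))
    return np.clip(scaled, -(2 ** 31), 2 ** 31 - 1).astype(np.int64)

def from_q(q):
    """Convert back to float for inspection."""
    return np.asarray(q, dtype=np.float64) / (1 << Q_FRAC)

def q_mul(a, b):
    """Fixed-point multiply: the 64-bit product carries 2*Q_FRAC fractional
    bits, so shift right by Q_FRAC to return to the Q15.16 scale."""
    return (a * b) >> Q_FRAC

def q_mac(w, x, acc=0):
    """Multiply-accumulate, the core CNN convolution primitive."""
    return acc + np.sum(q_mul(w, x))

# Example: a 3-tap dot product in fixed point matches the float result.
w = to_q([0.25, -0.5, 1.5])
x = to_q([1.0, 2.0, 3.0])
print(from_q(q_mac(w, x)))  # 3.75 == 0.25*1 - 0.5*2 + 1.5*3
```

The right shift after each multiply restores the fractional scale; replacing floating-point units with integer multiplies and shifts like these is the usual reason fixed-point arithmetic is cheap on FPGA fabric.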
arXiv Detail & Related papers (2021-08-21T09:50:54Z)
- Towards Accurate and Compact Architectures via Neural Architecture Transformer [95.4514639013144]
It is necessary to optimize the operations inside an architecture to improve the performance without introducing extra computational cost.
We have proposed a Neural Architecture Transformer (NAT) method which casts the optimization problem into a Markov Decision Process (MDP).
We propose a Neural Architecture Transformer++ (NAT++) method which further enlarges the set of candidate transitions to improve the performance of architecture optimization.
arXiv Detail & Related papers (2021-02-20T09:38:10Z)
- Phase Retrieval using Expectation Consistent Signal Recovery Algorithm based on Hypernetwork [73.94896986868146]
Phase retrieval is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up new possibilities for robust and fast phase retrieval (PR).
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z)
- EfficientFCN: Holistically-guided Decoding for Semantic Segmentation [49.27021844132522]
State-of-the-art semantic segmentation algorithms are mostly based on dilated Fully Convolutional Networks (dilatedFCN).
We propose the EfficientFCN, whose backbone is a common ImageNet pre-trained network without any dilated convolution.
Such a framework achieves performance comparable to or better than state-of-the-art methods at only 1/3 of the computational cost.
arXiv Detail & Related papers (2020-08-24T14:48:23Z)
- Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
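To illustrate the claim of backprop-equivalent learning from purely local updates, here is a minimal predictive-coding sketch for a stack of linear layers; the paper handles arbitrary computation graphs, so this toy version, its learning rates, and its function names are assumptions.

```python
import numpy as np

def pc_train_step(ws, x, y, n_infer=20, lr_z=0.2, lr_w=0.01):
    """One predictive-coding step on a stack of linear layers.
    Activities relax to minimize layer-wise prediction errors; each weight
    update is then local: a layer's error times the activity below it."""
    L = len(ws)
    # Feedforward pass initializes activities; input and target are clamped.
    z = [x]
    for w in ws:
        z.append(w @ z[-1])
    z[L] = y.copy()
    for _ in range(n_infer):
        e = [None] + [z[l] - ws[l - 1] @ z[l - 1] for l in range(1, L + 1)]
        for l in range(1, L):  # relax hidden activities only
            z[l] += lr_z * (ws[l].T @ e[l + 1] - e[l])
    for l in range(1, L + 1):  # local, Hebbian-style weight updates
        e_l = z[l] - ws[l - 1] @ z[l - 1]
        ws[l - 1] += lr_w * e_l @ z[l - 1].T
    return ws

# Example: learn y = A x with a 2-layer stack on random data.
rng = np.random.default_rng(0)
ws = [rng.normal(scale=0.1, size=(4, 3)), rng.normal(scale=0.1, size=(2, 4))]
A = rng.normal(size=(2, 3))
for _ in range(2000):
    x = rng.normal(size=(3, 1))
    pc_train_step(ws, x, A @ x)
print(np.linalg.norm(ws[1] @ ws[0] - A))  # should shrink toward 0
```

Note that no error is ever propagated through a global backward pass: each weight sees only its own layer's error and the activity beneath it, which is what makes the scheme plausible for neural circuitry.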
arXiv Detail & Related papers (2020-06-07T15:35:47Z)