Stuck-at Faults in ReRAM Neuromorphic Circuit Array and their Correction
through Machine Learning
- URL: http://arxiv.org/abs/2402.10981v1
- Date: Thu, 15 Feb 2024 22:51:27 GMT
- Title: Stuck-at Faults in ReRAM Neuromorphic Circuit Array and their Correction
through Machine Learning
- Authors: Vedant Sawal and Hiu Yung Wong
- Abstract summary: We study the inference accuracy of the Resistive Random Access Memory (ReRAM) neuromorphic circuit due to stuck-at faults.
We propose a machine learning (ML) strategy to recover the inference accuracy degradation due to stuck-at faults.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we study the degradation of inference accuracy in a Resistive
Random Access Memory (ReRAM) neuromorphic circuit due to stuck-at faults (stuck-on,
stuck-off, and stuck at a certain resistive value). A simulation framework
using Python is used to perform supervised machine learning (a neural network
with 3 hidden layers, 1 input layer, and 1 output layer) on handwritten digits
and to construct a corresponding fully analog neuromorphic circuit (4 synaptic
arrays) simulated in Spectre. A generic 45nm Process Development Kit (PDK) was
used. We study the difference in the inference accuracy degradation due to
stuck-on and stuck-off defects. Various defect patterns are studied including
circular, ring, row, column, and circular-complement defects. It is found that
stuck-on and stuck-off defects have a similar effect on inference accuracy.
However, it is also found that if there is a spatial defect variation across
the columns, the inference accuracy may be degraded significantly. We also
propose a machine learning (ML) strategy to recover the inference accuracy
degradation due to stuck-at faults. The inference accuracy is improved from 48%
to 85% in a defective neuromorphic circuit.
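The fault-injection study described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the paper's actual framework: the names (`inject_stuck_faults`, `g_on`, `g_off`, `fault_rate`) and the uniform random fault placement are assumptions for demonstration only; the paper additionally studies structured patterns (circular, ring, row, column, and circular-complement defects).

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_stuck_faults(weights, fault_rate, mode="stuck_on",
                        g_on=1.0, g_off=0.0):
    """Return a copy of `weights` with a random fraction of cells stuck.

    mode: 'stuck_on'  pins cells to the low-resistance conductance g_on,
          'stuck_off' pins them to the high-resistance conductance g_off.
    Names and values here are illustrative assumptions, not the paper's.
    """
    faulty = weights.copy()
    mask = rng.random(weights.shape) < fault_rate  # uniformly placed faults
    faulty[mask] = g_on if mode == "stuck_on" else g_off
    return faulty

# Example: a 4x4 synaptic array with roughly 25% of cells stuck on.
w = rng.uniform(0.0, 1.0, size=(4, 4))
w_faulty = inject_stuck_faults(w, fault_rate=0.25, mode="stuck_on")
print(int(np.sum(w_faulty == 1.0)))  # number of cells pinned to g_on
```

Comparing the inference accuracy of a network evaluated with `w` versus `w_faulty` (and with structured masks instead of the random one above) reproduces the kind of degradation study the abstract describes.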
Related papers
- A Physics-Informed Neuro-Fuzzy Framework for Quantum Error Attribution [0.4511923587827302]
We present a neuro-fuzzy framework that addresses the attribution problem by combining Adaptive Neuro-Fuzzy Inference Systems with physics-grounded feature engineering. We introduce the Bhattacharyya Veto, a hard physical constraint grounded in the Data Processing Inequality. This work establishes a robust, interpretable diagnostic layer that prevents error mitigation techniques from being applied to logically flawed circuits.
arXiv Detail & Related papers (2026-02-22T16:19:51Z) - The Hidden Cost of Approximation in Online Mirror Descent [56.99972253009168]
Online mirror descent (OMD) is a fundamental algorithmic paradigm that underlies many algorithms in optimization, machine learning and sequential decision-making. In this work we initiate a systematic study into inexact OMD, and uncover an intricate relation between regularizer smoothness and robustness to approximation errors.
arXiv Detail & Related papers (2025-11-27T10:09:07Z) - Application of Machine Learning for Correcting Defect-induced Neuromorphic Circuit Inference Errors [0.0]
This paper presents a machine learning-based approach to correct inference errors caused by stuck-at faults in ReRAM-based neuromorphic circuits. Using a Design-Technology Co-Optimization (DTCO) simulation framework, we model and analyze six spatial defect types. We demonstrate that the proposed correction method, which employs a lightweight neural network trained on the circuit's output voltages, can recover up to 35% (from 55% to 90%) inference accuracy loss in defective scenarios.
arXiv Detail & Related papers (2025-09-14T06:05:27Z) - Give Me FP32 or Give Me Death? Challenges and Solutions for Reproducible Reasoning [54.970571745690634]
This work presents the first systematic investigation into how numerical precision affects Large Language Model inference. Inspired by this, we develop a lightweight inference pipeline, dubbed LayerCast, that stores weights in 16-bit precision but performs all computations in FP32.
arXiv Detail & Related papers (2025-06-11T08:23:53Z) - Characterising the failure mechanisms of error-corrected quantum logic gates [2.5128687379089687]
We use a heavy-hex code prepared on a superconducting qubit array to investigate how different noise sources impact error-corrected logic.
We identify that idling errors occurring during readout periods are highly detrimental to a quantum memory.
By varying different parameters in our simulations we identify the key noise sources that impact the fidelity of fault-tolerant logic gates.
arXiv Detail & Related papers (2025-04-09T20:29:47Z) - A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent [57.64826450787237]
We show how to analyze the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions.
We show how to use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm.
Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
arXiv Detail & Related papers (2024-07-19T08:29:12Z) - Generalized quantum data-syndrome codes and belief propagation decoding for phenomenological noise [6.322831694506286]
We introduce quantum data-syndrome codes along with a generalized check matrix that integrates both quaternary and binary alphabets to represent diverse error sources.
We observe that at high error rates, fewer rounds of syndrome extraction tend to perform better, while more rounds improve performance at lower error rates.
arXiv Detail & Related papers (2023-10-19T12:23:05Z) - Global Context Aggregation Network for Lightweight Saliency Detection of Surface Defects [70.48554424894728]
We develop a Global Context Aggregation Network (GCANet) for lightweight saliency detection of surface defects on the encoder-decoder structure.
First, we introduce a novel transformer encoder on the top layer of the lightweight backbone, which captures global context information through a novel Depth-wise Self-Attention (DSA) module.
The experimental results on three public defect datasets demonstrate that the proposed network achieves a better trade-off between accuracy and running efficiency compared with 17 other state-of-the-art methods.
arXiv Detail & Related papers (2023-09-22T06:19:11Z) - Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
arXiv Detail & Related papers (2023-07-27T17:42:06Z) - An Effective Data-Driven Approach for Localizing Deep Learning Faults [20.33411443073181]
We propose a novel data-driven approach that leverages model features to learn problem patterns.
Our methodology automatically links bug symptoms to their root causes, without the need for manually crafted mappings.
Our results demonstrate that our technique can effectively detect and diagnose different bug types.
arXiv Detail & Related papers (2023-07-18T03:28:39Z) - Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z) - Clear Memory-Augmented Auto-Encoder for Surface Defect Detection [10.829080460965478]
We propose a clear memory-augmented auto-encoder to repair abnormal foregrounds and preserve clear backgrounds.
A general artificial anomaly generation algorithm is proposed to simulate anomalies that are as realistic and feature-rich as possible.
Finally, we propose a novel multi-scale feature residual detection method for defect segmentation.
arXiv Detail & Related papers (2022-08-08T02:39:03Z) - Fast and Accurate Error Simulation for CNNs against Soft Errors [64.54260986994163]
We present a framework for the reliability analysis of Convolutional Neural Networks (CNNs) via an error simulation engine.
These error models are defined based on the corruption patterns of the output of the CNN operators induced by faults.
We show that our methodology achieves about 99% accuracy of the fault effects w.r.t. SASSIFI, and a speedup ranging from 44x up to 63x w.r.t. SASSIFI, which only implements a limited set of error models.
arXiv Detail & Related papers (2022-06-04T19:45:02Z) - Machine Learning for Continuous Quantum Error Correction on Superconducting Qubits [1.8249709209063887]
Continuous quantum error correction has been found to have certain advantages over discrete quantum error correction.
We propose a machine learning algorithm for continuous quantum error correction based on the use of a recurrent neural network.
arXiv Detail & Related papers (2021-10-20T05:13:37Z) - Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
arXiv Detail & Related papers (2020-02-21T17:35:50Z) - Prediction of MRI Hardware Failures based on Image Features using Ensemble Learning [8.889876750552615]
In this work, we focus on predicting failures of 20-channel Head/Neck coils using image-related measurements.
To solve this problem, we use data of two different levels. One level refers to one-dimensional features per individual coil channel, on which we found a fully connected neural network to perform best.
The other data level uses matrices which represent the overall coil condition and feeds a different neural network.
arXiv Detail & Related papers (2020-01-05T11:21:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.