Application of Machine Learning for Correcting Defect-induced Neuromorphic Circuit Inference Errors
- URL: http://arxiv.org/abs/2509.11113v1
- Date: Sun, 14 Sep 2025 06:05:27 GMT
- Title: Application of Machine Learning for Correcting Defect-induced Neuromorphic Circuit Inference Errors
- Authors: Vedant Sawal, Hiu Yung Wong
- Abstract summary: This paper presents a machine learning-based approach to correct inference errors caused by stuck-at faults in ReRAM-based neuromorphic circuits. Using a Design-Technology Co-Optimization (DTCO) simulation framework, we model and analyze six spatial defect types. We demonstrate that the proposed correction method, which employs a lightweight neural network trained on the circuit's output voltages, can recover up to 35% (from 55% to 90%) inference accuracy loss in defective scenarios.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a machine learning-based approach to correct inference errors caused by stuck-at faults in fully analog ReRAM-based neuromorphic circuits. Using a Design-Technology Co-Optimization (DTCO) simulation framework, we model and analyze six spatial defect types-circular, circular-complement, ring, row, column, and checkerboard-across multiple layers of a multi-array neuromorphic architecture. We demonstrate that the proposed correction method, which employs a lightweight neural network trained on the circuit's output voltages, can recover up to 35% (from 55% to 90%) inference accuracy loss in defective scenarios. Our results, based on handwritten digit recognition tasks, show that even small corrective networks can significantly improve circuit robustness. This method offers a scalable and energy-efficient path toward enhanced yield and reliability for neuromorphic systems in edge and internet-of-things (IoTs) applications. In addition to correcting the specific defect types used during training, our method also demonstrates the ability to generalize-achieving reasonable accuracy when tested on different types of defects not seen during training. The framework can be readily extended to support real-time adaptive learning, enabling on-chip correction for dynamic or aging-induced fault profiles.
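The abstract's central mechanism, a small corrective network that reads the defective circuit's output voltages and produces corrected class scores, lends itself to a short sketch. The following is a minimal, hypothetical illustration only: the layer sizes, voltage statistics, and all names are assumptions for demonstration, not the authors' actual design.

```python
import numpy as np

# Hypothetical sketch of the correction idea: a lightweight MLP maps the
# defective neuromorphic array's 10 output voltages to corrected class
# scores for the 10 digit classes. Sizes and values are assumed.

rng = np.random.default_rng(0)

class CorrectionMLP:
    def __init__(self, n_in=10, n_hidden=32, n_out=10):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, v):
        # v: measured output voltages of the defective array (batch, n_in)
        h = np.maximum(0.0, v @ self.W1 + self.b1)  # ReLU hidden layer
        return h @ self.W2 + self.b2                # corrected class scores

net = CorrectionMLP()
voltages = rng.normal(0.5, 0.1, (4, 10))  # a batch of 4 fake voltage readouts
corrected = net.forward(voltages)
print(corrected.shape)  # (4, 10): one corrected score per digit class
```

In the paper's setting such a network would be trained on measured voltages from defective circuits; here the weights are random and the forward pass only illustrates the data flow.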
Related papers
- Certified Neural Approximations of Nonlinear Dynamics [51.01318247729693]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z)
- Balancing Robustness and Efficiency in Embedded DNNs Through Activation Function Selection [1.474723404975345]
Machine learning-based embedded systems for safety-critical applications must be robust to perturbations caused by soft errors. We focus on encoder-decoder convolutional models developed for semantic segmentation of hyperspectral images with application to autonomous driving systems.
arXiv Detail & Related papers (2025-04-07T14:21:31Z)
- Stuck-at Faults in ReRAM Neuromorphic Circuit Array and their Correction through Machine Learning [0.0]
We study the inference accuracy of Resistive Random Access Memory (ReRAM) neuromorphic circuits in the presence of stuck-at faults.
We propose a machine learning (ML) strategy to recover the inference accuracy degradation caused by these faults.
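The stuck-at faults studied here (and the spatial defect patterns in the main paper, such as row and column defects) can be illustrated with a toy fault-injection routine. This is an assumed sketch, not the cited paper's simulation framework; the conductance values and defect pattern are made up for demonstration.

```python
import numpy as np

# Illustrative sketch: injecting stuck-at faults into a ReRAM crossbar
# conductance matrix. G_ON/G_OFF and the column-defect pattern are assumed
# values, echoing the column defect type from the main paper.

rng = np.random.default_rng(1)
G_ON, G_OFF = 1.0, 0.01                 # assumed high/low conductance states

G = rng.uniform(G_OFF, G_ON, (8, 8))    # healthy conductance matrix

def inject_column_fault(G, col, stuck_value):
    """Force every cell in one column to a stuck conductance."""
    faulty = G.copy()
    faulty[:, col] = stuck_value
    return faulty

G_faulty = inject_column_fault(G, col=3, stuck_value=G_OFF)  # stuck-at-OFF column
print(np.allclose(G_faulty[:, 3], G_OFF))  # True
```

Running inference with `G_faulty` in place of `G` is what produces the accuracy degradation that the ML correction is then trained to recover.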
arXiv Detail & Related papers (2024-02-15T22:51:27Z)
- A Reusable AI-Enabled Defect Detection System for Railway Using Ensembled CNN [5.381374943525773]
Defect detection is crucial for ensuring the trustworthiness of railway systems.
Current approaches rely on single deep-learning models, like CNNs.
We propose a reusable AI-enabled defect detection approach.
arXiv Detail & Related papers (2023-11-24T19:45:55Z)
- Cal-DETR: Calibrated Detection Transformer [67.75361289429013]
We propose a mechanism for calibrated detection transformers (Cal-DETR), particularly for Deformable-DETR, UP-DETR and DINO.
We develop an uncertainty-guided logit modulation mechanism that leverages the uncertainty to modulate the class logits.
Results corroborate the effectiveness of Cal-DETR against the competing train-time methods in calibrating both in-domain and out-domain detections.
arXiv Detail & Related papers (2023-11-06T22:13:10Z)
- Targeted collapse regularized autoencoder for anomaly detection: black hole at the center [3.924781781769534]
Autoencoders can generalize beyond the normal class and achieve a small reconstruction error on some anomalous samples.
We propose a remarkably straightforward alternative: instead of adding neural network components, involved computations, and cumbersome training, we complement the reconstruction loss with a computationally light term.
This mitigates the black-box nature of autoencoder-based anomaly detection algorithms and offers an avenue for further investigation of advantages, fail cases, and potential new directions.
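The "computationally light term" described above can be sketched as a norm penalty on the latent code that pulls normal samples toward the origin. This is an assumed reading of the summary; the weighting `lam` and all tensor shapes are illustrative.

```python
import numpy as np

# Sketch of complementing the reconstruction loss with a light collapse
# term: penalize the norm of the latent code z so the normal class
# collapses toward the origin. lam is an assumed regularization weight.

def targeted_collapse_loss(x, x_hat, z, lam=0.1):
    recon = np.mean((x - x_hat) ** 2)              # reconstruction error
    collapse = np.mean(np.linalg.norm(z, axis=1))  # mean latent norm
    return recon + lam * collapse

x = np.ones((5, 16)); x_hat = np.zeros((5, 16)); z = np.ones((5, 4))
loss = targeted_collapse_loss(x, x_hat, z)
print(round(loss, 3))  # 1.0 + 0.1 * 2.0 = 1.2
```

The extra term costs one norm per sample, in line with the summary's claim that no extra network components or involved computations are needed.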
arXiv Detail & Related papers (2023-06-22T01:33:47Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Fast and Accurate Error Simulation for CNNs against Soft Errors [64.54260986994163]
We present a framework for the reliability analysis of Convolutional Neural Networks (CNNs) via an error simulation engine.
These error models are defined based on the corruption patterns of the output of the CNN operators induced by faults.
We show that our methodology achieves about 99% accuracy of the fault effects w.r.t. SASSIFI, and a speedup ranging from 44x up to 63x w.r.t. FI, which only implements a limited set of error models.
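The idea of error models "defined based on the corruption patterns of the output of the CNN operators" can be shown with a toy injection routine that perturbs an operator's output tensor. The error model here (a single large deviation at a random location) is an assumption for illustration, not the paper's actual model set.

```python
import numpy as np

# Toy sketch of operator-level error simulation: corrupt the output tensor
# of a CNN operator according to a simple, assumed error model (one large
# deviation at a random position, mimicking a soft error).

rng = np.random.default_rng(4)

def inject_operator_fault(activation, magnitude=10.0):
    """Corrupt one randomly chosen element of an operator's output."""
    corrupted = activation.copy()
    idx = tuple(int(rng.integers(0, s)) for s in activation.shape)
    corrupted[idx] += magnitude   # large deviation at a single site
    return corrupted, idx

act = np.zeros((2, 4, 4))                 # pretend feature map
faulty, idx = inject_operator_fault(act)
print(faulty[idx])  # 10.0
```

Simulating at the operator-output level, rather than at the hardware level, is what makes this style of error simulation fast relative to instruction-level fault injectors.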
arXiv Detail & Related papers (2022-06-04T19:45:02Z)
- A Robust Backpropagation-Free Framework for Images [47.97322346441165]
We present an error kernel driven activation alignment (EKDAA) algorithm for image data.
EKDAA accomplishes this through the introduction of locally derived error transmission kernels and error maps.
Results are presented for an EKDAA trained CNN that employs a non-differentiable activation function.
arXiv Detail & Related papers (2022-06-03T21:14:10Z)
- Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC).
We first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
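The model-selection bandit described above can be sketched in a few lines. The paper solves it with a reinforcement-learning policy network; as a simpler stand-in, this sketch uses epsilon-greedy selection over per-model reward estimates, and the rewards are synthetic assumptions.

```python
import numpy as np

# Epsilon-greedy stand-in for the adaptive model-selection bandit: each arm
# is one HEC-layer detection model; the (hidden) reward trades accuracy
# against per-layer cost. All reward values here are synthetic.

rng = np.random.default_rng(2)
n_models = 3                              # one DNN per HEC layer
counts = np.zeros(n_models)
values = np.zeros(n_models)               # running mean reward per model
true_reward = np.array([0.2, 0.5, 0.8])   # hidden, for simulation only

def select(eps=0.1):
    if rng.random() < eps:
        return int(rng.integers(n_models))  # explore a random model
    return int(np.argmax(values))           # exploit the best-looking model

for _ in range(2000):
    a = select()
    r = true_reward[a] + rng.normal(0.0, 0.05)  # noisy observed reward
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]    # incremental mean update

print(int(np.argmax(values)))  # index of the model the bandit prefers
```

In the paper the context (input features) also conditions the choice; this context-free version only shows the explore/exploit mechanics.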
arXiv Detail & Related papers (2021-08-09T08:45:47Z)
- Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors [2.6949002029513167]
Mixed-signal analog/digital electronic circuits can emulate spiking neurons and synapses with extremely high energy efficiency.
Mismatch is expressed as differences in effective parameters between identically-configured neurons and synapses.
We present a supervised learning approach that addresses this challenge by maximizing robustness to mismatch and other common sources of noise.
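One common way to maximize robustness to mismatch, consistent with the summary above, is to perturb the parameters with fresh noise on every forward pass during training. This sketch is an assumed illustration of that idea, not the paper's exact procedure; `sigma` and the network are made up.

```python
import numpy as np

# Sketch of mismatch-robust training: apply multiplicative Gaussian noise
# to the weights on each forward pass, mimicking device-to-device parameter
# mismatch, so the learned solution cannot rely on exact weight values.

rng = np.random.default_rng(3)

def noisy_forward(W, x, sigma=0.1):
    """Forward pass with a fresh mismatch sample applied to the weights."""
    W_mismatched = W * (1.0 + rng.normal(0.0, sigma, W.shape))
    return np.tanh(x @ W_mismatched)

W = rng.normal(0.0, 0.5, (4, 2))
x = rng.normal(0.0, 1.0, (1, 4))
y1, y2 = noisy_forward(W, x), noisy_forward(W, x)
print(np.allclose(y1, y2))  # False: each pass sees a different mismatch draw
```

Training against many such draws pushes the network toward solutions whose outputs are stable under the parameter variations a deployed mixed-signal chip would exhibit.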
arXiv Detail & Related papers (2021-02-12T09:20:49Z)
- Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and 1-1 error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance.
arXiv Detail & Related papers (2020-10-02T15:21:37Z)
- Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
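The focal loss referenced above has a standard closed form, FL(p) = -(1 - p)^gamma * log(p), which down-weights confident correct predictions relative to plain cross-entropy; this is the mechanism the paper links to better calibration. The sketch below shows only the binary form with an assumed gamma.

```python
import numpy as np

# Standard binary focal loss: FL(p) = -(1 - p)^gamma * log(p), where p is
# the probability the model assigns to the true class. gamma = 2 is a
# commonly used value, assumed here.

def focal_loss(p_correct, gamma=2.0):
    p = np.clip(p_correct, 1e-12, 1.0)   # guard against log(0)
    return -((1.0 - p) ** gamma) * np.log(p)

# A confident correct prediction is penalized far less than under plain
# cross-entropy, while a poor prediction keeps a large loss.
print(focal_loss(0.9) < -np.log(0.9))     # True: down-weighted vs. CE
print(focal_loss(0.1) > focal_loss(0.9))  # True: hard examples dominate
```

Because easy examples contribute little loss, the model is not pushed toward ever-larger confidence on them, which is one intuition for the calibration benefit reported above.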
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.