Concurrent Self-testing of Neural Networks Using Uncertainty Fingerprint
- URL: http://arxiv.org/abs/2401.01458v1
- Date: Tue, 2 Jan 2024 23:05:07 GMT
- Title: Concurrent Self-testing of Neural Networks Using Uncertainty Fingerprint
- Authors: Soyed Tuhin Ahmed, Mehdi B. Tahoori
- Abstract summary: We propose a dual-head NN topology specifically designed to produce uncertainty fingerprints and the primary prediction of the NN in a single shot.
Compared to existing works, memory overhead is reduced by up to $243.7$ MB, multiply-and-accumulate (MAC) operations are reduced by up to $10000\times$, and false-positive rates are reduced by up to $89\%$.
- Score: 0.32634122554914
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural networks (NNs) are increasingly used in always-on safety-critical
applications deployed on hardware accelerators (NN-HAs) employing various
memory technologies. Reliable continuous operation of NNs is essential for
safety-critical applications. During online operation, NNs are susceptible to
single and multiple permanent and soft errors due to factors such as radiation,
aging, and thermal effects. Explicit NN-HA testing methods cannot detect
transient faults during inference, are unsuitable for always-on applications,
and require extensive test vector generation and storage. Therefore, in this
paper, we propose the \emph{uncertainty fingerprint} approach representing the
online fault status of the NN. Furthermore, we propose a dual-head NN topology
specifically designed to produce uncertainty fingerprints and the primary
prediction of the NN in \emph{a single shot}. During the online operation, by
matching the uncertainty fingerprint, we can concurrently self-test NNs with up
to $100\%$ coverage with a low false-positive rate while maintaining similar
performance on the primary task. Compared to existing works, memory overhead is
reduced by up to $243.7$ MB, multiply-and-accumulate (MAC) operations are reduced
by up to $10000\times$, and false-positive rates are reduced by up to $89\%$.
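To make the approach concrete, the following is a minimal sketch, assuming a PyTorch-style implementation, of a dual-head network that returns the primary prediction and an uncertainty fingerprint in a single forward pass, together with a fingerprint-matching check used as a concurrent self-test. The names (`DualHeadNet`, `self_test`), the fingerprint dimensionality, and the threshold `tau` are illustrative assumptions and not taken from the paper, which derives the fingerprint from the model's uncertainty estimates rather than from a generic extra linear head.

```python
# Minimal sketch (assumptions, not the authors' implementation): a dual-head NN
# that yields the task prediction and an uncertainty fingerprint in one pass,
# and a concurrent self-test that matches the fingerprint against a golden copy.
import torch
import torch.nn as nn

class DualHeadNet(nn.Module):  # hypothetical name
    def __init__(self, in_dim=784, hidden=256, n_classes=10, fp_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.primary_head = nn.Linear(hidden, n_classes)    # primary task output
        self.fingerprint_head = nn.Linear(hidden, fp_dim)   # uncertainty fingerprint

    def forward(self, x):
        h = self.backbone(x)
        # Both outputs come from the same forward pass ("a single shot").
        return self.primary_head(h), self.fingerprint_head(h)

@torch.no_grad()
def self_test(model, x, golden_fingerprint, tau=0.1):
    """Flag a suspected fault when the online fingerprint deviates from the
    fault-free (golden) fingerprint by more than the calibrated threshold tau."""
    prediction, fingerprint = model(x)
    deviation = torch.norm(fingerprint.mean(dim=0) - golden_fingerprint)
    return prediction, bool(deviation > tau)

# Record the golden fingerprint once on known fault-free hardware ...
model = DualHeadNet().eval()
x = torch.randn(8, 784)
with torch.no_grad():
    _, golden = model(x)
golden_fingerprint = golden.mean(dim=0)
# ... then self-test concurrently with normal inference.
pred, fault_suspected = self_test(model, x, golden_fingerprint)
```

In a deployment, the threshold would be calibrated on fault-free runs so that the false-positive rate stays low while permanent and soft errors still perturb the fingerprint enough to be detected.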
Related papers
- Fast and Reliable $N-k$ Contingency Screening with Input-Convex Neural Networks [3.490170135411753]
Power system operators must ensure that dispatch decisions remain feasible in case of grid outages or contingencies to prevent failures and ensure reliable operation.
Checking the feasibility of all $N - k$ contingencies is intractable for even small numbers $k$ of grid components.
In this work, we propose using input-convex neural networks (ICNNs) for contingency screening.
arXiv Detail & Related papers (2024-10-01T15:38:09Z) - Neural Network Verification with Branch-and-Bound for General Nonlinearities [63.39918329535165]
Branch-and-bound (BaB) is among the most effective techniques for neural network (NN) verification.
We develop a general framework, named GenBaB, to conduct BaB on general nonlinearities to verify NNs with general architectures.
We demonstrate the effectiveness of our GenBaB on verifying a wide range of NNs, including NNs with activation functions such as Sigmoid, Tanh, Sine and GeLU.
arXiv Detail & Related papers (2024-05-31T17:51:07Z) - Few-Shot Testing: Estimating Uncertainty of Memristive Deep Neural Networks Using One Bayesian Test Vector [0.0]
We propose a test vector generation framework that can estimate the model uncertainty of NNs implemented on memristor-based CIM hardware.
Our method is evaluated on different model dimensions, tasks, fault rates, and variation noise to show that it can consistently achieve $100\%$ coverage with only $0.024$ MB of memory overhead.
arXiv Detail & Related papers (2024-05-29T08:53:16Z) - Provably Safe Neural Network Controllers via Differential Dynamic Logic [2.416907802598482]
We present the first general approach that allows reusing control theory results for NNCS verification.
Based on provably safe control envelopes in dL, we derive specifications for the NN, which are then proven via NN verification.
We show that a proof of the NN adhering to the specification is mirrored by a dL proof on the infinite-time safety of the NNCS.
arXiv Detail & Related papers (2024-02-16T16:15:25Z) - Generalization Guarantees of Gradient Descent for Multi-Layer Neural
Networks [55.86300309474023]
We conduct a comprehensive stability and generalization analysis of gradient descent (GD) for multi-layer NNs.
We derive the excess risk rate of $O(1/\sqrt{n})$ for GD algorithms in both two-layer and three-layer NNs.
arXiv Detail & Related papers (2023-05-26T12:51:38Z) - One-Shot Online Testing of Deep Neural Networks Based on Distribution
Shift Detection [0.6091702876917281]
We propose a one-shot testing approach that can test NNs accelerated on memristive crossbars with only one test vector.
Our approach can consistently achieve $100\%$ fault coverage across several large topologies.
arXiv Detail & Related papers (2023-05-16T11:06:09Z) - The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks [94.63547069706459]
#DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z) - Backward Reachability Analysis of Neural Feedback Loops: Techniques for
Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z) - Automated Repair of Neural Networks [0.26651200086513094]
We introduce a framework for repairing unsafe NNs w.r.t. a safety specification.
Our method is able to search for a new, safe NN representation, by modifying only a few of its weight values.
We perform extensive experiments which demonstrate the capability of our proposed framework to yield safe NNs w.r.t. the given safety specification.
arXiv Detail & Related papers (2022-07-17T12:42:24Z) - Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted by up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z) - GraN: An Efficient Gradient-Norm Based Detector for Adversarial and
Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups; a minimal sketch of the gradient-norm scoring idea appears after this list.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
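For the gradient-norm detector described in the GraN entry above, the sketch below illustrates the general idea under stated assumptions rather than the paper's exact method: score each input by the norm of the loss gradient taken at the model's own predicted label, then flag inputs whose score exceeds a threshold calibrated on clean data. The function name `gradient_norm_score` and the example threshold are hypothetical.

```python
# Sketch of a gradient-norm detection score in the spirit of GraN (assumed form,
# not the paper's exact procedure): unusually large parameter-gradient norms of
# the loss at the predicted label tend to indicate adversarial or misclassified inputs.
import torch
import torch.nn as nn

def gradient_norm_score(model: nn.Module, x: torch.Tensor) -> float:
    """Return the L2 norm of d(loss)/d(parameters), using the model's
    own predicted class as a pseudo-label for the loss."""
    model.zero_grad()
    logits = model(x)
    pseudo_label = logits.argmax(dim=1)
    loss = nn.functional.cross_entropy(logits, pseudo_label)
    loss.backward()
    squared_sum = sum((p.grad ** 2).sum() for p in model.parameters() if p.grad is not None)
    return float(squared_sum.sqrt())

# Usage: calibrate a threshold on clean validation inputs, then flag outliers.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
x = torch.randn(1, 784)
score = gradient_norm_score(model, x)
is_suspicious = score > 5.0  # threshold is a placeholder, not from the paper
```

GraN itself is described as time- and parameter-efficient; a practical variant would restrict the gradient computation to a subset of layers rather than the full parameter set.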