Alleviation of Temperature Variation Induced Accuracy Degradation in
Ferroelectric FinFET Based Neural Network
- URL: http://arxiv.org/abs/2103.03111v1
- Date: Wed, 3 Mar 2021 16:06:03 GMT
- Title: Alleviation of Temperature Variation Induced Accuracy Degradation in
Ferroelectric FinFET Based Neural Network
- Authors: Sourav De, Yao-Jen Lee and Darsen D. Lu
- Abstract summary: We adopt a pre-trained artificial neural network with 96.4% inference accuracy on the MNIST dataset as the baseline.
We observe a significant inference accuracy degradation in the analog neural network at 233 K for an NN trained at 300 K.
We deploy binary neural networks with "read voltage" optimization to ensure immunity of NN to accuracy degradation under temperature variation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper reports the impacts of temperature variation on the inference
accuracy of pre-trained all-ferroelectric FinFET deep neural networks, along
with plausible design techniques to abate these impacts. We adopted a
pre-trained artificial neural network (NN) with 96.4% inference accuracy on the
MNIST dataset as the baseline. Following a temperature change, the
conductance drift of a programmed cell was captured by a compact model over a
wide range of gate bias. We observed a significant inference accuracy
degradation in the analog neural network at 233 K for an NN trained at 300 K.
Finally, we deployed binary neural networks with "read voltage" optimization to
ensure the immunity of the NN to accuracy degradation under temperature variation,
maintaining an inference accuracy of 96.1%.
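The mitigation idea in the abstract can be illustrated with a toy numerical sketch (the drift model and all numbers here are assumptions for illustration, not values from the paper): a temperature shift perturbs the stored conductances, so an analog dot product inherits the full perturbation, while a sign-based (binary) weight readout is only affected when a perturbation flips a weight's sign.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration (assumed drift model, not from the paper): trained weights
# are stored as FeFET cell conductances, and a temperature change adds drift.
w = rng.normal(size=1000)             # trained weights mapped to conductances
x = rng.normal(size=1000)             # a single input vector
drift = 0.3 * rng.normal(size=1000)   # assumed temperature-induced drift

# Analog readout: the dot product sees the full conductance drift.
y_ref = w @ x
y_analog = (w + drift) @ x
rel_err_analog = abs(y_analog - y_ref) / abs(y_ref)

# Binary readout: only weights whose sign flips contribute any error.
flip_rate = np.mean(np.sign(w + drift) != np.sign(w))

print(f"relative analog error: {rel_err_analog:.3f}")
print(f"binary sign-flip rate: {flip_rate:.3f}")
```

Binarization discards the analog magnitude information in which the drift lives, so only the (much rarer) sign flips matter, which is consistent with the abstract's report of near-baseline (96.1%) accuracy after read-voltage optimization.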
Related papers
- TPTNet: A Data-Driven Temperature Prediction Model Based on Turbulent
Potential Temperature [0.7575778450247893]
A data-driven model for predicting the surface temperature using neural networks was proposed to alleviate the computational burden of numerical weather prediction (NWP).
Our model, named TPTNet, uses only 2m temperature measured at the weather stations of the South Korean Peninsula as input to predict the local temperature at finite forecast hours.
arXiv Detail & Related papers (2023-12-22T01:02:27Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - CorrectNet: Robustness Enhancement of Analog In-Memory Computing for
Neural Networks by Error Suppression and Compensation [4.570841222958966]
We propose a framework to enhance the robustness of neural networks under variations and noise.
We show that the inference accuracy of neural networks can be recovered from as low as 1.69% under variations and noise.
arXiv Detail & Related papers (2022-11-27T19:13:33Z) - Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z) - Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z) - Enhanced physics-constrained deep neural networks for modeling vanadium
redox flow battery [62.997667081978825]
We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge--discharge cycle, including the tail region of the voltage discharge curve.
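The physics-constrained idea can be sketched generically (this is a standard PINN-style composite loss under an assumed placeholder ODE, not the authors' ePCDNN formulation): the data-fitting loss is augmented with a penalty on the residual of a governing equation.

```python
import numpy as np

# Generic sketch of a physics-constrained loss; the governing equation
# dV/dt + k*V = 0 is an assumed placeholder, not the battery model.
def physics_constrained_loss(v_pred, v_true, t, k=1.0, lam=0.1):
    data_loss = np.mean((v_pred - v_true) ** 2)
    # finite-difference residual of the assumed governing equation
    dv_dt = np.gradient(v_pred, t)
    residual = dv_dt + k * v_pred
    physics_loss = np.mean(residual ** 2)
    return data_loss + lam * physics_loss

t = np.linspace(0.0, 1.0, 50)
v_true = np.exp(-t)                   # exact solution of dV/dt + V = 0
loss_exact = physics_constrained_loss(v_true, v_true, t)       # near zero
loss_bad = physics_constrained_loss(v_true + 0.5, v_true, t)   # penalized
```

A prediction that fits the data but violates the physics residual is penalized, which is how such models stay accurate in sparsely sampled regions like the tail of a discharge curve.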
arXiv Detail & Related papers (2022-03-03T19:56:24Z) - Differentially private training of neural networks with Langevin
dynamics forcalibrated predictive uncertainty [58.730520380312676]
We show that differentially private gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models.
This represents a serious issue for safety-critical applications, e.g. in medical diagnosis.
arXiv Detail & Related papers (2021-07-09T08:14:45Z) - Neuromorphic Computing with Deeply Scaled Ferroelectric FinFET in
Presence of Process Variation, Device Aging and Flicker Noise [0.0]
A detailed study has been conducted on the impact of such variations on the inference accuracy of pre-trained neural networks.
A statistical model has been developed to capture all these effects during neural network simulation.
We have demonstrated that the impact of (1) degradation due to oxide thickness scaling, (2) process variation, and (3) flicker noise can be abated in ferroelectric FinFET based binary neural networks.
arXiv Detail & Related papers (2021-03-05T03:24:20Z) - Parameterized Temperature Scaling for Boosting the Expressive Power in
Post-Hoc Uncertainty Calibration [57.568461777747515]
We introduce a novel calibration method, Parametrized Temperature Scaling (PTS).
We demonstrate that the performance of accuracy-preserving state-of-the-art post-hoc calibrators is limited by their intrinsic expressive power.
We show with extensive experiments that our novel accuracy-preserving approach consistently outperforms existing algorithms across a large number of model architectures, datasets and metrics.
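For context, plain single-parameter temperature scaling looks as follows (PTS itself learns a per-input temperature with a small network, which this sketch does not reproduce): dividing logits by a temperature T > 1 softens overconfident softmax outputs while leaving the argmax, and hence the accuracy, unchanged.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([4.0, 1.0, 0.5])  # example logits (illustrative values)
p_raw = softmax(logits)             # uncalibrated, overconfident
p_cal = softmax(logits / 2.0)       # T = 2 softens the distribution

# The predicted class is unchanged, so accuracy is preserved.
assert np.argmax(p_raw) == np.argmax(p_cal)
```

This is the "accuracy-preserving" property the summary refers to: calibration rescales confidence without altering any prediction.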
arXiv Detail & Related papers (2021-02-24T10:18:30Z)
- Accuracy of neural networks for the simulation of chaotic dynamics: precision of training data vs precision of the algorithm [0.0]
We simulate the Lorenz system with different precisions using three different neural network techniques adapted to time series.
Our results show that the ESN network is better at accurately predicting the dynamics of the system.
arXiv Detail & Related papers (2020-07-08T17:25:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.