Numerical Stability of DeepGOPlus Inference
- URL: http://arxiv.org/abs/2212.06361v4
- Date: Wed, 28 Feb 2024 18:38:19 GMT
- Title: Numerical Stability of DeepGOPlus Inference
- Authors: Inés Gonzalez Pepe, Yohan Chatelain, Gregory Kiar, Tristan Glatard
- Abstract summary: Convolutional neural networks (CNNs) are currently among the most widely-used deep neural network (DNN) architectures.
Recent works have highlighted numerical stability challenges in DNNs, which also relate to their known sensitivity to noise injection.
This paper investigates DeepGOPlus, a CNN that predicts protein function.
- Score: 1.5361702135159845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional neural networks (CNNs) are currently among the most widely-used
deep neural network (DNN) architectures available and achieve state-of-the-art
performance for many problems. Originally applied to computer vision tasks,
CNNs work well with any data with a spatial relationship, besides images, and
have been applied to different fields. However, recent works have highlighted
numerical stability challenges in DNNs, which also relate to their known
sensitivity to noise injection. These challenges can jeopardise their
performance and reliability. This paper investigates DeepGOPlus, a CNN that
predicts protein function. DeepGOPlus has achieved state-of-the-art performance
and can successfully take advantage of and annotate the abundance of protein
sequences emerging in proteomics. We determine the numerical stability of the
model's inference stage by quantifying the numerical uncertainty resulting from
perturbations of the underlying floating-point data. In addition, we explore
the opportunity to use reduced-precision floating point formats for DeepGOPlus
inference, to reduce memory consumption and latency. This is achieved by
instrumenting DeepGOPlus' execution using Monte Carlo Arithmetic, a technique
that experimentally quantifies floating point operation errors, and VPREC, a
tool that emulates results with customizable floating point precision formats.
Focus is placed on the inference stage as it is the primary deliverable of the
DeepGOPlus model, widely applicable across different environments. All in all,
our results show that although the DeepGOPlus CNN is very stable numerically,
it can only be selectively implemented with lower-precision floating-point
formats. We conclude that predictions obtained from the pre-trained DeepGOPlus
model are very reliable numerically, and use existing floating-point formats
efficiently.
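To make the two instrumentation ideas concrete, the sketch below shows, under stated assumptions, how floating-point results can be randomly perturbed to estimate significant digits (the idea behind Monte Carlo Arithmetic, provided in practice by tools such as Verificarlo) and how reduced-precision storage can be emulated by round-tripping values through float16. The function names and the toy dot-product workload are illustrative only and are not part of DeepGOPlus.

```python
import numpy as np

def mca_perturb(x, t=53, rng=None):
    """Add uniform relative noise of magnitude 2**-t to emulate random rounding.

    A simplified stand-in for Monte Carlo Arithmetic: each value is perturbed
    at the level of its virtual precision t (t=53 roughly mimics double precision).
    """
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.uniform(-0.5, 0.5, size=np.shape(x)) * 2.0 ** (-t)
    return x * (1.0 + noise)

def significant_digits(samples):
    """Estimate significant base-10 digits from the spread of Monte Carlo samples."""
    samples = np.asarray(samples, dtype=np.float64)
    mu = samples.mean(axis=0)
    sigma = samples.std(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        s = -np.log10(np.abs(sigma / mu))
    return np.where(sigma == 0, 15.9, s)  # ~16 digits when no spread is observed

# Toy "inference": a dot product standing in for a convolution layer.
rng = np.random.default_rng(0)
w = rng.standard_normal(1000)
x = rng.standard_normal(1000)

# Monte Carlo Arithmetic-style uncertainty estimate at double precision.
samples = [np.dot(mca_perturb(w, 53, rng), mca_perturb(x, 53, rng)) for _ in range(30)]
print("significant digits (double):", significant_digits(samples))

# Reduced-precision emulation in the spirit of VPREC: compare against float16 storage.
y_ref = np.dot(w, x)
y_half = np.dot(w.astype(np.float16).astype(np.float64),
                x.astype(np.float16).astype(np.float64))
print("relative error with float16 inputs:", abs(y_half - y_ref) / abs(y_ref))
```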
Related papers
- Give Me FP32 or Give Me Death? Challenges and Solutions for Reproducible Reasoning [54.970571745690634]
This work presents the first systematic investigation into how numerical precision affects Large Language Model inference. Inspired by this, the authors develop a lightweight inference pipeline, dubbed LayerCast, that stores weights in 16-bit precision but performs all computations in FP32.
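A minimal sketch of the storage/compute split that LayerCast is described as using: weights kept in 16-bit memory and upcast to FP32 right before the matrix multiply. The class below is hypothetical and only illustrates the idea, not the authors' implementation.

```python
import numpy as np

# Hypothetical layer: weights stored in float16 to save memory,
# while every computation runs in float32.
class CastLinear:
    def __init__(self, w):
        self.w16 = np.asarray(w, dtype=np.float16)    # 16-bit storage

    def __call__(self, x):
        w32 = self.w16.astype(np.float32)             # upcast just before compute
        return np.asarray(x, dtype=np.float32) @ w32  # FP32 matmul

layer = CastLinear(np.random.randn(8, 4))
print(layer(np.random.randn(2, 8)).dtype)             # float32
```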
arXiv Detail & Related papers (2025-06-11T08:23:53Z) - Evaluating Single Event Upsets in Deep Neural Networks for Semantic Segmentation: an embedded system perspective [1.474723404975345]
This paper delves into the robustness assessment in embedded Deep Neural Networks (DNNs)
By scrutinizing the layer-by-layer and bit-by-bit sensitivity of various encoder-decoder models to soft errors, this study thoroughly investigates the vulnerability of segmentation DNNs to SEUs.
We propose a set of practical, lightweight error mitigation techniques with no memory or computational overhead, suitable for resource-constrained deployments.
arXiv Detail & Related papers (2024-12-04T18:28:38Z) - ZOBNN: Zero-Overhead Dependable Design of Binary Neural Networks with Deliberately Quantized Parameters [0.0]
In this paper, we introduce a third advantage of very low-precision neural networks: improved fault-tolerance.
We investigate the impact of memory faults on state-of-the-art binary neural networks (BNNs) through comprehensive analysis.
We propose a technique to improve BNN dependability by restricting the range of float parameters through a novel deliberately uniform quantization.
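As a hedged illustration of restricting the range of float parameters via deliberate uniform quantization, the sketch below clips parameters to a bounded interval and snaps them to a uniform grid; the interval and number of levels are assumptions, not the paper's configuration.

```python
import numpy as np

def uniform_quantize(params, lo=-1.0, hi=1.0, levels=256):
    """Clip float parameters to [lo, hi] and snap them to a uniform grid.

    Illustrative only: the interval and number of levels are assumptions.
    """
    clipped = np.clip(params, lo, hi)
    step = (hi - lo) / (levels - 1)
    return lo + np.round((clipped - lo) / step) * step

p = np.random.randn(5).astype(np.float32)
print(uniform_quantize(p))
```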
arXiv Detail & Related papers (2024-07-06T05:31:11Z) - Efficient Privacy-Preserving Convolutional Spiking Neural Networks with
FHE [1.437446768735628]
Fully Homomorphic Encryption (FHE) is a key technology for privacy-preserving computation.
FHE has limitations in processing continuous non-polynomial functions.
We present a framework called FHE-DiCSNN for homomorphic SNNs.
FHE-DiCSNN achieves an accuracy of 97.94% on ciphertexts, with a loss of only 0.53% compared to the original network's accuracy of 98.47%.
arXiv Detail & Related papers (2023-09-16T15:37:18Z) - EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by this limitation, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T08:16:58Z) - Blurs Make Results Clearer: Spatial Smoothings to Improve Accuracy,
Uncertainty, and Robustness [0.0]
Bayesian neural networks (BNNs) have shown success in the areas of uncertainty estimation and robustness.
We propose spatial smoothing, a method that ensembles neighboring feature map points of CNNs.
By simply adding a few blur layers to the models, we empirically show that the spatial smoothing improves accuracy, uncertainty estimation, and robustness of BNNs.
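A minimal sketch of spatial smoothing as a blur over neighboring feature-map points; the box-blur kernel, kernel size, and zero padding below are assumptions rather than the paper's exact layer.

```python
import numpy as np

def spatial_smooth(feature_map, k=2):
    """Average each point with its k x k neighborhood (a simple box blur).

    feature_map: (H, W) array; zero padding at the edges is an assumption.
    """
    h, w = feature_map.shape
    padded = np.pad(feature_map, ((0, k - 1), (0, k - 1)))
    out = np.zeros_like(feature_map, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

fm = np.random.randn(4, 4)
print(spatial_smooth(fm).shape)  # (4, 4)
```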
arXiv Detail & Related papers (2021-05-26T15:58:11Z) - Physics-aware deep neural networks for surrogate modeling of turbulent
natural convection [0.0]
We investigate the use of PINN surrogate modeling for turbulent Rayleigh-Bénard convection flows.
We show how it comes to play as a regularization close to the training boundaries which are zones of poor accuracy for standard PINNs.
The predictive accuracy of the surrogate over the entire half a billion DNS coordinates yields errors for all flow variables ranging between 0.3% and 4% in the relative L2 norm.
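For reference, the relative L2-norm error quoted above is the standard metric computed as follows (illustrative code, not from the paper):

```python
import numpy as np

def relative_l2_error(prediction, reference):
    """|| prediction - reference ||_2 / || reference ||_2, as a percentage."""
    prediction = np.asarray(prediction, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    return 100.0 * np.linalg.norm(prediction - reference) / np.linalg.norm(reference)

print(relative_l2_error([1.01, 1.99], [1.0, 2.0]))  # small relative error in %
```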
arXiv Detail & Related papers (2021-03-05T09:48:57Z) - Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z) - AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, to get rid of floating-point computation.
Our AQD achieves comparable or even better performance compared with the full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective to design of future DeepSNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z) - Learnable Bernoulli Dropout for Bayesian Deep Learning [53.79615543862426]
Learnable Bernoulli dropout (LBD) is a new model-agnostic dropout scheme that considers the dropout rates as parameters jointly optimized with other model parameters.
LBD leads to improved accuracy and uncertainty estimates in image classification and semantic segmentation.
arXiv Detail & Related papers (2020-02-12T18:57:14Z) - Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantization neural networks (QNNs) are very attractive to the industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.