Safe Use of Neural Networks
- URL: http://arxiv.org/abs/2306.08086v1
- Date: Tue, 13 Jun 2023 19:07:14 GMT
- Title: Safe Use of Neural Networks
- Authors: George Redinbo
- Abstract summary: We use number-based codes that can detect arithmetic errors in the network's processing steps.
One set of parities is obtained from a section's outputs, while a second comparable set is developed directly from the original inputs.
We focus on using long, numerically based convolutional codes because of the large size of the data sets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks in modern communication systems can be susceptible to
internal numerical errors that can drastically affect decision results. Such
structures are composed of many sections, each of which generally contains
weighting operations and activation function evaluations. Safe use comes
from methods employing number-based codes that can detect arithmetic errors in
the network's processing steps. Each set of operations generates parity values
dictated by a code in two ways. One set of parities is obtained from a
section's outputs while a second comparable set is developed directly from the
original inputs. The parity values protecting the activation functions involve
a Taylor series approximation to the activation functions. We focus on using
long numerically based convolutional codes because of the large size of data
sets. The codes are based on Discrete Fourier Transform kernels and there are
many design options available. Mathematical program simulations show our
error-detecting techniques are effective and efficient.
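The abstract describes the parity mechanism only in prose. The Python sketch below illustrates the core idea for a single weighted section; it is an illustration under assumed dimensions with a single DFT-kernel parity row, not the paper's long convolutional-code construction, and it omits the Taylor-series parities for the activation functions. One parity value is computed from the section's outputs and a comparable one directly from the original inputs; a mismatch flags an arithmetic error.

```python
import numpy as np

# Illustrative sketch of dual parity checking for one weighted section.
# The parity-generating vector is one DFT-kernel row (real part only); the
# paper's long DFT-based convolutional codes are not reproduced here.

rng = np.random.default_rng(0)
n_in, n_out = 16, 8
W = rng.standard_normal((n_out, n_in))   # section weights (assumed sizes)
x = rng.standard_normal(n_in)            # section input

# Parity-generating vector: one row of a DFT kernel.
k = 1
c = np.cos(2 * np.pi * k * np.arange(n_out) / n_out)

# Parity set 1: computed from the section's outputs.
y = W @ x
parity_from_outputs = c @ y

# Parity set 2: computed directly from the original inputs,
# using the pre-combined vector c @ W (formed once, offline).
parity_from_inputs = (c @ W) @ x

def check(p_out, p_in, tol=1e-9):
    """Compare the two parity values; a mismatch signals an arithmetic error."""
    return abs(p_out - p_in) <= tol

print("fault-free:", check(parity_from_outputs, parity_from_inputs))   # True

# Inject a numerical fault into one output and re-check.
y_faulty = y.copy()
y_faulty[3] += 0.5
print("with fault:", check(c @ y_faulty, parity_from_inputs))          # False
```

A single real-valued parity is used here only for brevity; a practical scheme uses many parity values so that a fault cannot align with a zero of the parity vector and escape detection.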
Related papers
- Your Network May Need to Be Rewritten: Network Adversarial Based on High-Dimensional Function Graph Decomposition [0.994853090657971]
We propose a network adversarial method to address the aforementioned challenges.
This is the first method to use different activation functions in a network.
We have achieved a substantial improvement over standard activation functions regarding both training efficiency and predictive accuracy.
arXiv Detail & Related papers (2024-05-04T11:22:30Z) - Designed Dithering Sign Activation for Binary Neural Networks [15.087814338685968]
This work proposes an activation that applies multiple thresholds following dithering principles, shifting the Sign activation function for each pixel according to a spatially periodic threshold kernel.
Experiments over the classification task demonstrate the effectiveness of the designed dithering Sign activation function as an alternative activation for binary neural networks, without increasing the computational cost.
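As a rough illustration of that description (the kernel values, tiling scheme, and shapes below are assumptions, not the authors' design), the sketch shifts the Sign activation per pixel by tiling a small spatially periodic threshold kernel over the feature map.

```python
import numpy as np

def dithered_sign(x, threshold_kernel):
    """Sign activation with a spatially periodic, per-pixel threshold.

    x: 2-D feature map (H, W)
    threshold_kernel: small 2-D kernel tiled over the feature map
    """
    kh, kw = threshold_kernel.shape
    h, w = x.shape
    # Tile the kernel so every pixel gets its own shifted threshold.
    reps = (int(np.ceil(h / kh)), int(np.ceil(w / kw)))
    thresholds = np.tile(threshold_kernel, reps)[:h, :w]
    return np.where(x >= thresholds, 1.0, -1.0)

# Example with made-up 2x2 dithering thresholds and a small feature map.
kernel = np.array([[-0.5, 0.25],
                   [ 0.5, -0.25]])
feature_map = np.random.default_rng(1).standard_normal((4, 6))
print(dithered_sign(feature_map, kernel))
```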
arXiv Detail & Related papers (2024-05-03T16:27:39Z) - FoC: Figure out the Cryptographic Functions in Stripped Binaries with LLMs [54.27040631527217]
We propose a novel framework called FoC to Figure out the Cryptographic functions in stripped binaries.
FoC-BinLLM outperforms ChatGPT by 14.61% on the ROUGE-L score.
FoC-Sim outperforms the previous best methods with a 52% higher Recall@1.
arXiv Detail & Related papers (2024-03-27T09:45:33Z) - Expressive Power of ReLU and Step Networks under Floating-Point Operations [11.29958155597398]
We show that neural networks using a binary threshold unit or ReLU can memorize any finite set of input/output pairs.
We also show similar results on memorization and universal approximation when floating-point operations use finite bits for both significand and exponent.
arXiv Detail & Related papers (2024-01-26T05:59:40Z) - Provable Data Subset Selection For Efficient Neural Network Training [73.34254513162898]
We introduce the first algorithm to construct coresets for RBFNNs, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network.
We then perform empirical evaluations on function approximation and dataset subset selection on popular network architectures and data sets.
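To make the coreset idea concrete (a hedged sketch only: the toy RBF network, the uniform sampling, and the weights are placeholders, not the paper's provable construction), the snippet below compares an RBF network's loss on the full data set with its loss on a small weighted subset.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy radial basis function network: fixed centers, width, and output weights.
centers = rng.standard_normal((10, 3))
beta = 1.0
w_out = rng.standard_normal(10)

def rbfnn(X):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-beta * d2) @ w_out

# Full data set and labels.
X = rng.standard_normal((5000, 3))
y = rbfnn(X) + 0.1 * rng.standard_normal(5000)

def loss(X, y, weights=None):
    return np.average((rbfnn(X) - y) ** 2, weights=weights)

# Uniformly sampled weighted subset standing in for a provable coreset;
# a real coreset assigns non-uniform importance weights with guarantees.
m = 250
idx = rng.choice(len(X), size=m, replace=False)
weights = np.full(m, len(X) / m)

print("full loss   :", loss(X, y))
print("subset loss :", loss(X[idx], y[idx], weights))
```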
arXiv Detail & Related papers (2023-03-09T10:08:34Z) - Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - HD-cos Networks: Efficient Neural Architectures for Secure Multi-Party Computation [26.67099154998755]
Multi-party computation (MPC) is a branch of cryptography where multiple non-colluding parties execute a protocol to securely compute a function.
We study training and inference of neural networks under the MPC setup.
We show that both of the approaches enjoy strong theoretical motivations and efficient computation under the MPC setup.
arXiv Detail & Related papers (2021-10-28T21:15:11Z) - Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on each task.
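The coupling can be sketched as a proximity penalty (one possible reading of the summary, not the paper's exact formulation; the quadratic penalty and the mean vector w_bar are assumptions): each task fits its own regression weights while being pulled toward a shared central vector.

```python
import numpy as np

# Sketch of coupled multi-task regression:
#   minimize  sum_i  MSE_i(w_i)  +  lam * ||w_i - w_bar||^2
# where w_bar is the mean of the per-task weights, so each task fits its
# own domain while staying close to the others.

rng = np.random.default_rng(3)
w_true = rng.standard_normal(4)
tasks = []
for _ in range(3):
    X = rng.standard_normal((50, 4))
    y = X @ (w_true + 0.1 * rng.standard_normal(4))   # related but distinct tasks
    tasks.append((X, y))

lam, lr = 0.5, 0.1
W = [np.zeros(4) for _ in tasks]

for _ in range(300):
    w_bar = np.mean(W, axis=0)                        # shared central vector
    for i, (X, y) in enumerate(tasks):
        grad = 2 * X.T @ (X @ W[i] - y) / len(X) + 2 * lam * (W[i] - w_bar)
        W[i] = W[i] - lr * grad

print(np.round(np.vstack(W), 3))   # per-task weights stay close to one another
```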
arXiv Detail & Related papers (2020-10-24T21:35:57Z) - A sparse code increases the speed and efficiency of neuro-dynamic programming for optimal control tasks with correlated inputs [0.0]
A sparse code is used to represent natural images in an optimal control task solved with neuro-dynamic programming.
A 2.25 times over-complete sparse code is shown to at least double memory capacity compared with a complete sparse code using the same input.
This is used in sequential learning to store a potentially large number of optimal control tasks in the network.
arXiv Detail & Related papers (2020-06-22T01:58:11Z) - Automatic Differentiation in ROOT [62.997667081978825]
In mathematics and computer algebra, automatic differentiation (AD) is a set of techniques to evaluate the derivative of a function specified by a computer program.
This paper presents AD techniques available in ROOT, supported by Cling, to produce derivatives of arbitrary C/C++ functions.
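ROOT's AD lives in C++ via Cling and is not reproduced here; the short Python sketch below only illustrates the forward-mode principle that such tools automate, using a hypothetical dual-number class.

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:
    """Dual number (value, derivative) for forward-mode automatic differentiation."""
    val: float
    der: float = 0.0

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sin(x: Dual) -> Dual:
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def f(x):
    # Function "specified by a computer program": f(x) = x*sin(x) + 3x
    return x * sin(x) + 3 * x

x = Dual(2.0, 1.0)          # seed derivative dx/dx = 1
out = f(x)
print(out.val, out.der)     # f(2) and f'(2) = sin(2) + 2*cos(2) + 3
```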
arXiv Detail & Related papers (2020-04-09T09:18:50Z) - Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)