Error Correction Capabilities of Non-Linear Cryptographic Hash Functions
- URL: http://arxiv.org/abs/2405.01495v1
- Date: Thu, 2 May 2024 17:26:56 GMT
- Title: Error Correction Capabilities of Non-Linear Cryptographic Hash Functions
- Authors: Alejandro Cohen, Rafael G. L. D'Oliveira
- Abstract summary: Linear hashes are known to possess error-correcting capabilities.
In most applications, non-linear hashes with pseudorandom outputs are utilized instead.
We show that non-linear hashes might also exhibit good error-correcting capabilities.
- Score: 56.368766255147555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linear hashes are known to possess error-correcting capabilities. However, in most applications, non-linear hashes with pseudorandom outputs are utilized instead. It has also been established that classical non-systematic random codes, both linear and non-linear, are capacity achieving in the asymptotic regime. Thus, it is reasonable to expect that non-linear hashes might also exhibit good error-correcting capabilities. In this paper, we show this to be the case. Our proof is based on techniques from multiple access channels. As a consequence, we show that Systematic Random Non-Linear Codes (S-RNLC) are capacity achieving in the asymptotic regime. We validate our results by comparing the performance of the Secure Hash Algorithm (SHA) with that of Systematic Random Linear Codes (SRLC) and S-RNLC, demonstrating that SHA performs equally.
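The core idea can be illustrated concretely. The following is a minimal sketch (our illustration, not the paper's exact construction): treat a cryptographic hash as the "parity" part of a systematic non-linear code. The encoder transmits (message, SHA-256(message)); if the hash check fails at the receiver, a brute-force decoder searches all words within a small Hamming distance of the received message for one whose hash matches.

```python
import hashlib
from itertools import combinations

def encode(msg: bytes):
    # Systematic encoding: the message plus its SHA-256 digest as redundancy.
    return msg, hashlib.sha256(msg).digest()

def decode(received: bytes, digest: bytes, max_flips: int = 2):
    """Brute-force decoder: try all patterns of up to max_flips bit flips
    and return the first candidate whose hash matches the digest."""
    nbits = len(received) * 8
    for weight in range(max_flips + 1):
        for positions in combinations(range(nbits), weight):
            candidate = bytearray(received)
            for p in positions:
                candidate[p // 8] ^= 1 << (p % 8)
            if hashlib.sha256(bytes(candidate)).digest() == digest:
                return bytes(candidate)
    return None  # too many errors for this search radius

msg, tag = encode(b"hello")
corrupted = bytearray(msg)
corrupted[0] ^= 0x04  # flip one bit in transit
assert decode(bytes(corrupted), tag) == b"hello"
```

The decoder's cost grows combinatorially with the search radius, so this sketch is only practical for short messages and few errors; the paper's contribution is the capacity analysis, not this naive decoder.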
Related papers
- Error correction for encoded quantum annealing revisited [0.0]
A parity-encoded spin system, known as the Sourlas-Lechner-Hauke-Zoller (SLHZ) system, exhibits error-correcting capability.
We propose a very simple decoding algorithm to eliminate errors in the readout of SLHZ systems.
Our new algorithm can be thought of as a bit-flipping algorithm for LDPC codes.
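For readers unfamiliar with bit-flipping decoding, here is a minimal sketch over a small hypothetical parity-check matrix H (our toy example, not the SLHZ system itself): repeatedly flip the bit involved in the most unsatisfied parity checks until all checks pass.

```python
import numpy as np

# Hypothetical toy parity-check matrix; each row is one parity check.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def bit_flip_decode(word, H, max_iters=20):
    w = word.copy()
    for _ in range(max_iters):
        syndrome = H @ w % 2
        if not syndrome.any():
            return w  # all parity checks satisfied
        # Count unsatisfied checks touching each bit; flip the worst offender.
        unsat = H[syndrome == 1].sum(axis=0)
        w[np.argmax(unsat)] ^= 1
    return w  # best effort after max_iters

codeword = np.zeros(6, dtype=np.uint8)  # the all-zero word is always a codeword
received = codeword.copy()
received[0] ^= 1  # single bit error
assert (bit_flip_decode(received, H) == codeword).all()
```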
arXiv Detail & Related papers (2024-07-22T08:47:00Z)
- Anti-Exploration by Random Network Distillation [63.04360288089277]
We show that a naive choice of conditioning for the Random Network Distillation (RND) is not discriminative enough to be used as an uncertainty estimator.
We show that this limitation can be avoided with conditioning based on Feature-wise Linear Modulation (FiLM)
We evaluate it on the D4RL benchmark, showing that it is capable of achieving performance comparable to ensemble-based methods and outperforming ensemble-free approaches by a wide margin.
arXiv Detail & Related papers (2023-01-31T13:18:33Z)
- Testing randomness of series generated in Bell's experiment [62.997667081978825]
We use a toy fiber-optic setup to generate binary series and evaluate their level of randomness according to Ville's principle.
Series are tested with a battery of standard statistical indicators: Hurst exponent, Kolmogorov complexity, minimum entropy, Takens' embedding dimension, and the Augmented Dickey-Fuller and Kwiatkowski-Phillips-Schmidt-Shin tests to check stationarity.
The level of randomness of series obtained by applying Toeplitz extractor to rejected series is found to be indistinguishable from the level of non-rejected raw ones.
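A Toeplitz extractor multiplies the raw bit string by a random binary Toeplitz matrix (constant along diagonals) over GF(2) to distill nearly uniform bits. A minimal sketch, with illustrative parameters of our choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

def toeplitz_extract(raw_bits, out_len, seed_bits):
    """Multiply raw bits by a seed-defined Toeplitz matrix over GF(2).
    seed_bits must have length out_len + len(raw_bits) - 1, one entry
    per diagonal of the out_len x len(raw_bits) matrix."""
    n = len(raw_bits)
    T = np.empty((out_len, n), dtype=np.uint8)
    for i in range(out_len):
        for j in range(n):
            T[i, j] = seed_bits[i - j + n - 1]  # constant along diagonals
    return T @ np.asarray(raw_bits, dtype=np.uint8) % 2

raw = rng.integers(0, 2, size=64)                # weakly random input bits
seed = rng.integers(0, 2, size=16 + 64 - 1)      # independent uniform seed
out = toeplitz_extract(raw, 16, seed)            # 16 extracted bits
```

Compressing 64 raw bits to 16 output bits corresponds to assuming at least 16 bits of min-entropy in the input; real deployments size the output from a measured entropy bound.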
arXiv Detail & Related papers (2022-08-31T17:39:29Z)
- Bayesian Inference with Nonlinear Generative Models: Comments on Secure Learning [29.818395770651865]
This work aims to bring attention to nonlinear generative models and their secrecy potential.
We invoke the replica method to derive the normalized cross entropy in an inverse probability problem.
We propose a new secure coding scheme which achieves the secrecy capacity of the wiretap channel.
arXiv Detail & Related papers (2022-01-19T08:29:53Z)
- Training Recurrent Neural Networks by Sequential Least Squares and the Alternating Direction Method of Multipliers [0.20305676256390928]
We propose the use of convex and twice-differentiable loss and regularization terms for determining optimal hidden network parameters.
We combine sequential least squares with the alternating direction method of multipliers.
The performance of the algorithm is tested in a nonlinear system identification benchmark.
arXiv Detail & Related papers (2021-12-31T08:43:04Z)
- Misspecified Gaussian Process Bandit Optimization [59.30399661155574]
Kernelized bandit algorithms have shown strong empirical and theoretical performance for this problem.
We introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS).
We show that our algorithm achieves optimal dependence on $\epsilon$ with no prior knowledge of misspecification.
arXiv Detail & Related papers (2021-11-09T09:00:02Z)
- Gaussian Process Uniform Error Bounds with Unknown Hyperparameters for Safety-Critical Applications [71.23286211775084]
We introduce robust Gaussian process uniform error bounds in settings with unknown hyperparameters.
Our approach computes a confidence region in the space of hyperparameters, which enables us to obtain a probabilistic upper bound for the model error.
Experiments show that the bound performs significantly better than vanilla and fully Bayesian Gaussian processes.
arXiv Detail & Related papers (2021-09-06T17:10:01Z)
- Quantum and Randomised Algorithms for Non-linearity Estimation [0.5584060970507505]
Non-linearity of a function indicates how far it is from any linear function.
We design highly efficient quantum and randomised algorithms to approximate the non-linearity up to additive error.
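As background: the non-linearity of a Boolean function is its Hamming distance to the nearest affine function, computable exactly via the Walsh-Hadamard transform. The sketch below shows the classical exact computation (the quantum and randomised algorithms above approximate this quantity more efficiently):

```python
import numpy as np

def nonlinearity(truth_table):
    """Exact non-linearity of a Boolean function given as a truth table of
    length 2^n: NL(f) = 2^(n-1) - max_a |W_f(a)| / 2, where W_f is the
    Walsh-Hadamard transform of (-1)^f, computed in-place below."""
    f = np.asarray(truth_table, dtype=np.int64)
    w = 1 - 2 * f  # map bit b to (-1)^b
    h = 1
    while h < len(w):  # iterative fast Walsh-Hadamard transform
        for i in range(0, len(w), 2 * h):
            a = w[i:i + h].copy()
            b = w[i + h:i + 2 * h].copy()
            w[i:i + h] = a + b
            w[i + h:i + 2 * h] = a - b
        h *= 2
    return len(f) // 2 - np.abs(w).max() // 2

# AND on 2 variables has non-linearity 1; XOR is linear, so non-linearity 0.
assert nonlinearity([0, 0, 0, 1]) == 1
assert nonlinearity([0, 1, 1, 0]) == 0
```

The exact transform costs O(n 2^n) time, which is exactly the exponential dependence the approximation algorithms aim to beat.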
arXiv Detail & Related papers (2021-03-14T14:13:50Z)
- Tight Second-Order Certificates for Randomized Smoothing [106.06908242424481]
We show that there also exists a universal curvature-like bound for Gaussian random smoothing.
In addition to proving the correctness of this novel certificate, we show that SoS certificates are realizable and therefore tight.
arXiv Detail & Related papers (2020-10-20T18:03:45Z)
- What needles do sparse neural networks find in nonlinear haystacks [0.0]
A sparsity inducing penalty in artificial neural networks (ANNs) avoids over-fitting, especially in situations where noise is high and the training set is small.
For linear models, such an approach provably recovers the important features with high probability in certain regimes, given a well-chosen penalty parameter.
We perform a set of comprehensive Monte Carlo simulations on a simple model, and the numerical results show the effectiveness of the proposed approach.
arXiv Detail & Related papers (2020-06-07T04:46:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.