Transformer models classify random numbers
- URL: http://arxiv.org/abs/2405.03904v1
- Date: Mon, 6 May 2024 23:36:03 GMT
- Title: Transformer models classify random numbers
- Authors: Rishabh Goel, YiZi Xiao, Ramin Ramezani
- Abstract summary: We present a deep learning model that encodes some of the tests from the NIST Statistical Test Suite (STS) in a single model that also runs much faster.
This model performs multi-label classification on these tests and outputs the probability of passing each statistical test that it encodes.
We achieve a high degree of accuracy with a sample F1 score above 0.9.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Random numbers are incredibly important in a variety of fields, and the need for their validation remains important. A Quantum Random Number Generator (QRNG) can theoretically generate truly random numbers; however, this does not remove the need to thoroughly test their randomness. Generally, the task of validating random numbers has been delegated to different statistical tests, such as the tests from the NIST Statistical Test Suite (STS), which are often slow and only perform one task at a time. Our work presents a deep learning model that utilizes the transformer architecture to encode some of the tests from the NIST STS in a single model that also runs much faster. This model performs multi-label classification on these tests and outputs the probability of passing each statistical test that it encodes. We perform a thorough hyper-parameter optimization to converge on the best possible model and, as a result, achieve a high degree of accuracy with a sample F1 score above 0.9.
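No code accompanies this listing; the following is a minimal PyTorch sketch of the general approach the abstract describes, a small transformer encoder over a binary sequence with a sigmoid multi-label head that outputs one pass probability per encoded test. The sequence length, model dimensions, and number of tests are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' model): a transformer encoder maps a binary
# sequence to per-test pass probabilities through a sigmoid multi-label head.
import torch
import torch.nn as nn

class RandomnessClassifier(nn.Module):
    def __init__(self, seq_len=256, d_model=64, n_heads=4, n_layers=2, n_tests=5):
        super().__init__()
        self.bit_embed = nn.Embedding(2, d_model)                 # embed each bit
        self.pos_embed = nn.Parameter(torch.zeros(1, seq_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_tests)                   # one logit per test

    def forward(self, bits):                     # bits: (batch, seq_len) in {0, 1}
        x = self.bit_embed(bits) + self.pos_embed
        x = self.encoder(x).mean(dim=1)          # pool over the sequence
        return torch.sigmoid(self.head(x))       # P(pass) for each encoded test

model = RandomnessClassifier()
sample = torch.randint(0, 2, (8, 256))           # a batch of candidate bit strings
print(model(sample).shape)                       # torch.Size([8, 5])
```

Training such a model would amount to minimizing a binary cross-entropy loss (e.g. nn.BCELoss) against per-test pass/fail labels produced by running the NIST STS on the training sequences.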
Related papers
- Measuring model variability using robust non-parametric testing [5.519968037738177]
This work attempts to describe the relationship between deep net models trained with different random seeds and the behavior of the expected model.
We propose a novel summary statistic for network similarity, referred to as the $\alpha$-trimming level.
We show that the $\alpha$-trimming level is more expressive than different performance metrics like validation accuracy, churn, or expected calibration error when taken alone.
arXiv Detail & Related papers (2024-06-12T15:08:15Z) - Statistical testing of random number generators and their improvement using randomness extraction [0.0]
We present and make available a comprehensive statistical testing environment (STE) based on existing statistical test suites.
The STE can be parameterised to run anything from lightweight (i.e. fast) to intensive testing, the latter going far beyond what is required by certification bodies.
We then present and implement a variety of post-processing methods, in the form of randomness extractors, which improve the RNG's output quality under different sets of assumptions.
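As a concrete illustration of this kind of post-processing, here is a minimal NumPy sketch of Toeplitz hashing, one standard seeded randomness extractor: the weak input bits are multiplied over GF(2) by a binary Toeplitz matrix built from a uniform seed. The block sizes and the seed source are illustrative assumptions, not parameters of the STE.

```python
# Sketch of Toeplitz hashing, a common seeded randomness extractor:
# output = T @ input (mod 2), where the m x n binary matrix T is a Toeplitz
# matrix defined by an (n + m - 1)-bit uniform seed.
import numpy as np

def toeplitz_extract(raw_bits, seed_bits, m):
    """Compress n weak bits into m higher-quality bits using seed_bits as the seed."""
    n = len(raw_bits)
    assert len(seed_bits) == n + m - 1
    # T[i, j] = seed_bits[i - j + n - 1] depends on i - j only, so T is Toeplitz.
    idx = np.arange(m)[:, None] - np.arange(n)[None, :] + n - 1
    T = np.asarray(seed_bits)[idx]
    return (T @ np.asarray(raw_bits)) % 2        # matrix-vector product over GF(2)

rng = np.random.default_rng(0)
raw = rng.integers(0, 2, size=1024)              # weakly random input block
seed = rng.integers(0, 2, size=1024 + 256 - 1)   # uniform seed (assumed available)
print(toeplitz_extract(raw, seed, m=256)[:16])
```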
arXiv Detail & Related papers (2024-03-27T16:05:02Z) - Precise Error Rates for Computationally Efficient Testing [75.63895690909241]
We revisit the question of simple-versus-simple hypothesis testing with an eye towards computational complexity.
An existing test based on linear spectral statistics achieves the best possible tradeoff curve between type I and type II error rates.
arXiv Detail & Related papers (2023-11-01T04:41:16Z) - A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive Coding Networks [65.34977803841007]
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
We show how simply changing the temporal scheduling of the update rule for the synaptic weights leads to an algorithm that is much more efficient and stable than the original one.
arXiv Detail & Related papers (2022-11-16T00:11:04Z) - Validation tests of GBS quantum computers give evidence for quantum advantage with a decoherent target [62.997667081978825]
We use positive-P phase-space simulations of grouped count probabilities as a fingerprint for verifying multi-mode data.
We show how one can disprove faked data, and apply this to a classical count algorithm.
arXiv Detail & Related papers (2022-11-07T12:00:45Z) - Testing randomness of series generated in Bell's experiment [62.997667081978825]
We use a toy fiber-optic-based setup to generate binary series, and evaluate their level of randomness according to Ville's principle.
Series are tested with a battery of standard statistical indicators: Hurst exponent, Kolmogorov complexity, minimum entropy, Takens' embedding dimension, and the Augmented Dickey-Fuller and Kwiatkowski-Phillips-Schmidt-Shin tests to check stationarity.
The level of randomness of series obtained by applying a Toeplitz extractor to rejected series is found to be indistinguishable from that of non-rejected raw ones.
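Of the indicators listed above, min-entropy is the simplest to make concrete; the sketch below estimates the min-entropy per bit of a binary series from k-bit block frequencies. The block length and series sizes are illustrative choices, not values from the paper.

```python
# Sketch of one indicator from such a battery: an empirical min-entropy estimate
# for a binary series, computed over non-overlapping k-bit blocks.
import numpy as np
from collections import Counter

def min_entropy_per_bit(bits, k=8):
    """Estimate min-entropy per bit as -log2(p_max) / k from k-bit block frequencies."""
    blocks = [tuple(bits[i:i + k]) for i in range(0, len(bits) - k + 1, k)]
    p_max = max(Counter(blocks).values()) / len(blocks)
    return -np.log2(p_max) / k

rng = np.random.default_rng(1)
uniform = rng.integers(0, 2, size=100_000)
biased = (rng.random(100_000) < 0.9).astype(int)   # heavily biased toward 1
print(min_entropy_per_bit(uniform))                 # close to 1 bit per bit
print(min_entropy_per_bit(biased))                  # far below 1
```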
arXiv Detail & Related papers (2022-08-31T17:39:29Z) - Learning to Increase the Power of Conditional Randomization Tests [8.883733362171032]
The model-X conditional randomization test is a generic framework for conditional independence testing.
We introduce novel model-fitting schemes that are designed to explicitly improve the power of model-X tests.
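To make the framework concrete, here is a minimal sketch of a model-X conditional randomization test of whether X is independent of Y given Z, where the conditional law of X given Z is assumed known (Gaussian here, purely for illustration) and the test statistic is a simple correlation; choosing or learning better statistics is exactly what the paper targets.

```python
# Minimal sketch of a model-X conditional randomization test: resample X from
# its (known) conditional law given Z, recompute the statistic, and compare
# with the observed value. All distributions below are illustrative.
import numpy as np

def crt_pvalue(x, y, z, sample_x_given_z, statistic, n_resamples=500, seed=0):
    rng = np.random.default_rng(seed)
    t_obs = statistic(x, y, z)
    t_null = np.array([statistic(sample_x_given_z(z, rng), y, z)
                       for _ in range(n_resamples)])
    return (1 + np.sum(t_null >= t_obs)) / (1 + n_resamples)   # valid one-sided p-value

rng = np.random.default_rng(42)
z = rng.normal(size=300)
x = z + rng.normal(size=300)                       # model-X assumption: X | Z ~ N(Z, 1)
y = 0.5 * x + z + rng.normal(size=300)             # Y does depend on X given Z

sample_x_given_z = lambda z, rng: z + rng.normal(size=z.shape)
statistic = lambda x, y, z: abs(np.corrcoef(x - z, y)[0, 1])
print(crt_pvalue(x, y, z, sample_x_given_z, statistic))        # small p-value expected
```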
arXiv Detail & Related papers (2022-07-03T12:29:25Z) - TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision [70.05605071885914]
We propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples.
We show the success of our method on the common benchmark dataset CIFAR10-C.
arXiv Detail & Related papers (2022-05-18T05:43:06Z) - Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
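A Fisher regularizer of this kind can be pictured as an EWC-style quadratic penalty that keeps parameters with large estimated Fisher information close to their pre-adaptation values while an unsupervised objective (e.g. prediction entropy) drives adaptation; the sketch below is a generic illustration of that idea, not the paper's exact formulation.

```python
# Generic sketch of a Fisher-weighted anti-forgetting penalty for test-time
# adaptation: important parameters (large diagonal Fisher) are anchored to
# their pre-adaptation values while an entropy objective adapts the model.
import torch

def entropy_loss(logits):
    probs = logits.softmax(dim=-1)
    return -(probs * probs.log()).sum(dim=-1).mean()

def fisher_penalty(model, anchors, fisher, lam=0.1):
    return lam * sum((fisher[n] * (p - anchors[n]) ** 2).sum()
                     for n, p in model.named_parameters())

model = torch.nn.Linear(10, 3)
anchors = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # placeholder Fisher
x = torch.randn(4, 10)                                                 # a test-time batch
loss = entropy_loss(model(x)) + fisher_penalty(model, anchors, fisher)
loss.backward()
```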
arXiv Detail & Related papers (2022-04-06T06:39:40Z) - Composite Goodness-of-fit Tests with Kernels [19.744607024807188]
We propose kernel-based hypothesis tests for the challenging composite testing problem.
Our tests make use of minimum distance estimators based on the maximum mean discrepancy and the kernel Stein discrepancy.
As our main result, we show that we are able to estimate the parameter and conduct our test on the same data, while maintaining a correct test level.
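For concreteness, the sketch below computes the maximum mean discrepancy between a data sample and draws from a candidate model using a Gaussian kernel, the distance underlying the minimum-distance estimators mentioned above; the bandwidth and sample sizes are illustrative, and the paper's main point (estimating and testing on the same data at the correct level) is not reproduced here.

```python
# Sketch of the (biased, V-statistic) MMD^2 estimate between two samples
# with a Gaussian kernel, the discrepancy used by minimum-MMD estimators.
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_biased(x, y, bandwidth=1.0):
    kxx = gaussian_kernel(x, x, bandwidth).mean()
    kyy = gaussian_kernel(y, y, bandwidth).mean()
    kxy = gaussian_kernel(x, y, bandwidth).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, size=(200, 1))
model_same = rng.normal(loc=0.0, size=(200, 1))    # draws from a well-fitted model
model_off = rng.normal(loc=1.0, size=(200, 1))     # draws from a mis-specified model
print(mmd2_biased(data, model_same))                # near zero
print(mmd2_biased(data, model_off))                 # clearly larger
```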
arXiv Detail & Related papers (2021-11-19T15:25:06Z) - Searching for evidence of algorithmic randomness and incomputability in the output of quantum random number generators [0.0]
Ideal quantum random number generators (QRNGs) can produce algorithmically random and thus incomputable sequences.
We present the results of a search for algorithmic randomness and incomputability in the output from two different QRNGs.
arXiv Detail & Related papers (2021-01-04T21:17:34Z)