A Search for Good Pseudo-random Number Generators : Survey and Empirical Studies
- URL: http://arxiv.org/abs/1811.04035v2
- Date: Sun, 17 Aug 2025 14:44:16 GMT
- Title: A Search for Good Pseudo-random Number Generators : Survey and Empirical Studies
- Authors: Kamalika Bhattacharjee, Sukanta Das
- Abstract summary: The genres of PRNGs developed so far are explored and classified into three groups -- linear congruential generator based, linear feedback shift register based, and cellular automata based. Overall, $30$ PRNGs are selected in this way, on which two types of empirical testing are done -- blind statistical tests with the Diehard battery of tests, the battery \emph{rabbit} of the TestU01 library, and the NIST statistical test suite.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper aims to identify so-called \emph{good} generators through a brief survey of the generators developed over the history of pseudo-random number generators (PRNGs), verifying their claims and ranking them based on strong empirical tests run on the same platforms. To do this, the genres of PRNGs developed so far are explored and classified into three groups -- linear congruential generator based, linear feedback shift register based, and cellular automata based. From each group, well-known, widely used generators that claim to be `\emph{good}' are chosen. Overall, $30$ PRNGs are selected in this way, on which two types of empirical testing are done -- blind statistical tests with the Diehard battery of tests, the battery \emph{rabbit} of the TestU01 library, and the NIST statistical test suite, as well as graphical tests (lattice test and space-time diagram test). Finally, the selected PRNGs are divided into $24$ groups and ranked according to their overall performance in all empirical tests.
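The first of the three generator families named in the abstract is also the simplest to illustrate. The sketch below is a minimal linear congruential generator using the classic Park-Miller ("minstd") parameters; these values are illustrative choices, not taken from the paper.

```python
# Minimal sketch of a linear congruential generator (LCG), one of the
# three PRNG families surveyed. The parameters are the classic
# Park-Miller "minstd" values: a = 16807, c = 0, m = 2^31 - 1.

def lcg(seed, a=16807, c=0, m=2**31 - 1):
    """Yield an endless stream of pseudo-random integers in [0, m)."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen = lcg(seed=42)
first = [next(gen) for _ in range(3)]
```

LFSR- and CA-based generators follow the same update-state-then-emit pattern, but replace the modular recurrence with bitwise shift-register or cellular-automaton rules.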
Related papers
- Tuning Random Generators: Property-Based Testing as Probabilistic Programming [19.843056237039516]
Property-based testing (PBT) validates software against an executable specification by evaluating it on randomly generated inputs. The standard way that PBT users generate test inputs is via generators that describe how to sample test inputs through random choices. We develop techniques for the automatic and offline tuning of generators.
arXiv Detail & Related papers (2025-08-20T03:45:13Z) - Statistical Quality and Reproducibility of Pseudorandom Number Generators in Machine Learning technologies [0.0]
We compare the statistical quality of PRNGs used in ML frameworks against their original C implementations. Our findings challenge claims of statistical robustness, revealing that even generators labeled "crush-resistant" (e.g., PCG, Philox) may fail certain statistical tests.
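The flavour of the statistical testing involved can be seen in the simplest check of the NIST suite, the monobit frequency test: the 0 and 1 bits of a good generator's output should be balanced. The sketch below applies it to Python's built-in Mersenne Twister; the 0.01 rejection threshold follows NIST SP 800-22, while the seed and sample size are arbitrary illustrative choices.

```python
# Hedged sketch of the NIST monobit frequency test (SP 800-22):
# convert bits to +/-1, sum them, normalise by sqrt(n), and compute
# a two-sided p-value via the complementary error function.
import math
import random

def monobit_pvalue(bits):
    """Return the p-value of the monobit frequency test on a bit list."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # +1 for each 1-bit, -1 for each 0-bit
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

rng = random.Random(12345)                  # Mersenne Twister, fixed seed
bits = [rng.getrandbits(1) for _ in range(10_000)]
p = monobit_pvalue(bits)
passed = p >= 0.01                          # NIST rejection threshold
```

A perfectly alternating stream 1,0,1,0,… has a zero bit-count imbalance and scores a p-value of exactly 1.0, which is why the full suites combine many such tests rather than relying on any single one.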
arXiv Detail & Related papers (2025-07-02T09:38:00Z) - How to Select Datapoints for Efficient Human Evaluation of NLG Models? [57.60407340254572]
We develop and analyze a suite of selectors to get the most informative datapoints for human evaluation. We show that selectors based on variance in automated metric scores, diversity in model outputs, or Item Response Theory outperform random selection. In particular, we introduce source-based estimators, which predict item usefulness for human evaluation based only on the source texts.
arXiv Detail & Related papers (2025-01-30T10:33:26Z) - Learning test generators for cyber-physical systems [2.4171019220503402]
Black-box runtime verification methods for cyber-physical systems can be used to discover errors in systems whose inputs and outputs are expressed as signals over time.
Existing methods, such as requirement falsification, often focus on finding a single input that is a counterexample to system correctness.
We show how to create test generators that can produce multiple and diverse counterexamples for a single requirement.
arXiv Detail & Related papers (2024-10-04T07:34:02Z) - To what extent are multiple pendulum systems viable in pseudo-random number generation? [0.0]
This paper explores the development and viability of an alternative pseudorandom number generator (PRNG) based on multiple pendulum systems.
Traditional PRNGs, notably the one implemented in Java's Random class, suffer from predictability, which gives rise to exploitability.
This study proposes a novel PRNG designed using ordinary differential equations, physics modeling, and chaos theory.
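The paper's pendulum model is not reproduced here, but the general idea of deriving pseudo-random bits from a chaotic dynamical system can be sketched with a far simpler chaotic map. The logistic map below, run at the fully chaotic parameter r = 4, is a toy stand-in and not the authors' construction.

```python
# Toy chaos-based bit generator (illustrative only): iterate the
# logistic map x -> r*x*(1 - x) at r = 4, discard a transient, and
# extract one bit per step by thresholding the state at 0.5.
def logistic_bits(x0, n, r=4.0, burn_in=100):
    """Extract n bits from the logistic map seeded at x0 in (0, 1)."""
    x = x0
    for _ in range(burn_in):        # discard the initial transient
        x = r * x * (1.0 - x)
    bits = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

stream = logistic_bits(x0=0.123456, n=16)
```

Sensitivity to the seed x0 gives such generators their apparent unpredictability, though chaotic maps alone, without post-processing, rarely pass the full statistical batteries.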
arXiv Detail & Related papers (2024-04-15T00:28:51Z) - Statistical testing of random number generators and their improvement using randomness extraction [0.0]
Random number generators (RNGs) are notoriously challenging to build and test, especially for cryptographic applications. We design, implement, and present various post-processing methods, using randomness extractors, to improve the RNG output quality. We introduce a comprehensive statistical testing environment, based on existing test suites, that can be parametrised for lightweight (fast) to intensive testing.
arXiv Detail & Related papers (2024-03-27T16:05:02Z) - A Block Metropolis-Hastings Sampler for Controllable Energy-based Text Generation [78.81021361497311]
We develop a novel Metropolis-Hastings (MH) sampler that proposes re-writes of the entire sequence in each step via iterative prompting of a large language model.
Our new sampler (a) allows for more efficient and accurate sampling from a target distribution and (b) allows generation length to be determined through the sampling procedure rather than fixed in advance.
arXiv Detail & Related papers (2023-12-07T18:30:15Z) - BiasTestGPT: Using ChatGPT for Social Bias Testing of Language Models [73.29106813131818]
Bias testing is currently cumbersome, since the test sentences are generated from a limited set of manual templates or need expensive crowd-sourcing.
We propose using ChatGPT for the controllable generation of test sentences, given any arbitrary user-specified combination of social groups and attributes.
We present an open-source comprehensive bias testing framework (BiasTestGPT), hosted on HuggingFace, that can be plugged into any open-source PLM for bias testing.
arXiv Detail & Related papers (2023-02-14T22:07:57Z) - Joint Generator-Ranker Learning for Natural Language Generation [99.16268050116717]
JGR is a novel joint training algorithm that integrates the generator and the ranker in a single framework.
By iteratively updating the generator and the ranker, JGR can effectively harmonize their learning and enhance their quality jointly.
arXiv Detail & Related papers (2022-06-28T12:58:30Z) - A Well-Composed Text is Half Done! Composition Sampling for Diverse Conditional Generation [79.98319703471596]
We propose Composition Sampling, a simple but effective method to generate diverse outputs for conditional generation of higher quality.
It builds on recently proposed plan-based neural generation models that are trained to first create a composition of the output and then generate by conditioning on it and the input.
arXiv Detail & Related papers (2022-03-28T21:24:03Z) - An Evaluation Study of Generative Adversarial Networks for Collaborative Filtering [75.83628561622287]
This work successfully replicates the results published in the original paper and discusses the impact of certain differences between the CFGAN framework and the model used in the original evaluation.
The work further expands the experimental analysis comparing CFGAN against a selection of simple and well-known properly optimized baselines, observing that CFGAN is not consistently competitive against them despite its high computational cost.
arXiv Detail & Related papers (2022-01-05T20:53:27Z) - Sampling-Decomposable Generative Adversarial Recommender [84.05894139540048]
We propose a Sampling-Decomposable Generative Adversarial Recommender (SD-GAR)
In the framework, the divergence between the generator and the optimum is compensated by self-normalized importance sampling.
We extensively evaluate the proposed algorithm with five real-world recommendation datasets.
arXiv Detail & Related papers (2020-11-02T13:19:10Z) - Noisy Adaptive Group Testing using Bayesian Sequential Experimental Design [63.48989885374238]
When the infection prevalence of a disease is low, Dorfman showed 80 years ago that testing groups of people can prove more efficient than testing people individually.
Our goal in this paper is to propose new group testing algorithms that can operate in a noisy setting.
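Dorfman's classic two-stage scheme mentioned above admits a one-line cost formula: with prevalence p and pool size k, a pool is tested once and its k members are retested individually only if the pool is positive, giving an expected 1/k + 1 - (1-p)^k tests per person. The sketch below uses illustrative numbers, not figures from the paper.

```python
# Expected tests per person under Dorfman's two-stage pooled testing:
# one pooled test shared by k people, plus k individual retests with
# probability 1 - (1-p)^k that the pool contains at least one positive.
def expected_tests_per_person(p, k):
    return 1.0 / k + 1.0 - (1.0 - p) ** k

# At 1% prevalence, pools of 10 need roughly 0.196 tests per person,
# about a five-fold saving over testing everyone individually.
cost = expected_tests_per_person(p=0.01, k=10)
```

As prevalence rises, the pooled stage is positive more often and the saving shrinks, which is why noisy and adaptive variants such as the one proposed here tune the grouping to the setting.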
arXiv Detail & Related papers (2020-04-26T23:41:33Z) - Self-Adversarial Learning with Comparative Discrimination for Text Generation [111.18614166615968]
We propose a novel self-adversarial learning (SAL) paradigm for improving GANs' performance in text generation.
During training, SAL rewards the generator when its currently generated sentence is found to be better than its previously generated samples.
Experiments on text generation benchmark datasets show that our proposed approach substantially improves both the quality and the diversity of the generated text.
arXiv Detail & Related papers (2020-01-31T07:50:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.