Machine Learning of the Prime Distribution
- URL: http://arxiv.org/abs/2403.12588v2
- Date: Sun, 2 Jun 2024 17:18:40 GMT
- Title: Machine Learning of the Prime Distribution
- Authors: Alexander Kolpakov, A. Alistair Rocke
- Abstract summary: We provide a theoretical argument explaining the experimental observations of Yang-Hui He about the learnability of primes.
We also posit that the Erdős-Kac law is very unlikely to be discovered by current machine learning techniques.
- Score: 49.84018914962972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the present work we use maximum entropy methods to derive several theorems in probabilistic number theory, including a version of the Hardy-Ramanujan Theorem. We also provide a theoretical argument explaining the experimental observations of Yang-Hui He about the learnability of primes, and posit that the Erdős-Kac law is very unlikely to be discovered by current machine learning techniques. Numerical experiments that we perform corroborate our theoretical findings.
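Since the abstract refers to numerical experiments around the Erdős-Kac law, the sketch below illustrates the kind of check one can run. It is a hypothetical example, not the authors' code: the Erdős-Kac law states that for n drawn uniformly from [1, N], the normalized quantity (ω(n) - log log n)/sqrt(log log n), where ω(n) counts the distinct prime factors of n, tends to a standard normal distribution as N grows (the Hardy-Ramanujan theorem is the corresponding statement that ω(n) is close to log log n for almost all n).

```python
# Hypothetical numerical check of the Erdős-Kac law (illustration only, not the paper's code).
import math
import random

def omega(n: int) -> int:
    """Count the distinct prime factors of n by trial division."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        count += 1
    return count

def erdos_kac_samples(N: int = 10**6, samples: int = 20_000, seed: int = 0) -> list[float]:
    """Draw random n in [3, N] and return (omega(n) - log log n) / sqrt(log log n)."""
    rng = random.Random(seed)
    vals = []
    for _ in range(samples):
        n = rng.randint(3, N)  # start at 3 so that log log n > 0
        mu = math.log(math.log(n))
        vals.append((omega(n) - mu) / math.sqrt(mu))
    return vals

if __name__ == "__main__":
    vals = erdos_kac_samples()
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    # The limiting distribution is standard normal (mean 0, variance 1); convergence in N
    # is very slow, so moderate deviations are expected at N = 10^6.
    print(f"sample mean = {mean:.3f}, sample variance = {var:.3f}")
```

Comparing a histogram of these values against the standard normal density gives a rough visual version of the same check; at moderate N the fit is loose because the convergence rate in the Erdős-Kac theorem is only on the order of 1/sqrt(log log N).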
Related papers
- Another quantum version of Sanov theorem [53.64687146666141]
We study how to extend Sanov's theorem to the quantum setting.
We propose another quantum version of Sanov theorem by considering the quantum analog of the empirical distribution.
arXiv Detail & Related papers (2024-07-26T07:46:30Z)
- A Theory of Machine Learning [0.0]
We show that this theory challenges common assumptions in statistical and computational learning theory.
We briefly discuss some case studies from natural language processing and macroeconomics from the perspective of the new theory.
arXiv Detail & Related papers (2024-07-07T23:57:10Z)
- Experimental test of the Crooks fluctuation theorem in a single nuclear spin [9.14219151636117]
We experimentally test the Crooks fluctuation theorem in a quantum spin system.
Our results provide quantum insight into fluctuations, and the methods we developed can be used to study other quantum thermodynamic theorems.
arXiv Detail & Related papers (2024-01-31T08:17:32Z)
- Connecting classical finite exchangeability to quantum theory [69.62715388742298]
Exchangeability is a fundamental concept in probability theory and statistics.
We show how a de Finetti-like representation theorem for finitely exchangeable sequences requires a mathematical representation which is formally equivalent to quantum theory.
arXiv Detail & Related papers (2023-06-06T17:15:19Z)
- One-shot and asymptotic classical capacity in general physical theories [0.0]
We consider the hypothesis testing relative entropy and the one-shot classical capacity, that is, the optimal rate at which classical information can be transmitted.
Applying two derived bounds, we prove the equivalence of the classical capacity and the hypothesis testing relative entropy in any general physical theory.
arXiv Detail & Related papers (2023-03-07T18:52:17Z)
- Correspondence Between the Energy Equipartition Theorem in Classical Mechanics and its Phase-Space Formulation in Quantum Mechanics [62.997667081978825]
In quantum mechanics, the energy per degree of freedom is not equally distributed.
We show that in the high-temperature regime, the classical result is recovered.
arXiv Detail & Related papers (2022-05-24T20:51:03Z)
- Williamson theorem in classical, quantum, and statistical physics [0.0]
We show that applying the Williamson theorem reveals the normal-mode coordinates and frequencies of the system in the Hamiltonian scenario.
A more advanced topic concerning uncertainty relations is developed to show, once more, the theorem's utility from a distinct and modern perspective.
arXiv Detail & Related papers (2021-06-22T17:59:59Z)
- General Probabilistic Theories with a Gleason-type Theorem [0.0]
Gleason-type theorems for quantum theory allow one to recover the quantum state space.
We identify the class of general probabilistic theories which also admit Gleason-type theorems.
arXiv Detail & Related papers (2020-05-28T17:29:29Z)
- Marginal likelihood computation for model selection and hypothesis testing: an extensive review [66.37504201165159]
This article provides a comprehensive survey of the state of the art on this topic.
We highlight limitations, benefits, connections and differences among the different techniques.
Problems and possible solutions with the use of improper priors are also described.
arXiv Detail & Related papers (2020-05-17T18:31:58Z)
- Learning to Prove Theorems by Learning to Generate Theorems [71.46963489866596]
We learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover.
Experiments on real-world tasks demonstrate that synthetic data from our approach improves the theorem prover.
arXiv Detail & Related papers (2020-02-17T16:06:02Z)