A Theory of Machine Learning
- URL: http://arxiv.org/abs/2407.05520v1
- Date: Sun, 7 Jul 2024 23:57:10 GMT
- Title: A Theory of Machine Learning
- Authors: Jinsook Kim, Jinho Kang
- Abstract summary: We propose a theory on which machines learn a function when they successfully compute it, and show that this theory challenges common assumptions in statistical and computational learning theory.
We also briefly discuss case studies from natural language processing and macroeconomics from the perspective of the new theory.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We critically review three major theories of machine learning and provide a new theory according to which machines learn a function when the machines successfully compute it. We show that this theory challenges common assumptions in the statistical and the computational learning theories, for it implies that learning true probabilities is equivalent neither to obtaining a correct calculation of the true probabilities nor to obtaining an almost-sure convergence to them. We also briefly discuss some case studies from natural language processing and macroeconomics from the perspective of the new theory.
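One way to see the wedge the abstract drives between convergence and computation (an illustration added here, not taken from the paper): for a Bernoulli source with irrational success probability $p$, the empirical frequency
$$\hat p_n = \frac{1}{n}\sum_{i=1}^{n} X_i \xrightarrow{\ \text{a.s.}\ } p,$$
yet $\hat p_n$ is rational for every finite $n$, so the learner never computes $p$ exactly at any stage despite converging to it almost surely.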
Related papers
- Review and Prospect of Algebraic Research in Equivalent Framework between Statistical Mechanics and Machine Learning Theory [0.0]
This paper is dedicated to the memory of Professor Huzihiro Araki, a pioneering founder of algebraic research in both statistical mechanics and quantum field theory.
arXiv Detail & Related papers (2024-05-31T11:04:13Z)
- Machine Learning of the Prime Distribution [49.84018914962972]
We provide a theoretical argument explaining the experimental observations of Yang-Hui He about the learnability of primes.
We also posit that the Erdős-Kac law is very unlikely to be discovered by current machine learning techniques.
arXiv Detail & Related papers (2024-03-19T09:47:54Z)
- Three Conjectures on Unexpectedness [0.5874142059884521]
This paper lays the groundwork for three theoretical conjectures.
First, unexpectedness can be seen as a generalization of Bayes' rule.
Second, the frequentist core of unexpectedness can be connected to the function of tracking ergodic properties of the world.
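For orientation (a standard formalization from the Simplicity Theory literature that this line of work draws on; the notation is ours and may differ from the paper's): unexpectedness is the gap between generation complexity and description complexity,
$$U(s) = C_w(s) - C(s), \qquad p(s) \approx 2^{-U(s)},$$
where $C_w(s)$ measures how hard the world must work to generate situation $s$ and $C(s)$ is the length of its shortest description; the correspondence $p \approx 2^{-U}$ is what lets unexpectedness play the role of a Bayesian update.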
arXiv Detail & Related papers (2023-11-15T08:24:41Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
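As a sketch of the mechanism (one common textbook formulation of PC with unit variances, not specific to this survey): each layer $l$ holds an activity estimate $x_l$ and a top-down prediction error
$$\varepsilon_l = x_l - W_l f(x_{l+1}),$$
inference runs gradient descent on the energy $E = \tfrac{1}{2}\sum_l \lVert \varepsilon_l \rVert^2$,
$$\dot{x}_l \propto -\varepsilon_l + f'(x_l) \odot \big(W_{l-1}^{\top} \varepsilon_{l-1}\big),$$
and the weights follow the local Hebbian rule $\Delta W_l \propto \varepsilon_l\, f(x_{l+1})^{\top}$.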
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- On the Joint Interaction of Models, Data, and Features [82.60073661644435]
We introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data and model through features.
Based on these observations, we propose a conceptual framework for feature learning.
Under this framework, the expected accuracy for a single hypothesis and agreement for a pair of hypotheses can both be derived in closed-form.
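For reference, the two quantities are standardly defined as (our gloss, with $\mathcal{D}$ the data distribution and $h, h_1, h_2$ hypotheses drawn from the training procedure):
$$\mathrm{Acc}(h) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\mathbf{1}\{h(x)=y\}\big], \qquad \mathrm{Agr}(h_1,h_2) = \mathbb{E}_{x\sim\mathcal{D}}\big[\mathbf{1}\{h_1(x)=h_2(x)\}\big];$$
the paper's claim is that both expectations admit closed forms under its feature-learning framework.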
arXiv Detail & Related papers (2023-06-07T21:35:26Z)
- The role of prior information and computational power in Machine Learning [0.0]
We discuss how prior information and computational power can be employed to solve a learning problem.
We argue that employing high computational power offers the advantage of higher performance.
arXiv Detail & Related papers (2022-10-31T20:39:53Z)
- Incompatibility of observables, channels and instruments in information theories [68.8204255655161]
We study the notion of compatibility for tests of an operational probabilistic theory.
We show that a theory admits of incompatible tests if and only if some information cannot be extracted without disturbance.
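For orientation (the standard operational definition in the finite, discrete case; notation ours): two tests $\{A_i\}_i$ and $\{B_j\}_j$ are compatible when both arise as marginals of a single joint test $\{C_{ij}\}_{ij}$,
$$A_i = \sum_j C_{ij}, \qquad B_j = \sum_i C_{ij},$$
and the result says that some pair of tests fails to admit such a joint refinement exactly when some information cannot be extracted without disturbance.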
arXiv Detail & Related papers (2022-04-17T08:44:29Z)
- Computation in a general physical setting [0.0]
This paper reviews and extends some results on the computational ability of quantum theory.
It provides a refined version of the conjecture that a quantum computer can simulate the computation in any theory.
It ends by describing an important relation between this conjecture and delegated computation, similar to the relation between quantum non-locality and device-independent cryptography.
arXiv Detail & Related papers (2021-08-25T20:00:20Z)
- Constrained Learning with Non-Convex Losses [119.8736858597118]
Though learning has become a core technology of modern information processing, there is now ample evidence that it can lead to biased, unsafe, and prejudiced solutions.
arXiv Detail & Related papers (2021-03-08T23:10:33Z)
- Quantum field-theoretic machine learning [0.0]
We recast the $\phi^{4}$ scalar field theory as a machine learning algorithm within the mathematically rigorous framework of Markov random fields.
Neural networks are additionally derived from the $\phi^{4}$ theory; these can be viewed as generalizations of conventional neural networks.
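Concretely (one standard lattice parametrization, assumed here for illustration): with lattice sites $x$ and unit vectors $\hat\mu$, the action
$$S[\phi] = \sum_{x} \Big[ -\kappa \sum_{\mu} \phi_x\, \phi_{x+\hat\mu} + \phi_x^{2} + \lambda\,\big(\phi_x^{2}-1\big)^{2} \Big]$$
induces a Boltzmann distribution $p(\phi) \propto e^{-S[\phi]}$ whose couplings are nearest-neighbour only, so by the Hammersley-Clifford theorem $p(\phi)$ is a Markov random field; sampling the field theory is then structurally the same problem as sampling an energy-based model.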
arXiv Detail & Related papers (2021-02-18T16:12:51Z)
- A Note on High-Probability versus In-Expectation Guarantees of Generalization Bounds in Machine Learning [95.48744259567837]
Statistical machine learning theory often tries to give generalization guarantees of machine learning models.
Statements made about the performance of machine learning models have to take the sampling process into account.
We show how one may transform one type of statement into the other.
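A minimal instance of such a transformation (the standard Markov-inequality route; notation ours): if a non-negative generalization gap obeys the in-expectation bound $\mathbb{E}_S[\Delta(S)] \le B$, then for any $\delta \in (0,1)$,
$$\Pr_S\!\big[\Delta(S) \ge B/\delta\big] \le \delta,$$
so an in-expectation guarantee converts into a high-probability one, at the cost of a $1/\delta$ factor rather than the $\log(1/\delta)$ typical of direct high-probability bounds.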
arXiv Detail & Related papers (2020-10-06T09:41:35Z)