Learning principle and mathematical realization of the learning
mechanism in the brain
- URL: http://arxiv.org/abs/2311.13341v1
- Date: Wed, 22 Nov 2023 12:08:01 GMT
- Title: Learning principle and mathematical realization of the learning
mechanism in the brain
- Authors: Taisuke Katayose
- Abstract summary: We call it the learning principle; it follows that all learning is equivalent to estimating the probability of the input data.
We show that conventional supervised learning is equivalent to estimating conditional probabilities, and we use this equivalence to make supervised learning more effective and more general.
We propose a new method of defining the values of the estimated probability using differentiation, and show that unsupervised learning can be performed on an arbitrary dataset without any prior knowledge.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep learning has achieved remarkable success, there is no clear
explanation of why it works so well. To discuss this question quantitatively, we
need a mathematical framework that explains what learning is in the first place.
After careful consideration, we constructed a mathematical framework that provides
a unified understanding of all types of learning, including deep learning and
learning in the brain. We call it the learning principle, and it follows that all
learning is equivalent to estimating the probability of the input data. We not
only derive this principle but also describe its application to actual machine
learning models. For example, we find that conventional supervised learning is
equivalent to estimating conditional probabilities, and we use this equivalence
to make supervised learning more effective and more general. We also propose a
new method of defining the values of the estimated probability using
differentiation, and show that unsupervised learning can be performed on an
arbitrary dataset without any prior knowledge; that is, the method is
general-purpose machine learning in the true sense. Moreover, we describe the
learning mechanism in the brain by considering the time evolution of a fully or
partially connected model and applying this new method. The learning principle
provides solutions to many unsolved problems in deep learning and cognitive
neuroscience.
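To make the abstract's central claim concrete: "supervised learning is equivalent to estimating conditional probabilities" matches the standard maximum-likelihood view of classification, in which minimizing cross-entropy is the same as maximizing the log-likelihood of p(y | x). The sketch below is not the paper's implementation; the toy dataset, model, and hyperparameters are invented for illustration.

```python
# Minimal sketch (not the paper's code): a softmax classifier trained with
# cross-entropy is a conditional-probability estimator, since minimizing
# cross-entropy equals maximizing the log-likelihood of p(y | x).
# The dataset, model size, and learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs in 2-D with labels 0 and 1.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.repeat([0, 1], 100)
onehot = np.eye(2)[y]

W, b = np.zeros((2, 2)), np.zeros(2)  # linear logits: features -> 2 classes

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for step in range(500):
    p = softmax(X @ W + b)               # model's estimate of p(y | x)
    # Cross-entropy = average negative log-likelihood of the conditional model.
    loss = -np.mean(np.sum(onehot * np.log(p + 1e-12), axis=1))
    grad_logits = (p - onehot) / len(X)  # gradient of the loss w.r.t. logits
    W -= 0.5 * (X.T @ grad_logits)
    b -= 0.5 * grad_logits.sum(axis=0)

# The trained outputs approximate the true conditional probabilities, so
# "learning" here is literally probability estimation.
print(f"final cross-entropy (avg. -log p(y|x)): {loss:.3f}")
```

On this reading, the unsupervised case the abstract describes would estimate p(x) itself rather than p(y | x), which is the unified view the learning principle argues for.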
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Learning Beyond Pattern Matching? Assaying Mathematical Understanding in LLMs [58.09253149867228]
This paper assesses the domain knowledge of LLMs through their understanding of the different mathematical skills required to solve problems.
Motivated by the use of LLMs as a general scientific assistant, we propose NTKEval to assess changes in an LLM's probability distribution.
Our systematic analysis finds evidence of domain understanding during in-context learning.
Certain instruction tuning leads to similar performance changes irrespective of the training data, suggesting a lack of domain understanding across different skills.
arXiv Detail & Related papers (2024-05-24T12:04:54Z)
- Ticketed Learning-Unlearning Schemes [57.89421552780526]
We propose a new ticketed model for learning-unlearning.
We provide space-efficient ticketed learning-unlearning schemes for a broad family of concept classes.
arXiv Detail & Related papers (2023-06-27T18:54:40Z)
- Learning by Applying: A General Framework for Mathematical Reasoning via Enhancing Explicit Knowledge Learning [47.96987739801807]
We propose Learning by Applying (LeAp), a framework to enhance existing models (backbones) in a principled way through explicit knowledge learning.
In LeAp, we perform knowledge learning in a novel problem-knowledge-expression paradigm.
We show that LeAp improves all backbones' performances, learns accurate knowledge, and achieves a more interpretable reasoning process.
arXiv Detail & Related papers (2023-02-11T15:15:41Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
A theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Continual Learning with Deep Learning Methods in an Application-Oriented Context [0.0]
An important research area of Artificial Intelligence (AI) deals with the automatic derivation of knowledge from data.
One type of machine learning algorithm that can be categorized as a "deep learning" model is the Deep Neural Network (DNN).
DNNs are affected by a problem that prevents new knowledge from being added to an existing knowledge base.
arXiv Detail & Related papers (2022-07-12T10:13:33Z)
- The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning in equilibrium recurrent neural networks, deep equilibrium models, or meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z)
- From Undecidability of Non-Triviality and Finiteness to Undecidability of Learnability [0.0]
We show that there is no general-purpose procedure for rigorously evaluating whether newly proposed models indeed successfully learn from data.
For PAC binary classification, uniform and universal online learning, and exact learning through teacher-learner interactions, learnability is in general undecidable.
There is no one-size-fits-all algorithm for deciding whether a machine learning model can be successful.
arXiv Detail & Related papers (2021-06-02T18:00:04Z)
- The Relativity of Induction [0.0]
We show that Occam's razor and parsimony principles are insufficient to ground learning.
We derive and demonstrate a set of relativistic principles that yield clearer insight into the nature and dynamics of learning.
arXiv Detail & Related papers (2020-09-22T15:17:26Z)