The Relativity of Induction
- URL: http://arxiv.org/abs/2009.10613v1
- Date: Tue, 22 Sep 2020 15:17:26 GMT
- Title: The Relativity of Induction
- Authors: Larry Muhlstein
- Abstract summary: We show that Occam's razor and parsimony principles are insufficient to ground learning.
We derive and demonstrate a set of relativistic principles that yield clearer insight into the nature and dynamics of learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been much recent discussion about why deep learning
algorithms perform better than we would theoretically expect. To gain insight into this
question, it helps to improve our understanding of how learning works. We
explore the core problem of generalization and show that long-accepted Occam's
razor and parsimony principles are insufficient to ground learning. Instead, we
derive and demonstrate a set of relativistic principles that yield clearer
insight into the nature and dynamics of learning. We show that concepts of
simplicity are fundamentally contingent, that all learning operates relative to
an initial guess, and that generalization cannot be measured or strongly
inferred, but that it can be expected given enough observation. Using these
principles, we reconstruct our understanding in terms of distributed learning
systems whose components inherit beliefs and update them. We then apply this
perspective to elucidate the nature of some real-world inductive processes
including deep learning.
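Two of the abstract's claims have a simple Bayesian reading: learning operates relative to an initial guess (a prior), and generalization can be expected given enough observation. The sketch below is our own minimal illustration under that reading, not code from the paper; the learner names, priors, and true coin bias are all assumed for the example.

# Minimal sketch (assumed illustration, not code from the paper): two
# Beta-Bernoulli learners with different "initial guesses" observe the
# same coin and update their beliefs about its bias.
import random

random.seed(0)
TRUE_P = 0.7  # hypothetical ground-truth bias of the coin

# Beta(alpha, beta) priors encoding each learner's initial guess.
priors = {"optimist": (8.0, 2.0), "pessimist": (2.0, 8.0)}

for n_obs in (0, 10, 100, 10_000):
    heads = sum(random.random() < TRUE_P for _ in range(n_obs))
    for name, (a, b) in priors.items():
        # Conjugate Beta-Bernoulli update: posterior mean over the bias.
        post_mean = (a + heads) / (a + b + n_obs)
        print(f"n={n_obs:>6}  {name:9s} posterior mean = {post_mean:.3f}")

With no data, each learner simply reports its prior (0.8 vs. 0.2), so what is "learned" is entirely relative to the initial guess; after ten thousand flips both posterior means sit near the true bias of 0.7, illustrating how agreement can be expected given enough observation.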
Related papers
- Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs [55.317267269115845]
Chain-of-Knowledge (CoK) is a comprehensive framework for knowledge reasoning.
CoK includes methodologies for both dataset construction and model learning.
We conduct extensive experiments with KnowReason.
arXiv Detail & Related papers (2024-06-30T10:49:32Z)
- What's in an embedding? Would a rose by any embedding smell as sweet? [0.0]
Large Language Models (LLMs) are often criticized for lacking true "understanding" and the ability to "reason" with their knowledge.
We suggest that LLMs do develop a kind of empirical "understanding" that is "geometry"-like, which seems adequate for a range of applications in NLP.
To overcome these limitations, we suggest that LLMs should be integrated with an "algebraic" representation of knowledge that includes symbolic AI elements.
arXiv Detail & Related papers (2024-06-11T01:10:40Z)
- Learning principle and mathematical realization of the learning mechanism in the brain [0.0]
We call this the learning principle; it follows that all learning is equivalent to estimating the probability of input data.
We show that conventional supervised learning is equivalent to estimating conditional probabilities, and succeed in making supervised learning more effective and more general.
We propose a new method of defining the values of estimated probability using differentiation, and show that unsupervised learning can be performed on an arbitrary dataset without any prior knowledge.
arXiv Detail & Related papers (2023-11-22T12:08:01Z)
- A Definition of Open-Ended Learning Problems for Goal-Conditioned Agents [18.2920082469313]
We argue that open-ended learning is generally conceived as a composite notion encompassing a set of diverse properties.
We focus on the subset of open-ended goal-conditioned reinforcement learning problems in which agents can learn a growing repertoire of goal-driven skills.
arXiv Detail & Related papers (2023-11-01T07:37:27Z)
- Investigating Forgetting in Pre-Trained Representations Through Continual Learning [51.30807066570425]
We study the effect of representation forgetting on the generality of pre-trained language models.
We find that this generality is destroyed in various pre-trained LMs, and that syntactic and semantic knowledge is forgotten through continual learning.
arXiv Detail & Related papers (2023-05-10T08:27:59Z)
- Learning by Applying: A General Framework for Mathematical Reasoning via Enhancing Explicit Knowledge Learning [47.96987739801807]
We propose Learning by Applying (LeAp), a framework that enhances existing models (backbones) in a principled way through explicit knowledge learning.
In LeAp, we perform knowledge learning in a novel problem-knowledge-expression paradigm.
We show that LeAp improves all backbones' performances, learns accurate knowledge, and achieves a more interpretable reasoning process.
arXiv Detail & Related papers (2023-02-11T15:15:41Z)
- A Comprehensive Survey of Continual Learning: Theory, Method and Application [64.23253420555989]
We present a comprehensive survey of continual learning, seeking to bridge the basic settings, theoretical foundations, representative methods, and practical applications.
We summarize the general objectives of continual learning as ensuring a proper stability-plasticity trade-off and an adequate intra/inter-task generalizability in the context of resource efficiency.
arXiv Detail & Related papers (2023-01-31T11:34:56Z)
- HALMA: Humanlike Abstraction Learning Meets Affordance in Rapid Problem Solving [104.79156980475686]
Humans learn compositional and causal abstraction, i.e., knowledge, in response to the structure of naturalistic tasks.
We argue that there should be three levels of generalization in how an agent represents its knowledge: perceptual, conceptual, and algorithmic.
We introduce a benchmark centered around a novel task domain, HALMA, for visual concept development and rapid problem-solving.
arXiv Detail & Related papers (2021-02-22T20:37:01Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list of such principles, focusing on those that mostly concern higher-level and sequential conscious processing.
The aim in clarifying these particular principles is that they could help us build AI systems that benefit from these human abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Revisit Systematic Generalization via Meaningful Learning [15.90288956294373]
Recent studies argue that neural networks appear inherently ineffective at this cognitive capacity, i.e., systematic generalization.
We reassess the compositional skills of sequence-to-sequence models conditioned on the semantic links between new and old concepts.
arXiv Detail & Related papers (2020-03-14T15:27:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.