On Physical Origins of Learning
- URL: http://arxiv.org/abs/2310.02375v1
- Date: Thu, 27 Jul 2023 19:45:19 GMT
- Title: On Physical Origins of Learning
- Authors: Alex Ushveridze
- Abstract summary: We propose that learning may have a non-biological and non-evolutionary origin.
It turns out that key properties of learning can be observed, explained, and accurately reproduced within simple physical models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The quest to comprehend the origins of intelligence raises intriguing
questions about the evolution of learning abilities in natural systems. Why do
living organisms possess an inherent drive to acquire knowledge of the unknown?
Is this motivation solely explicable through natural selection, favoring
systems capable of learning due to their increased chances of survival? Or do
there exist additional, more rapid mechanisms that offer immediate rewards to
systems entering the "learning mode" in the "right ways"? This article explores
the latter possibility and endeavors to unravel the possible nature of these
ways. We propose that learning may have a non-biological and non-evolutionary
origin. It turns out that key properties of learning can be observed,
explained, and accurately reproduced within simple physical models that
describe energy accumulation mechanisms in open resonant-type systems with
dissipation.
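The paper's specific model is not reproduced in this abstract, but the simplest example of an "open resonant-type system with dissipation" is a driven, damped harmonic oscillator, whose time-averaged stored energy is sharply peaked when the drive frequency matches the system's natural frequency. The sketch below uses the standard closed-form steady-state solution of that textbook system; all parameter names and values are illustrative, not taken from the paper.

```python
import math

def steady_state_energy(omega_drive, omega0=1.0, gamma=0.05, force=1.0):
    """Time-averaged stored energy of a driven, damped harmonic oscillator
        x'' + 2*gamma*x' + omega0**2 * x = force * cos(omega_drive * t)
    (unit mass), using the standard steady-state solution."""
    # Steady-state oscillation amplitude of the driven oscillator.
    amp = force / math.sqrt((omega0**2 - omega_drive**2)**2
                            + (2 * gamma * omega_drive)**2)
    # Average kinetic energy is amp^2*omega_drive^2/4, average potential
    # energy is amp^2*omega0^2/4; their sum is the stored energy.
    return 0.25 * amp**2 * (omega_drive**2 + omega0**2)

# Sweep the drive frequency: energy accumulation peaks at resonance.
energies = {w / 10: steady_state_energy(w / 10) for w in range(5, 16)}
best = max(energies, key=energies.get)
print(best)  # → 1.0, the natural frequency omega0
```

In this toy picture, a system "tuned" to its environment's driving frequency accumulates far more energy than a detuned one, which is the kind of immediate physical reward the abstract alludes to.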
Related papers
- Automated Explanation Selection for Scientific Discovery [0.0]
We propose a cycle of scientific discovery that combines machine learning with automated reasoning for the generation and the selection of explanations.
We present a taxonomy of explanation selection problems that draws on insights from sociology and cognitive science.
arXiv Detail & Related papers (2024-07-24T17:41:32Z)
- Open-world Machine Learning: A Review and New Outlooks [83.6401132743407]
This paper aims to provide a comprehensive introduction to the emerging open-world machine learning paradigm.
It aims to help researchers build more powerful AI systems in their respective fields, and to promote the development of artificial general intelligence.
arXiv Detail & Related papers (2024-03-04T06:25:26Z)
- Nature-Inspired Local Propagation [68.63385571967267]
Natural learning processes rely on mechanisms where data representation and learning are intertwined in such a way as to respect locality.
We show that the algorithmic interpretation of the derived "laws of learning", which takes the structure of Hamiltonian equations, reduces to Backpropagation when the speed of propagation goes to infinity.
This opens the door to machine learning based on full on-line information, obtained by replacing Backpropagation with the proposed local algorithm.
arXiv Detail & Related papers (2024-02-04T21:43:37Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture of this kind that instantiates the Common Model of Cognition.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- What are the mechanisms underlying metacognitive learning? [5.787117733071415]
We postulate that people learn this ability from trial and error (metacognitive reinforcement learning).
Here, we systematize models of the underlying learning mechanisms and enhance them with more sophisticated additional mechanisms.
Our results suggest that a gradient ascent through the space of cognitive strategies can explain most of the observed qualitative phenomena.
arXiv Detail & Related papers (2023-02-09T18:49:10Z)
- Programming molecular systems to emulate a learning spiking neuron [1.2707050104493216]
Hebbian theory seeks to explain how the neurons in the brain adapt to stimuli, to enable learning.
This paper explores how molecular systems can be designed to show such proto-intelligent behaviours.
We propose the first chemical reaction network that can exhibit autonomous Hebbian learning across arbitrarily many input channels.
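The paper's construction is a chemical reaction network, which is not reproduced here; as a plain-code analogy, the Hebbian rule such a network implements ("neurons that fire together, wire together") can be sketched as follows, with all names and values purely illustrative.

```python
def hebbian_step(weights, pre, post, lr=0.1):
    """One Hebbian update: each weight grows in proportion to the
    correlation of its pre-synaptic input and the post-synaptic output."""
    return [w + lr * x * post for w, x in zip(weights, pre)]

# Two input channels; channel 0 co-activates with the output, channel 1 does not.
w = [0.0, 0.0]
for pre, post in [([1, 0], 1), ([1, 0], 1), ([0, 1], 0)]:
    w = hebbian_step(w, pre, post)
print(w)  # → [0.2, 0.0]: channel 0 strengthened, channel 1 unchanged
```

The point of the paper is that this per-channel, correlation-driven update can be carried out autonomously by molecular species rather than by code.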
arXiv Detail & Related papers (2022-05-09T09:21:40Z)
- Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning [79.4957965474334]
A key goal of unsupervised representation learning is "inverting" a data generating process to recover its latent properties.
This paper asks, "Can we instead identify latent properties by leveraging knowledge of the mechanisms that govern their evolution?"
We provide a complete characterization of the sources of non-identifiability as we vary knowledge about a set of possible mechanisms.
arXiv Detail & Related papers (2021-10-29T14:04:08Z)
- Evolutionary Self-Replication as a Mechanism for Producing Artificial Intelligence [0.0]
Self-replication is explored as a mechanism for the emergence of intelligent behavior in modern learning environments.
Atari and robotic learning environments are re-defined in terms of natural selection.
arXiv Detail & Related papers (2021-09-16T15:40:20Z)
- Applying Deutsch's concept of good explanations to artificial intelligence and neuroscience -- an initial exploration [0.0]
We investigate Deutsch's hard-to-vary principle and how it relates to more formalized principles in deep learning.
We examine what role hard-to-vary explanations play in intelligence by looking at the human brain.
arXiv Detail & Related papers (2020-12-16T23:23:22Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.