Context sequence theory: a common explanation for multiple types of
learning
- URL: http://arxiv.org/abs/2208.04707v1
- Date: Sun, 17 Jul 2022 12:51:52 GMT
- Title: Context sequence theory: a common explanation for multiple types of
learning
- Authors: Yu Mingcan and Wang Junying
- Abstract summary: We propose the context sequence theory to give a common explanation for multiple types of learning in mammals.
We hope that it can provide new insight into the construction of machine learning models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although principles of neuroscience like reinforcement learning, visual
perception and attention have been applied in machine learning models, there is
a huge gap between machine learning and mammalian learning. Based on the
advances in neuroscience, we propose the context sequence theory to give a
common explanation for multiple types of learning in mammals, and hope that it can
provide new insight into the construction of machine learning models.
Related papers
- Machine Learning: a Lecture Note [51.31735291774885]
This lecture note is intended to prepare early-year master's and PhD students in data science or a related discipline with foundational ideas in machine learning. It starts with basic ideas in modern machine learning, with classification as the main target task. Building on these basic ideas, the lecture note explores in depth the probabilistic approach to unsupervised learning.
arXiv Detail & Related papers (2025-05-06T16:03:41Z)
- Predictive Learning in Energy-based Models with Attractor Structures [5.542697199599134]
We introduce a framework that employs an energy-based model (EBM) to capture the nuanced process of predicting an observation after an action within the neural system. In experimental evaluations, our model demonstrates efficacy across diverse scenarios.
arXiv Detail & Related papers (2025-01-23T11:04:25Z)
- A spring-block theory of feature learning in deep neural networks [11.396919965037636]
Feature-learning deep nets progressively collapse data to a regular low-dimensional geometry.
We show how this phenomenon emerges from collective action of nonlinearity, noise, learning rate, and other choices that shape the dynamics.
We propose a macroscopic mechanical theory that reproduces the diagram, explaining why some DNNs are lazy and some active, and linking feature learning across layers to generalization.
arXiv Detail & Related papers (2024-07-28T00:07:20Z)
- Curriculum effects and compositionality emerge with in-context learning in neural networks [15.744573869783972]
We show that networks capable of "in-context learning" (ICL) can reproduce human-like learning and compositional behavior on rule-governed tasks.
Our work shows how emergent ICL can equip neural networks with fundamentally different learning properties than those traditionally attributed to them.
arXiv Detail & Related papers (2024-02-13T18:55:27Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- An introduction to reinforcement learning for neuroscience [5.0401589279256065]
Reinforcement learning has a rich history in neuroscience, beginning with early work on dopamine as a reward prediction error signal for temporal difference learning.
Recent work suggests that dopamine could implement a form of 'distributional reinforcement learning' popularized in deep learning.
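The reward-prediction-error idea summarized above can be sketched with a minimal TD(0) value update; this is the standard textbook formulation of the temporal-difference error, not code from the cited paper, and the state names and parameters are illustrative assumptions.

```python
# Minimal TD(0) sketch: the TD error delta is the quantity that the
# reward-prediction-error account attributes to dopamine signaling.
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """Apply one TD(0) update to the state-value table V (a dict).

    delta = r + gamma * V(s') - V(s)   # prediction error
    V(s) <- V(s) + alpha * delta
    Returns the TD error delta.
    """
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * delta
    return delta

# Toy usage: a two-step chain where reward arrives only at "goal".
# With repeated experience, V("goal") approaches 1.0 and V("start")
# approaches gamma * V("goal") = 0.9, while the TD error shrinks.
V = {}
for _ in range(100):
    td0_update(V, "start", 0.0, "goal")
    td0_update(V, "goal", 1.0, "terminal")
```

In the distributional variant mentioned in the abstract, a population of such learners with different update asymmetries tracks a distribution over returns rather than a single expected value.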
arXiv Detail & Related papers (2023-11-13T13:10:52Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions [68.6358773622615]
This paper provides an overview of the computational and theoretical foundations of multimodal machine learning.
We propose a taxonomy of six core technical challenges: representation, alignment, reasoning, generation, transference, and quantification.
Recent technical achievements will be presented through the lens of this taxonomy, allowing researchers to understand the similarities and differences across new approaches.
arXiv Detail & Related papers (2022-09-07T19:21:19Z)
- Cognitively Inspired Learning of Incremental Drifting Concepts [31.3178953771424]
Inspired by nervous system learning mechanisms, we develop a computational model that enables a deep neural network to learn new concepts.
Our model can generate pseudo-data points for experience replay and accumulate new experiences to past learned experiences without causing cross-task interference.
arXiv Detail & Related papers (2021-10-09T23:26:29Z)
- Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z)
- The Autodidactic Universe [0.8795040582681388]
We present an approach to cosmology in which the Universe learns its own physical laws.
We discover maps that put each of these matrix models in correspondence with both a gauge/gravity theory and a mathematical model of a learning machine.
We discuss in detail what it means to say that learning takes place in autodidactic systems, where there is no supervision.
arXiv Detail & Related papers (2021-03-29T02:25:02Z)
- Measuring and modeling the motor system with machine learning [117.44028458220427]
The utility of machine learning for understanding the motor system promises a revolution in how data are collected, measured, and analyzed.
We discuss the growing use of machine learning: from pose estimation, kinematic analyses, dimensionality reduction, and closed-loop feedback, to its use in understanding neural correlates and untangling sensorimotor systems.
arXiv Detail & Related papers (2021-03-22T12:42:16Z)
- Learning Compositional Rules via Neural Program Synthesis [67.62112086708859]
We present a neuro-symbolic model which learns entire rule systems from a small set of examples.
Instead of directly predicting outputs from inputs, we train our model to induce the explicit system of rules governing a set of previously seen examples.
arXiv Detail & Related papers (2020-03-12T01:06:48Z)
- Machine Education: Designing semantically ordered and ontologically guided modular neural networks [5.018156030818882]
We first discuss selected attempts to date on machine teaching and education.
We then bring theories and methodologies together from human education to structure and mathematically define the core problems in lesson design for machine education.
arXiv Detail & Related papers (2020-02-07T09:43:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.