Context sequence theory: a common explanation for multiple types of learning
- URL: http://arxiv.org/abs/2208.04707v1
- Date: Sun, 17 Jul 2022 12:51:52 GMT
- Title: Context sequence theory: a common explanation for multiple types of learning
- Authors: Yu Mingcan and Wang Junying
- Abstract summary: We propose the context sequence theory to give a common explanation for multiple types of learning in mammals.
We hope this can provide new insight into the construction of machine learning models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although principles of neuroscience such as reinforcement learning, visual
perception, and attention have been applied in machine learning models, there is
a huge gap between machine learning and mammalian learning. Based on advances in
neuroscience, we propose the context sequence theory to give a common explanation
for multiple types of learning in mammals, and we hope it can provide new insight
into the construction of machine learning models.
Related papers
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- An introduction to reinforcement learning for neuroscience [5.0401589279256065]
Reinforcement learning has a rich history in neuroscience, beginning with early work on dopamine as a reward prediction error signal for temporal difference learning.
Recent work suggests that dopamine could implement a form of 'distributional reinforcement learning' popularized in deep learning.
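As a minimal illustration of the reward prediction error mentioned in this summary (a sketch on a hypothetical toy problem, not code from the paper), TD(0) updates a value estimate by the discrepancy between predicted and received reward:

```python
import numpy as np

# Toy 3-state chain (hypothetical example): states 0 -> 1 -> 2,
# with reward 1.0 received on reaching terminal state 2.
gamma = 0.9      # discount factor
alpha = 0.1      # learning rate
V = np.zeros(3)  # value estimates for states 0, 1, 2

for _ in range(500):
    for s, s_next, r in [(0, 1, 0.0), (1, 2, 1.0)]:
        # Reward prediction error: the quantity dopamine is
        # hypothesized to signal in temporal difference learning.
        delta = r + gamma * V[s_next] - V[s]
        V[s] += alpha * delta

# V[1] converges toward 1.0 and V[0] toward gamma * 1.0 = 0.9
```

Distributional variants replace the scalar estimate `V[s]` with a learned distribution over returns, but the prediction-error structure is the same.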
arXiv Detail & Related papers (2023-11-13T13:10:52Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions [68.6358773622615]
This paper provides an overview of the computational and theoretical foundations of multimodal machine learning.
We propose a taxonomy of 6 core technical challenges: representation, alignment, reasoning, generation, transference, and quantification.
Recent technical achievements will be presented through the lens of this taxonomy, allowing researchers to understand the similarities and differences across new approaches.
arXiv Detail & Related papers (2022-09-07T19:21:19Z)
- On the Role of Neural Collapse in Transfer Learning [29.972063833424215]
Recent results show that representations learned by a single classifier over many classes are competitive on few-shot learning problems.
We show that neural collapse generalizes to new samples from the training classes, and -- more importantly -- to new classes as well.
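When representations collapse toward their class means, a nearest-class-mean rule suffices for classification. The sketch below illustrates that regime on hypothetical synthetic features (an assumption for illustration, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical penultimate-layer features for two classes that cluster
# tightly around their class means, as neural collapse describes.
mean_a = np.array([1.0, 0.0])
mean_b = np.array([0.0, 1.0])
train_a = mean_a + 0.05 * rng.standard_normal((20, 2))
train_b = mean_b + 0.05 * rng.standard_normal((20, 2))

# Nearest-class-mean classifier: store one centroid per class.
centroids = np.stack([train_a.mean(axis=0), train_b.mean(axis=0)])

def classify(x):
    """Return the index of the closest class centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
```

Transfer to a new class then amounts to estimating one new centroid from a few samples, which is one intuition for why collapsed representations do well on few-shot problems.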
arXiv Detail & Related papers (2021-12-30T16:36:26Z)
- Cognitively Inspired Learning of Incremental Drifting Concepts [31.3178953771424]
Inspired by nervous system learning mechanisms, we develop a computational model that enables a deep neural network to learn new concepts.
Our model can generate pseudo-data points for experience replay and accumulate new experiences to past learned experiences without causing cross-task interference.
arXiv Detail & Related papers (2021-10-09T23:26:29Z)
- Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z)
- The Autodidactic Universe [0.8795040582681388]
We present an approach to cosmology in which the Universe learns its own physical laws.
We discover maps that put each of these matrix models in correspondence with both a gauge/gravity theory and a mathematical model of a learning machine.
We discuss in detail what it means to say that learning takes place in autodidactic systems, where there is no supervision.
arXiv Detail & Related papers (2021-03-29T02:25:02Z)
- Measuring and modeling the motor system with machine learning [117.44028458220427]
The utility of machine learning in understanding the motor system promises a revolution in how data are collected, measured, and analyzed.
We discuss the growing use of machine learning: from pose estimation, kinematic analyses, dimensionality reduction, and closed-loop feedback, to its use in understanding neural correlates and untangling sensorimotor systems.
arXiv Detail & Related papers (2021-03-22T12:42:16Z)
- Learning Compositional Rules via Neural Program Synthesis [67.62112086708859]
We present a neuro-symbolic model which learns entire rule systems from a small set of examples.
Instead of directly predicting outputs from inputs, we train our model to induce the explicit system of rules governing a set of previously seen examples.
arXiv Detail & Related papers (2020-03-12T01:06:48Z)
- Machine Education: Designing semantically ordered and ontologically guided modular neural networks [5.018156030818882]
We first discuss selected attempts to date at machine teaching and education.
We then bring theories and methodologies together from human education to structure and mathematically define the core problems in lesson design for machine education.
arXiv Detail & Related papers (2020-02-07T09:43:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.