Architecture of a Cortex Inspired Hierarchical Event Recaller
- URL: http://arxiv.org/abs/2405.02371v1
- Date: Fri, 3 May 2024 09:36:16 GMT
- Title: Architecture of a Cortex Inspired Hierarchical Event Recaller
- Authors: Valentin Puente Varona
- Abstract summary: This paper proposes a new approach to Machine Learning (ML) that focuses on unsupervised continuous context-dependent learning of complex patterns.
A synthetic structure capable of identifying and predicting complex temporal series will be defined and experimentally tested.
As a proof of concept, the proposed system is shown to be able to learn, identify and predict a remarkably complex temporal series such as human speech, with no prior knowledge.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper proposes a new approach to Machine Learning (ML) that focuses on unsupervised continuous context-dependent learning of complex patterns. Although the proposal is partly inspired by some of the current knowledge about the structural and functional properties of the mammalian brain, we do not claim that biological systems work in an analogous way (nor the opposite). Based on some properties of the cerebellar cortex and adjacent structures, a proposal suitable for practical problems is presented. A synthetic structure capable of identifying and predicting complex temporal series will be defined and experimentally tested. The system relies heavily on prediction to help identify and learn patterns based on previously acquired contextual knowledge. As a proof of concept, the proposed system is shown to be able to learn, identify, and predict a remarkably complex temporal series such as human speech, with no prior knowledge. From raw data, without any adaptation in the core algorithm, the system is able to identify certain speech structures from a set of Spanish sentences. Unlike conventional ML, the proposal can learn from a reduced training set. Although the idea can be applied to a constrained problem, such as the detection of unknown vocabulary in speech, it could be used in other applications, such as vision, or (by incorporating the missing biological periphery) fit into other ML techniques. Given the trivial computational primitives used, a potential hardware implementation would be remarkably frugal. Coincidentally, the proposed model not only conforms to a plausible functional framework for biological systems but may also explain many elusive cognitive phenomena.
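To make the prediction-driven learning loop concrete, here is a minimal sketch of an online, context-dependent next-symbol predictor: it predicts the next symbol from recent context, then updates its memory from the raw stream alone. The fixed-order context window, the class name, and the toy stream are illustrative assumptions; the paper's actual structure is hierarchical and considerably richer.

```python
# A minimal sketch of prediction-driven, unsupervised sequence learning in the
# spirit of the paper. All names and the fixed-order context window are
# illustrative assumptions, not the paper's actual algorithm.
from collections import defaultdict, Counter

class ContextPredictor:
    def __init__(self, order=3):
        self.order = order                  # length of the context window
        self.memory = defaultdict(Counter)  # context -> next-symbol counts

    def step(self, context, symbol):
        """Predict the next symbol from context, then learn the observed one."""
        ctx = tuple(context[-self.order:])
        counts = self.memory[ctx]
        prediction = counts.most_common(1)[0][0] if counts else None
        counts[symbol] += 1                 # continuous, unsupervised update
        return prediction

predictor = ContextPredictor(order=3)
stream = list("el perro ladra. el gato duerme. el perro ladra.")
hits, context = 0, []
for sym in stream:
    if predictor.step(context, sym) == sym:
        hits += 1                           # hits rise as contexts recur
    context.append(sym)
print(f"correct predictions: {hits}/{len(stream)}")
```

Prediction failures are where such a system gains information: contexts that keep mispredicting are precisely the novel patterns worth learning.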
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; and (iii) open up novel perspectives. A generic sketch of this ODE view of learning appears below.
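The following is a generic sketch of the gradient-flow reading of "learning over time" that such frameworks generalize: parameters obey the ODE dw/dt = -∇L(w), integrated here with an explicit Euler step so no external solver is required. The quadratic loss and step size are illustrative assumptions, not the paper's Hamiltonian equations.

```python
# Generic gradient-flow sketch: learning as integrating dw/dt = -grad L(w)
# with explicit Euler steps (no external ODE solver). Loss and step size are
# illustrative assumptions, not the paper's Hamiltonian formulation.
import numpy as np

def grad_loss(w, X, y):
    """Gradient of the least-squares loss L(w) = 0.5 * ||Xw - y||^2."""
    return X.T @ (X @ w - y)

rng = np.random.default_rng(0)
X, true_w = rng.normal(size=(50, 3)), np.array([1.0, -2.0, 0.5])
y = X @ true_w
w, dt = np.zeros(3), 0.01             # initial state and Euler step size
for _ in range(2000):
    w -= dt * grad_loss(w, X, y)      # one Euler step of the learning ODE
print(np.round(w, 3))                 # converges toward true_w
```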
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z)
- Fundamental Components of Deep Learning: A category-theoretic approach [0.0]
This thesis develops a novel mathematical foundation for deep learning based on the language of category theory.
We also systematise existing approaches, placing many constructions and concepts under the same umbrella.
arXiv Detail & Related papers (2024-03-13T01:29:40Z)
- Large Language Models for Scientific Synthesis, Inference and Explanation [56.41963802804953]
We show how large language models can perform scientific synthesis, inference, and explanation.
We show that the large language model can augment its "knowledge" by synthesizing from the scientific literature.
This approach has the further advantage that the large language model can explain the machine learning system's predictions.
arXiv Detail & Related papers (2023-10-12T02:17:59Z)
- A Theory of Unsupervised Speech Recognition [60.12287608968879]
Unsupervised speech recognition (ASR-U) is the problem of learning automatic speech recognition systems from unpaired speech-only and text-only corpora.
We propose a general theoretical framework to study the properties of ASR-U systems based on random matrix theory and the theory of neural tangent kernels.
arXiv Detail & Related papers (2023-06-09T08:12:27Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- Multi-Agent Reinforcement Learning with Temporal Logic Specifications [65.79056365594654]
We study the problem of learning to satisfy temporal logic specifications with a group of agents in an unknown environment.
We develop the first multi-agent reinforcement learning technique for temporal logic specifications.
We provide correctness and convergence guarantees for our main algorithm.
arXiv Detail & Related papers (2021-02-01T01:13:03Z)
- Modelling Compositionality and Structure Dependence in Natural Language [0.12183405753834563]
Drawing on linguistics and set theory, a formalisation of these ideas is presented in the first half of this thesis.
We see how cognitive systems that process language need to satisfy certain functional constraints.
Using the advances of word embedding techniques, a model of relational learning is simulated.
arXiv Detail & Related papers (2020-11-22T17:28:50Z)
- Generalizing Outside the Training Set: When Can Neural Networks Learn Identity Effects? [1.2891210250935143]
We show that a class of algorithms, including deep neural networks with standard architecture trained with backpropagation, cannot generalize to novel inputs.
We demonstrate our theory with computational experiments exploring how different input encodings affect the ability of algorithms to generalize to novel inputs; a minimal version of this experiment is sketched below.
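Here is a minimal version of such an identity-effect experiment, under assumed choices of encoding (one-hot) and model (a small scikit-learn MLP): the classifier learns same/different on seen symbols but typically scores near chance on pairs of held-out symbols, since the one-hot dimensions for novel symbols receive no gradient during training.

```python
# A minimal identity-effect sketch: label symbol pairs "same"/"different",
# then test on symbols never seen in training. Encoding and model choices are
# illustrative assumptions; the encoding governs generalization to novel inputs.
import numpy as np
from sklearn.neural_network import MLPClassifier

def one_hot(i, n=26):
    v = np.zeros(n); v[i] = 1.0
    return v

def pairs(symbols):
    X, y = [], []
    for a in symbols:
        for b in symbols:
            X.append(np.concatenate([one_hot(a), one_hot(b)]))
            y.append(int(a == b))
    return np.array(X), np.array(y)

train_syms, novel_syms = range(24), range(24, 26)   # hold out two symbols
X_tr, y_tr = pairs(train_syms)
X_te, y_te = pairs(novel_syms)                      # pairs of unseen symbols
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("seen-symbol acc:", clf.score(X_tr, y_tr))    # high on seen symbols
print("novel-symbol acc:", clf.score(X_te, y_te))   # typically near chance
```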
arXiv Detail & Related papers (2020-05-09T01:08:07Z)
- Probing Linguistic Systematicity [11.690179162556353]
There is accumulating evidence that neural models often generalize non-systematically.
We identify ways in which network architectures can generalize non-systematically, and discuss why such forms of generalization may be unsatisfying.
arXiv Detail & Related papers (2020-05-08T23:31:31Z)
- Explainable AI for Classification using Probabilistic Logic Inference [9.656846523452502]
We present an explainable classification method.
Our method works by first constructing a symbolic Knowledge Base from the training data, and then performing probabilistic inference over that Knowledge Base with linear programming; a minimal sketch of this LP step is given below.
It identifies the decisive features responsible for a classification as explanations and produces results similar to those found by SHAP, a state-of-the-art Shapley-value-based method.
arXiv Detail & Related papers (2020-05-05T11:39:23Z)
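As a hedged illustration of the general technique named here (not the paper's exact pipeline), the sketch below performs Nilsson-style probabilistic logic inference as a pair of linear programs: the variables are probabilities of possible worlds, the knowledge base fixes linear constraints, and the LP bounds the probability of a query. The worlds, rules, and probabilities are hypothetical.

```python
# A minimal sketch of probabilistic logic inference via linear programming
# (Nilsson-style). The knowledge base and query below are hypothetical, meant
# only to illustrate the LP step, not the paper's actual KB construction.
import numpy as np
from scipy.optimize import linprog

# Possible worlds over propositions (A, B): 00, 01, 10, 11.
worlds = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Knowledge base: P(A) = 0.7 and P(A -> B) = P(not A or B) = 0.9.
A_eq = [
    [1, 1, 1, 1],                            # world probabilities sum to 1
    [a for a, b in worlds],                  # worlds where A holds
    [int((not a) or b) for a, b in worlds],  # worlds where A -> B holds
]
b_eq = [1.0, 0.7, 0.9]

# Query: bounds on P(B), the total mass of worlds where B holds.
query = np.array([b for a, b in worlds], dtype=float)
lo = linprog(query, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))   # minimize P(B)
hi = linprog(-query, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))  # maximize P(B)
print(f"P(B) in [{lo.fun:.2f}, {-hi.fun:.2f}]")            # -> [0.60, 0.90]
```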