Meta Neural Coordination
- URL: http://arxiv.org/abs/2305.12109v1
- Date: Sat, 20 May 2023 06:06:44 GMT
- Title: Meta Neural Coordination
- Authors: Yuwei Sun
- Abstract summary: Meta-learning aims to develop algorithms that can learn from other learning algorithms to adapt to new and changing environments.
Uncertainty in the predictions of conventional deep neural networks highlights the partial predictability of the world.
We discuss the potential advancements required to build biologically-inspired machine intelligence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Meta-learning aims to develop algorithms that can learn from other learning
algorithms to adapt to new and changing environments. This requires a model of
how other learning algorithms operate and perform in different contexts, which
is similar to representing and reasoning about mental states in the theory of
mind. Furthermore, the problem of uncertainty in the predictions of
conventional deep neural networks highlights the partial predictability of the
world, requiring the representation of multiple predictions simultaneously.
This is facilitated by coordination among neural modules, where different
modules' beliefs and desires are attributed to others. The neural coordination
among modular and decentralized neural networks is a fundamental prerequisite
for building autonomous, intelligent machines that can interact flexibly and
adaptively. In this work, several pieces of evidence demonstrate a new avenue
for tackling the problems above, termed Meta Neural Coordination. We discuss
the potential advancements required to build biologically-inspired machine
intelligence, drawing from both machine learning and cognitive science
communities.
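To make the coordination idea concrete, here is a minimal, hypothetical sketch (not from the paper): several independent modules each hold their own probabilistic belief about an input, and a coordinator combines those beliefs while down-weighting modules whose predictions are uncertain. The module structure and the entropy-based weighting are illustrative assumptions.

```python
# Minimal sketch, assuming entropy-weighted belief combination; not the
# paper's method. Several modules predict, a coordinator merges beliefs.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class Module:
    """A tiny linear classifier standing in for one neural module."""
    def __init__(self, dim, n_classes):
        self.W = rng.normal(scale=0.1, size=(dim, n_classes))

    def predict(self, x):
        return softmax(x @ self.W)

def coordinate(modules, x):
    """Combine module beliefs, weighting each by its confidence.

    Confidence is taken as exp(-entropy) of each module's prediction;
    this weighting is an illustrative assumption.
    """
    preds = np.stack([m.predict(x) for m in modules])        # (M, C)
    entropy = -(preds * np.log(preds + 1e-12)).sum(axis=-1)  # (M,)
    weights = np.exp(-entropy)
    weights /= weights.sum()
    return weights @ preds                                   # coordinated belief

modules = [Module(dim=8, n_classes=3) for _ in range(4)]
x = rng.normal(size=8)
print(coordinate(modules, x))  # one distribution over 3 classes
```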
Related papers
- Rhythmic sharing: A bio-inspired paradigm for zero-shot adaptation and learning in neural networks [0.0]
We develop a learning paradigm that is based on oscillations in link strengths and associates learning with the coordination of these oscillations.
We find that this paradigm yields rapid adaptation and learning in artificial neural networks.
Our study opens the door for introducing rapid adaptation and learning capabilities into leading AI models.
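A toy illustration of the oscillating-link idea, under loose assumptions: each connection's strength oscillates around a base value, and "learning" adjusts only the per-link oscillation phases, here by plain gradient descent, which the paper does not necessarily use.

```python
# Illustrative toy, not the paper's algorithm: link strengths oscillate
# around fixed base values; only the oscillation phases are learned.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 4, 3
W_base = rng.normal(scale=0.5, size=(n_out, n_in))     # static link strengths
phase = rng.uniform(0, 2 * np.pi, size=(n_out, n_in))  # learnable per-link phases
omega, amp = 1.0, 1.0                                  # oscillation frequency, depth

def forward(x, t):
    # each link's strength oscillates around its base value
    W_eff = W_base * (1.0 + amp * np.sin(omega * t + phase))
    return np.tanh(W_eff @ x)

x = rng.normal(size=n_in)
target = np.array([0.5, -0.5, 0.0])
t0, lr = 2.0, 0.2

for _ in range(1000):
    y = forward(x, t0)
    err = target - y
    dy = err * (1.0 - y ** 2)  # backprop through tanh
    # exact gradient of the squared error w.r.t. each link's phase
    dphase = np.outer(dy, x) * W_base * amp * np.cos(omega * t0 + phase)
    phase += lr * dphase

print("output at t0:", forward(x, t0))  # moves toward the target
```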
arXiv Detail & Related papers (2025-02-12T18:58:34Z)
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning.
We introduce Artificial Kuramoto Oscillatory Neurons, which can be combined with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms.
We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, uncertainty, and reasoning.
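The classical Kuramoto dynamics underlying this line of work can be simulated in a few lines; the sketch below shows phase-coupled oscillators synchronizing ("binding"), not the AKOrN architecture itself.

```python
# Classical Kuramoto model: coupled phase oscillators synchronize when the
# coupling K exceeds a critical value; r near 1 indicates "bound" units.
import numpy as np

rng = np.random.default_rng(2)
N, K, dt, steps = 16, 1.5, 0.05, 400
omega = rng.normal(loc=1.0, scale=0.1, size=N)  # natural frequencies
theta = rng.uniform(0, 2 * np.pi, size=N)       # initial phases

def order_parameter(theta):
    """r in [0, 1]: 1 means fully synchronized oscillators."""
    return np.abs(np.exp(1j * theta).mean())

for _ in range(steps):
    # entry [i, j] is theta_j - theta_i; mean over j gives the drive on i
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta += dt * (omega + K * coupling)

print(f"order parameter r = {order_parameter(theta):.3f}")  # near 1 => synchrony
```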
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration with computational complexity.
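As one minimal instance of neuronal heterogeneity, the sketch below simulates leaky integrate-and-fire neurons whose membrane time constants differ, yielding different firing rates under the same drive; all parameter values are illustrative, not the paper's.

```python
# LIF neurons with heterogeneous membrane time constants: under the same
# input current, small-tau neurons charge faster and fire at higher rates.
import numpy as np

rng = np.random.default_rng(3)
N, T, dt = 5, 500, 1.0             # neurons, timesteps, ms per step
tau = np.linspace(5.0, 50.0, N)    # heterogeneous membrane time constants (ms)
v_th, v_reset = 1.0, 0.0
v = np.zeros(N)
spikes = np.zeros((T, N), dtype=bool)

for t in range(T):
    I = 1.2 + 0.05 * rng.normal(size=N)  # shared noisy suprathreshold drive
    v += dt / tau * (-v + I)             # leaky integration, per-neuron tau
    fired = v >= v_th
    spikes[t] = fired
    v[fired] = v_reset

rates = spikes.mean(axis=0) * 1000.0 / dt  # firing rate in Hz
for i in range(N):
    print(f"tau = {tau[i]:4.1f} ms -> rate = {rates[i]:5.1f} Hz")
```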
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Reasoning Algorithmically in Graph Neural Networks [1.8130068086063336]
We aim to integrate the structured, rule-based reasoning of algorithms with the adaptive learning capabilities of neural networks.
This dissertation provides theoretical and practical contributions to this area of research.
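A standard example of this alignment from the algorithmic-reasoning literature generally (not necessarily this dissertation's contribution): message passing with additive messages and min-aggregation reproduces a Bellman-Ford relaxation step.

```python
# One round of min-aggregation "message passing" equals one Bellman-Ford
# relaxation; a GNN with a learned message function and min aggregator can
# therefore align closely with this algorithm.
import numpy as np

INF = np.inf
# weighted directed graph as an adjacency matrix (INF = no edge)
W = np.array([[0,   4,   INF, 1],
              [INF, 0,   2,   INF],
              [INF, INF, 0,   INF],
              [INF, 1,   5,   0]], dtype=float)
n = W.shape[0]

dist = np.full(n, INF)
dist[0] = 0.0  # source node

for _ in range(n - 1):  # n-1 rounds of message passing
    # message from u to v: dist[u] + w(u, v); aggregate incoming with min
    messages = dist[:, None] + W
    dist = np.minimum(dist, messages.min(axis=0))

print(dist)  # shortest distances from node 0: [0. 2. 4. 1.]
```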
arXiv Detail & Related papers (2024-02-21T12:16:51Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method enables continual learning in spiking neural networks with nearly zero forgetting.
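A hedged sketch of the mechanism as summarized above, using Oja's subspace rule as the Hebbian extractor: lateral-style weights learn the principal subspace of old-task activity, and new gradients are then projected orthogonal to it. The paper's exact neuronal operations may differ.

```python
# Sketch: Hebbian (Oja) extraction of the old-task activity subspace,
# followed by orthogonal projection of a new-task gradient.
import numpy as np

rng = np.random.default_rng(4)
d, k = 10, 3  # activity dimension, subspace size

# old-task activity concentrated in a random 3-dim subspace
basis = np.linalg.qr(rng.normal(size=(d, k)))[0]
old_acts = rng.normal(size=(5000, k)) @ basis.T + 0.01 * rng.normal(size=(5000, d))

# Oja's subspace rule (a stabilized Hebbian rule) on lateral weights L
L = rng.normal(scale=0.1, size=(d, k))
lr = 0.01
for x in old_acts:
    y = L.T @ x
    L += lr * (np.outer(x, y) - L @ np.outer(y, y))  # Hebbian growth + decay

# project a new-task gradient orthogonally to the learned subspace
P = np.eye(d) - L @ np.linalg.pinv(L)  # orthogonal projector onto span(L)^perp
g = rng.normal(size=d)
g_proj = P @ g
print("overlap with old subspace before:", np.linalg.norm(basis.T @ g))
print("overlap with old subspace after: ", np.linalg.norm(basis.T @ g_proj))
```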
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
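As one concrete member of those brain-inspired families, the sketch below implements feedback alignment, which replaces backpropagation's transposed weights with a fixed random feedback matrix to avoid weight transport; it illustrates the survey's subject matter, not its contribution.

```python
# Feedback alignment: errors are routed backward through a fixed random
# matrix B instead of W2.T, avoiding biologically implausible weight
# transport while still reducing the loss.
import numpy as np

rng = np.random.default_rng(5)
n_in, n_hid, n_out = 8, 16, 2
W1 = rng.normal(scale=0.3, size=(n_hid, n_in))
W2 = rng.normal(scale=0.3, size=(n_out, n_hid))
B = rng.normal(scale=0.3, size=(n_hid, n_out))  # fixed random feedback

X = rng.normal(size=(200, n_in))
T = X @ rng.normal(size=(n_in, n_out))          # random linear targets

lr = 0.01
for _ in range(300):
    H = np.tanh(X @ W1.T)   # hidden activity (200, n_hid)
    Y = H @ W2.T            # output (200, n_out)
    E = Y - T
    # backprop would use E @ W2 here; feedback alignment uses B instead
    dH = (E @ B.T) * (1 - H ** 2)
    W2 -= lr * E.T @ H / len(X)
    W1 -= lr * dH.T @ X / len(X)

print("final MSE:", float((E ** 2).mean()))
```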
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
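The sketch below is loosely in the spirit of such layer-local, contrastive plasticity (closer to forward-forward-style goodness updates than to the paper's exact CSDP rule, which it does not reproduce): a layer strengthens its response to structured inputs and weakens it for unstructured ones, with no backpropagated error.

```python
# Loose sketch of layer-local contrastive plasticity: Hebbian toward
# structured ("positive") inputs, anti-Hebbian away from unstructured
# negatives; no error signal arrives from any other layer.
import numpy as np

rng = np.random.default_rng(6)
d, h = 20, 30
basis = np.linalg.qr(rng.normal(size=(d, 3)))[0]  # structured inputs live here
W = rng.normal(scale=0.1, size=(h, d))

def unit(x):
    return x / np.linalg.norm(x)

def goodness(x):
    a = np.maximum(0.0, W @ x)       # rectified layer activity
    return a, float((a ** 2).sum())  # "goodness" = summed squared activity

lr = 0.02
for _ in range(2000):
    x_pos = unit(basis @ rng.normal(size=3))  # structured sample
    x_neg = unit(rng.normal(size=d))          # unstructured negative sample
    a_pos, _ = goodness(x_pos)
    a_neg, _ = goodness(x_neg)
    W += lr * (np.outer(a_pos, x_pos) - np.outer(a_neg, x_neg))
    # cap row norms so the Hebbian growth stays bounded
    W /= np.maximum(1.0, np.linalg.norm(W, axis=1, keepdims=True))

_, g_pos = goodness(unit(basis @ rng.normal(size=3)))
_, g_neg = goodness(unit(rng.normal(size=d)))
print(f"goodness: structured = {g_pos:.3f}, unstructured = {g_neg:.3f}")
```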
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Meta Learning in Decentralized Neural Networks: Towards More General AI [0.0]
We aim to provide a fundamental understanding of learning to learn in the context of Decentralized Neural Networks (Decentralized NNs).
We will present three different approaches to building such a decentralized learning system.
arXiv Detail & Related papers (2023-02-02T11:15:07Z)
- A brain basis of dynamical intelligence for AI and computational neuroscience [0.0]
More brain-like capacities may demand new theories, models, and methods for designing artificial learning systems.
This article was inspired by our symposium on dynamical neuroscience and machine learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting.
arXiv Detail & Related papers (2021-05-15T19:49:32Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
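A minimal predictive-coding sketch of that idea, simplified to one linear generative layer: the latent state settles to explain the observation, and the weights adapt from the local prediction error. The paper's model is more elaborate.

```python
# Predictive coding in miniature: inference settles latents z to minimize
# the prediction error x - W z; weights then learn from that same error.
import numpy as np

rng = np.random.default_rng(7)
d_obs, d_lat = 8, 3
W = rng.normal(scale=0.3, size=(d_obs, d_lat))  # top-down prediction weights

def infer(x, n_steps=50, lr_z=0.1):
    """Settle the latent state z to minimize ||x - W z||^2."""
    z = np.zeros(d_lat)
    for _ in range(n_steps):
        err = x - W @ z          # prediction error at the observed layer
        z += lr_z * (W.T @ err)  # latents move to explain the error
    return z, err

# learn W from data generated by a ground-truth linear model
W_true = rng.normal(size=(d_obs, d_lat))
lr_w = 0.01
for _ in range(2000):
    x = W_true @ rng.normal(size=d_lat)
    z, err = infer(x)
    W += lr_w * np.outer(err, z)  # local, Hebbian-like weight update

x_test = W_true @ rng.normal(size=d_lat)
_, err = infer(x_test)
print("test reconstruction error:", float((err ** 2).mean()))
```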
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Brain-inspired global-local learning incorporated with neuromorphic computing [35.70151531581922]
We report a neuromorphic hybrid learning model by introducing a brain-inspired meta-learning paradigm and a differentiable spiking model incorporating neuronal dynamics and synaptic plasticity.
We demonstrate the advantages of this model in multiple different tasks, including few-shot learning, continual learning, and fault-tolerance learning in neuromorphic vision sensors.
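A toy rendering of the global-local split, with spiking dynamics omitted and all specifics assumed: synapses follow a local, error-modulated Hebbian rule, while a global loop meta-tunes that rule's plasticity coefficient.

```python
# Toy global-local hybrid: a local plasticity rule updates the synapses,
# and a global meta-loop tunes the rule's learning-rate coefficient.
import numpy as np

rng = np.random.default_rng(8)
d_in, d_out = 6, 2
A = rng.normal(size=(d_out, d_in))  # ground-truth task mapping

def run_task(eta, n_steps=200):
    """Local loop: synapses change by an error-modulated Hebbian rule."""
    W = np.zeros((d_out, d_in))
    errs = []
    for _ in range(n_steps):
        x = rng.normal(size=d_in)
        err = A @ x - W @ x          # locally available error signal
        W += eta * np.outer(err, x)  # local plasticity with coefficient eta
        errs.append(float((err ** 2).mean()))
    return float(np.mean(errs[-20:]))  # late-trial task error

# Global loop: a scalar task-error comparison meta-tunes the local
# plasticity coefficient, standing in for the meta-learning paradigm.
eta = 0.01
for _ in range(12):
    if run_task(eta * 1.3) < run_task(eta / 1.3):
        eta *= 1.3
    else:
        eta /= 1.3

print(f"meta-learned plasticity coefficient: {eta:.4f}")
```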
arXiv Detail & Related papers (2020-06-05T04:24:19Z)