Building artificial neural circuits for domain-general cognition: a
primer on brain-inspired systems-level architecture
- URL: http://arxiv.org/abs/2303.13651v1
- Date: Tue, 21 Mar 2023 18:36:17 GMT
- Title: Building artificial neural circuits for domain-general cognition: a
primer on brain-inspired systems-level architecture
- Authors: Jascha Achterberg, Danyal Akarca, Moataz Assem, Moritz Heimbach,
Duncan E. Astle, John Duncan
- Abstract summary: We provide an overview of the hallmarks endowing biological neural networks with the functionality needed for flexible cognition.
As machine learning models become more complex, these principles may provide valuable directions in an otherwise vast space of possible architectures.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is a concerted effort to build domain-general artificial intelligence
in the form of universal neural network models with sufficient computational
flexibility to solve a wide variety of cognitive tasks but without requiring
fine-tuning on individual problem spaces and domains. To do this, models need
appropriate priors and inductive biases, such that trained models can
generalise to out-of-distribution examples and new problem sets. Here we
provide an overview of the hallmarks endowing biological neural networks with
the functionality needed for flexible cognition, in order to establish which
features might also be important to achieve similar functionality in artificial
systems. We specifically discuss the role of system-level distribution of
network communication and recurrence, in addition to the role of short-term
topological changes for efficient local computation. As machine learning models
become more complex, these principles may provide valuable directions in an
otherwise vast space of possible architectures. In addition, testing these
inductive biases within artificial systems may help us to understand the
biological principles underlying domain-general cognition.
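A minimal illustrative sketch (not code from the paper) of how one such systems-level prior might be expressed in practice: a recurrent cell whose recurrent weight matrix is masked into densely connected modules joined by a few sparse inter-module links, a crude stand-in for the distributed-but-modular communication structure the abstract describes. The mask construction and sizes below are assumptions for illustration.

```python
import torch
import torch.nn as nn

def modular_mask(n_units: int, n_modules: int, p_between: float = 0.05) -> torch.Tensor:
    """Binary mask: dense within-module blocks, sparse between-module links."""
    mask = torch.zeros(n_units, n_units)
    size = n_units // n_modules
    for m in range(n_modules):
        lo, hi = m * size, (m + 1) * size
        mask[lo:hi, lo:hi] = 1.0                       # dense within-module
    between = (torch.rand(n_units, n_units) < p_between).float()
    return torch.clamp(mask + between, max=1.0)        # sparse between-module

class ModularRNNCell(nn.Module):
    def __init__(self, n_in: int, n_units: int, n_modules: int):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_units)
        self.w_rec = nn.Parameter(torch.randn(n_units, n_units) * 0.1)
        self.register_buffer("mask", modular_mask(n_units, n_modules))

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # The mask is a fixed structural prior; only masked-in weights carry signal.
        return torch.tanh(self.w_in(x) + h @ (self.w_rec * self.mask).T)

cell = ModularRNNCell(n_in=10, n_units=64, n_modules=4)
h = torch.zeros(1, 64)
h = cell(torch.randn(1, 10), h)
```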
Related papers
- Spatial embedding promotes a specific form of modularity with low entropy and heterogeneous spectral dynamics [0.0]
Spatially embedded recurrent neural networks provide a promising avenue to study how modelled constraints shape the combined structural and functional organisation of networks over learning.
We show that it is possible to study these restrictions through entropic measures of the neural weights and eigenspectrum, across both rate and spiking neural networks.
This work deepens our understanding of constrained learning in neural networks across coding schemes and tasks, where structural and functional objectives must be satisfied in tandem.
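As a toy illustration of such entropic measures (an assumed instantiation, not the authors' code), one can treat the normalised eigenvalue magnitudes of a recurrent weight matrix as a probability distribution and take its Shannon entropy:

```python
import numpy as np

def spectral_entropy(W: np.ndarray, eps: float = 1e-12) -> float:
    """Shannon entropy of the normalised eigenvalue-magnitude spectrum of W."""
    mags = np.abs(np.linalg.eigvals(W))
    p = mags / (mags.sum() + eps)            # normalise to a distribution
    return float(-(p * np.log(p + eps)).sum())

rng = np.random.default_rng(0)
W_dense = rng.normal(size=(100, 100)) / 10.0
W_sparse = W_dense * (rng.random((100, 100)) < 0.1)   # sparser, more structured
print(spectral_entropy(W_dense), spectral_entropy(W_sparse))
```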
arXiv Detail & Related papers (2024-09-26T10:00:05Z)
- Reasoning Algorithmically in Graph Neural Networks [1.8130068086063336]
We aim to integrate the structured and rule-based reasoning of algorithms with the adaptive learning capabilities of neural networks.
This dissertation provides theoretical and practical contributions to this area of research.
arXiv Detail & Related papers (2024-02-21T12:16:51Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
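The following is a loose, generic sketch of a local contrastive update for a single spiking layer, written to convey the flavour of such schemes rather than the paper's actual plasticity rule: weights are nudged so that a simple measure of spike activity rises for real inputs and falls for corrupted ones, with no backpropagated error signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def spikes(x, W, thresh=1.0):
    """Binary spikes from a single integrate-and-fire-style step."""
    return (x @ W > thresh).astype(float)

def local_contrastive_step(W, x_pos, x_neg, lr=0.01):
    s_pos, s_neg = spikes(x_pos, W), spikes(x_neg, W)
    # Hebbian-style push: strengthen weights active for the positive input,
    # weaken those active for the negative (corrupted) input.
    W += lr * (x_pos.T @ s_pos - x_neg.T @ s_neg) / len(x_pos)
    return W

W = rng.normal(scale=0.5, size=(20, 50))
x_pos = rng.random((32, 20))                              # "real" minibatch
x_neg = rng.permutation(x_pos.flatten()).reshape(32, 20)  # corrupted negatives
W = local_contrastive_step(W, x_pos, x_neg)
```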
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Gaussian Process Surrogate Models for Neural Networks [6.8304779077042515]
In science and engineering, modeling is a methodology used to understand complex systems whose internal processes are opaque.
We construct a class of surrogate models for neural networks using Gaussian processes.
We demonstrate our approach captures existing phenomena related to the spectral bias of neural networks, and then show that our surrogate models can be used to solve practical problems.
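An illustrative sketch of the surrogate idea (an assumed empirical setup, not the paper's construction): fit a Gaussian process to the input-output behaviour of a trained network, then query the smooth, analysable surrogate in its place.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)

# A small network standing in for the opaque system under study.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X, y)

# Surrogate: GP fitted to the *network's* predictions, not the raw data.
X_probe = np.linspace(-3, 3, 50).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
gp.fit(X_probe, net.predict(X_probe))

mean, std = gp.predict(X_probe, return_std=True)  # smooth stand-in for the NN
```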
arXiv Detail & Related papers (2022-08-11T20:17:02Z)
- An Artificial Neural Network Functionalized by Evolution [2.0625936401496237]
We propose a hybrid model which combines the tensor calculus of feed-forward neural networks with Pseudo-Darwinian mechanisms.
This allows for finding topologies that are well adapted to the elaboration of strategies, control problems, or pattern recognition tasks.
In particular, the model can provide adapted topologies at early evolutionary stages and 'structural convergence', which can find applications in robotics, big data, and artificial life.
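A toy sketch of the pseudo-Darwinian idea, under assumptions of our own rather than the paper's model: mutate the hidden width of a small feed-forward map and keep whichever topology best fits a target function, so that evolution acts on structure while ordinary tensor calculus does the forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(128, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])

def make_net(width):
    return {"W1": rng.normal(size=(2, width)) * 0.5,
            "W2": rng.normal(size=(width, 1)) * 0.5}

def forward(net, X):
    return np.tanh(X @ net["W1"]) @ net["W2"]

def fitness(net):
    return -np.mean((forward(net, X).ravel() - y) ** 2)

best = make_net(4)
for generation in range(50):
    # Mutate topology: resample a network with a perturbed hidden width.
    child = make_net(max(2, best["W1"].shape[1] + rng.integers(-2, 3)))
    if fitness(child) > fitness(best):
        best = child
print("evolved hidden width:", best["W1"].shape[1])
```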
arXiv Detail & Related papers (2022-05-16T14:49:58Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Towards fuzzification of adaptation rules in self-adaptive architectures [2.730650695194413]
We focus on exploiting neural networks for the analysis and planning stage in self-adaptive architectures.
One simple option to address such a need is to replace the reasoning based on logical rules with a neural network.
We show how to navigate in this continuum and create a neural network architecture that naturally embeds the original logical rules.
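A minimal sketch of the rule-embedding idea, with details assumed for illustration (the rule, thresholds, and class below are hypothetical): a crisp adaptation rule such as "IF utilisation is high AND latency is high THEN scale up" becomes a differentiable fuzzy rule whose thresholds can later be tuned by gradient descent instead of being hand-set.

```python
import torch
import torch.nn as nn

class FuzzyRule(nn.Module):
    def __init__(self, init_thresholds, steepness: float = 10.0):
        super().__init__()
        # Learnable thresholds start at the crisp rule's hand-set values.
        self.thresholds = nn.Parameter(torch.tensor(init_thresholds))
        self.steepness = steepness

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Soft "is high" membership per input; product t-norm plays the AND.
        membership = torch.sigmoid(self.steepness * (x - self.thresholds))
        return membership.prod(dim=-1)   # degree to which the rule fires

rule = FuzzyRule(init_thresholds=[0.8, 0.5])  # utilisation > 0.8 AND latency > 0.5
x = torch.tensor([[0.9, 0.7], [0.9, 0.1]])
print(rule(x))   # first case fires strongly, second barely
```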
arXiv Detail & Related papers (2021-12-17T12:17:16Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Automated Search for Resource-Efficient Branched Multi-Task Networks [81.48051635183916]
We propose a principled approach, rooted in differentiable neural architecture search, to automatically define branching structures in a multi-task neural network.
We show that our approach consistently finds high-performing branching structures within limited resource budgets.
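A loose sketch of differentiable branching under assumed details (not the paper's method): each task head chooses where to branch off a shared backbone via a softmax over learnable logits, so the branching structure is searched by plain gradient descent and can be discretised (argmax) once training ends.

```python
import torch
import torch.nn as nn

class SoftBranchNet(nn.Module):
    def __init__(self, depth: int, width: int, n_tasks: int):
        super().__init__()
        self.backbone = nn.ModuleList(nn.Linear(width, width) for _ in range(depth))
        self.heads = nn.ModuleList(nn.Linear(width, 1) for _ in range(n_tasks))
        # One logit per (task, backbone depth): where does this task branch off?
        self.branch_logits = nn.Parameter(torch.zeros(n_tasks, depth))

    def forward(self, x: torch.Tensor) -> list:
        feats = []
        h = x
        for layer in self.backbone:
            h = torch.relu(layer(h))
            feats.append(h)
        feats = torch.stack(feats)                          # (depth, batch, width)
        outs = []
        for t, head in enumerate(self.heads):
            w = torch.softmax(self.branch_logits[t], dim=0) # soft branch choice
            outs.append(head((w[:, None, None] * feats).sum(0)))
        return outs

net = SoftBranchNet(depth=4, width=32, n_tasks=3)
ys = net(torch.randn(8, 32))   # one output per task, branch points learnable
```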
arXiv Detail & Related papers (2020-08-24T09:49:19Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
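A hedged sketch of this recipe (an illustration, not the authors' code): treat computation nodes as vertices of a complete DAG, attach a learnable scalar to every edge, and let each node consume a sigmoid-gated sum of all earlier node outputs, so connectivity itself is learned by gradient descent.

```python
import torch
import torch.nn as nn

class CompleteGraphNet(nn.Module):
    def __init__(self, n_nodes: int, width: int):
        super().__init__()
        self.ops = nn.ModuleList(nn.Linear(width, width) for _ in range(n_nodes))
        # One logit per directed edge (j -> i) with j < i, stored densely.
        self.edge_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outs = [x]
        for i, op in enumerate(self.ops):
            # Gate every earlier output by its (sigmoid-squashed) edge weight.
            gates = torch.sigmoid(self.edge_logits[i, : len(outs)])
            agg = sum(g * o for g, o in zip(gates, outs))
            outs.append(torch.relu(op(agg)))
        return outs[-1]

net = CompleteGraphNet(n_nodes=4, width=16)
y = net(torch.randn(2, 16))    # edge logits receive gradients like any weight
```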
arXiv Detail & Related papers (2020-08-19T04:53:31Z)