Deep Learning and the Global Workspace Theory
- URL: http://arxiv.org/abs/2012.10390v2
- Date: Sat, 20 Feb 2021 00:33:38 GMT
- Title: Deep Learning and the Global Workspace Theory
- Authors: Rufin VanRullen and Ryota Kanai
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in deep learning have allowed Artificial Intelligence (AI) to reach near human-level performance in many sensory, perceptual, linguistic, or cognitive tasks. There is a growing need, however, for novel, brain-inspired cognitive architectures. The Global Workspace Theory refers to a large-scale system integrating and distributing information among networks of specialized modules to create higher-level forms of cognition and awareness. We argue that the time is ripe to consider explicit implementations of this theory using deep learning techniques. We propose a roadmap based on unsupervised neural translation between multiple latent spaces (neural networks trained for distinct tasks, on distinct sensory inputs and/or modalities) to create a unique, amodal global latent workspace (GLW). Potential functional advantages of GLW are reviewed, along with neuroscientific implications.
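The roadmap above can be made concrete with a small sketch. The module names, dimensions, and loss weighting below are illustrative assumptions, not taken from the paper: two pretrained latent spaces are linked through a shared workspace, and demi-cycle and cycle-consistency losses align them without any paired supervision.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalLatentWorkspace(nn.Module):
    """Hypothetical sketch of a GLW: map each pretrained latent space
    into a shared workspace and back, trained with cycle consistency
    so no paired supervision across modalities is needed."""
    def __init__(self, dim_a=128, dim_b=256, dim_glw=64):
        super().__init__()
        self.enc_a = nn.Linear(dim_a, dim_glw)   # latent A -> workspace
        self.enc_b = nn.Linear(dim_b, dim_glw)   # latent B -> workspace
        self.dec_a = nn.Linear(dim_glw, dim_a)   # workspace -> latent A
        self.dec_b = nn.Linear(dim_glw, dim_b)   # workspace -> latent B

    def losses(self, z_a, z_b):
        # Demi-cycle: encode into the workspace and decode back.
        demi = (F.mse_loss(self.dec_a(self.enc_a(z_a)), z_a)
                + F.mse_loss(self.dec_b(self.enc_b(z_b)), z_b))
        # Full cycle: A -> workspace -> B -> workspace -> A (and B likewise);
        # this consistency term is what aligns the two spaces without pairs.
        z_b_hat = self.dec_b(self.enc_a(z_a))
        z_a_hat = self.dec_a(self.enc_b(z_b))
        cycle = (F.mse_loss(self.dec_a(self.enc_b(z_b_hat)), z_a)
                 + F.mse_loss(self.dec_b(self.enc_a(z_a_hat)), z_b))
        return demi + cycle

glw = GlobalLatentWorkspace()
loss = glw.losses(torch.randn(32, 128), torch.randn(32, 256))
loss.backward()
```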
Related papers
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models with task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
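As a rough illustration of the heterogeneity idea in the summary above, the sketch below gives each leaky integrate-and-fire neuron its own membrane time constant and scales the input by a scalar neuromodulatory gain. All parameter values and the `modulation` argument are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 100, 200, 1e-3

# Heterogeneity: each neuron gets its own membrane time constant,
# drawn from a distribution instead of one shared value.
tau = rng.uniform(5e-3, 50e-3, size=n)
v = np.zeros(n)          # membrane potentials
threshold = 1.0

def lif_step(v, input_current, modulation):
    """One Euler step of leaky integrate-and-fire dynamics; `modulation`
    is a hypothetical scalar neuromodulatory gain on the input."""
    v = v + dt / tau * (-v + modulation * input_current)
    spikes = v >= threshold
    v = np.where(spikes, 0.0, v)   # reset neurons that spiked
    return v, spikes

for _ in range(steps):
    v, spikes = lif_step(v, rng.normal(1.5, 0.5, size=n), modulation=1.2)
```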
- Unveiling A Core Linguistic Region in Large Language Models [49.860260050718516]
This paper conducts an analogical study of large language models, using brain localization as a prototype.
We have discovered a core region in large language models that corresponds to linguistic competence.
We observe that improved linguistic competence is not necessarily accompanied by a higher knowledge level in the model.
arXiv Detail & Related papers (2023-10-23T13:31:32Z)
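The summary above does not specify the localization procedure, so the sketch below only illustrates the general idea: ablate one contiguous parameter region at a time and watch how the language-modeling loss degrades. The tiny stand-in model, slice size, and loop are all hypothetical; the paper works with real large language models.

```python
import torch
import torch.nn as nn

# Stand-in "language model": a tiny MLP; a real study would use an LLM.
model = nn.Sequential(nn.Embedding(1000, 64), nn.Flatten(),
                      nn.Linear(64 * 8, 1000))

def loss_on_batch(m, tokens, targets):
    return nn.functional.cross_entropy(m(tokens), targets)

tokens = torch.randint(0, 1000, (32, 8))
targets = torch.randint(0, 1000, (32,))
baseline = loss_on_batch(model, tokens, targets).item()

# Localization by ablation: zero a small contiguous slice of one weight
# matrix, measure how much the loss rises, restore, and move on; the
# slice whose removal hurts most marks a candidate "core region".
with torch.no_grad():
    w = model[2].weight
    for start in range(0, w.shape[0], 200):
        saved = w[start:start + 200].clone()
        w[start:start + 200] = 0.0
        degraded = loss_on_batch(model, tokens, targets).item()
        w[start:start + 200] = saved   # restore before the next slice
        print(f"rows {start}-{start + 199}: {baseline:.3f} -> {degraded:.3f}")
```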
- Brain-inspired learning in artificial neural networks: a review [5.064447369892274]
We review current brain-inspired learning representations in artificial neural networks.
We investigate the integration of more biologically plausible mechanisms, such as synaptic plasticity, to enhance these networks' capabilities.
arXiv Detail & Related papers (2023-05-18T18:34:29Z)
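As a minimal example of the synaptic plasticity mechanisms the review above refers to, the following sketch implements a plain Hebbian rule with weight decay; the learning rate and decay constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
pre, post = 20, 10
W = rng.normal(0, 0.1, size=(post, pre))

def hebbian_step(W, x, lr=0.01, decay=0.001):
    """Local plasticity: the weight change depends only on the activity
    of the two neurons a synapse connects, not on a global error signal."""
    y = np.tanh(W @ x)                    # postsynaptic activity
    W += lr * np.outer(y, x) - decay * W  # Hebb term plus decay for stability
    return W, y

for _ in range(100):
    W, y = hebbian_step(W, rng.normal(size=pre))
```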
- Interplay between depth of neural networks and locality of target functions [5.33024001730262]
We report a remarkable interplay between the depth of a neural network and the locality of the target function.
We find that depth is beneficial for learning local functions but detrimental to learning global functions.
arXiv Detail & Related papers (2022-01-28T12:41:24Z)
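To make "local" versus "global" concrete, the sketch below defines one target function of each kind over binary inputs. These particular targets, a k-sparse product and full parity, are illustrative assumptions, not necessarily the functions studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 32
X = rng.choice([-1.0, 1.0], size=(1000, d))

# "Local" target: depends only on a small window of adjacent inputs.
def local_target(X, k=3):
    return np.prod(X[:, :k], axis=1)

# "Global" target: every coordinate matters (parity over all d inputs).
def global_target(X):
    return np.prod(X, axis=1)

y_local, y_global = local_target(X), global_target(X)
# The paper's finding, in these terms: deeper networks tend to fit
# y_local more easily, while depth can hurt on y_global.
```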
- Towards continual task learning in artificial neural networks: current approaches and insights from neuroscience [0.0]
The innate capacity of humans and other animals to learn a diverse, and often interfering, range of knowledge is a hallmark of natural intelligence.
The ability of artificial neural networks to learn across a range of tasks and domains is a clear goal of artificial intelligence.
arXiv Detail & Related papers (2021-12-28T13:50:51Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained on large-scale multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
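The summary above does not state the training objective; a common choice for visual-textual pretraining is a symmetric InfoNCE contrastive loss, sketched below as an assumption rather than as the paper's exact method.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """InfoNCE over a batch: matching image/text pairs are positives,
    all other pairings in the batch are negatives. (A common objective
    for multimodal pretraining; assumed here, not taken from the paper.)"""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature        # scaled cosine similarities
    labels = torch.arange(len(logits))          # i-th image <-> i-th text
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.t(), labels)) / 2

loss = contrastive_loss(torch.randn(64, 512), torch.randn(64, 512))
```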
- Cognitively Inspired Learning of Incremental Drifting Concepts [31.3178953771424]
Inspired by the learning mechanisms of the nervous system, we develop a computational model that enables a deep neural network to learn new concepts incrementally.
Our model can generate pseudo-data points for experience replay, accumulating new experiences alongside past learned ones without causing cross-task interference.
arXiv Detail & Related papers (2021-10-09T23:26:29Z)
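A minimal sketch of the pseudo-data replay idea in the summary above: a generator stands in for past concepts, and its samples, labeled by the frozen previous solver, are mixed into each new batch. The `Generator`, `replay_batch`, and all dimensions are hypothetical names for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Stand-in generator of pseudo-inputs for past concepts."""
    def __init__(self, z_dim=16, x_dim=784):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim))
    def sample(self, n):
        return self.net(torch.randn(n, self.z_dim))

def replay_batch(new_x, new_y, generator, old_solver):
    """Mix real data from the current concept with pseudo-data replayed
    from past concepts, labeled by the frozen previous solver."""
    with torch.no_grad():
        pseudo_x = generator.sample(len(new_x))
        pseudo_y = old_solver(pseudo_x).argmax(dim=-1)  # soft labels also work
    return torch.cat([new_x, pseudo_x]), torch.cat([new_y, pseudo_y])

gen, solver = Generator(), nn.Linear(784, 10)
x, y = replay_batch(torch.randn(32, 784),
                    torch.randint(0, 10, (32,)), gen, solver)
```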
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
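The summary above says learning proceeds without backprop; one family of such schemes, predictive coding, updates each layer from locally computed prediction errors. The sketch below is a simplified two-layer version under that assumption, not the paper's full active neural generative coding framework.

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_hid, d_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (d_hid, d_in))
W2 = rng.normal(0, 0.1, (d_out, d_hid))

def local_update(x, target, lr=0.05, settle_steps=20):
    """Each layer's weights change using only the prediction error
    computed at that layer; no gradients are backpropagated."""
    global W1, W2
    z = np.tanh(W1 @ x)                 # initial hidden state
    for _ in range(settle_steps):
        e2 = target - W2 @ z            # output-layer prediction error
        e1 = z - np.tanh(W1 @ x)        # hidden state vs bottom-up prediction
        z = z + 0.1 * (W2.T @ e2 - e1)  # settle the hidden state
    e2 = target - W2 @ z
    e1 = z - np.tanh(W1 @ x)
    W2 += lr * np.outer(e2, z)          # local, Hebbian-like updates
    W1 += lr * np.outer(e1, x)          # (tanh derivative omitted for brevity)
    return float(e2 @ e2)

for _ in range(200):
    err = local_update(rng.normal(size=d_in), np.ones(d_out) * 0.5)
```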
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network toward a desired output target; the control signal itself can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z)
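A stripped-down sketch of the feedback-control idea described above, assuming a single linear layer: an integral controller nudges the output toward the target, and the accumulated control signal serves as the local teaching signal for the weights. The gains and settling schedule are illustrative, not the paper's full multi-layer method.

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(0, 0.1, (4, 8))      # a single linear layer for brevity

def dfc_step(x, target, lr=0.05, gain=0.2, settle_steps=50):
    """Accumulate a control signal that pushes the controlled output
    toward the target, then use that signal for the weight update."""
    global W
    a = W @ x                       # feedforward activity
    u = np.zeros_like(a)            # control signal
    for _ in range(settle_steps):
        v = a + u                   # controlled activity
        u += gain * (target - v)    # integral controller accumulates error
    W += lr * np.outer(u, x)        # local rule: control signal times input
    return float(np.sum((target - W @ x) ** 2))

for _ in range(100):
    err = dfc_step(rng.normal(size=8), np.ones(4) * 0.3)
```

At steady state the control signal equals the residual error, so the update reduces to a delta rule here; the appeal of the scheme is that each weight sees only locally available quantities.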
- What can linearized neural networks actually say about generalization? [67.83999394554621]
For certain infinitely wide neural networks, neural tangent kernel (NTK) theory fully characterizes generalization.
We show that the linear approximations can indeed rank the learning complexity of certain tasks for neural networks.
Our work provides concrete examples of novel deep learning phenomena which can inspire future theoretical research.
arXiv Detail & Related papers (2021-06-12T13:05:11Z)
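The linearization in question can be computed directly: the empirical NTK is K = J Jᵀ, where J stacks the gradients of the network output at each input, and kernel regression with K is exactly learning in the linearized model. The sketch below does this for a toy network; the architecture and sizes are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(5, 32), nn.Tanh(), nn.Linear(32, 1))
X = torch.randn(10, 5)

def empirical_ntk(net, X):
    """K[i, j] = <df(x_i)/dtheta, df(x_j)/dtheta>: the kernel of the
    network linearized around its current parameters."""
    grads = []
    for x in X:
        net.zero_grad()
        net(x.unsqueeze(0)).squeeze().backward()
        grads.append(torch.cat([p.grad.flatten() for p in net.parameters()]))
    J = torch.stack(grads)          # (n_points, n_params) Jacobian
    return J @ J.t()

K = empirical_ntk(net, X)
# The spectrum of K indicates how fast each mode of a target is learned,
# which is one way a linear approximation can rank task complexity.
print(torch.linalg.eigvalsh(K)[-3:])   # a few largest eigenvalues
```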
- The Unreasonable Effectiveness of Deep Learning in Artificial Intelligence [1.5229257192293197]
Deep learning networks have been trained to recognize speech, caption photographs, and translate text between languages at high levels of performance.
Deep learning was inspired by the architecture of the cortex, and similar principles may be found in other brain regions that are essential for planning and survival.
arXiv Detail & Related papers (2020-02-12T05:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.