ROSE: A Neurocomputational Architecture for Syntax
- URL: http://arxiv.org/abs/2303.08877v1
- Date: Wed, 15 Mar 2023 18:44:37 GMT
- Title: ROSE: A Neurocomputational Architecture for Syntax
- Authors: Elliot Murphy
- Abstract summary: This article proposes a neurocomputational architecture for syntax, termed the ROSE model.
Under ROSE, the basic data structures of syntax are atomic features, types of mental representations (R), and are coded at the single-unit and ensemble level.
Distinct forms of low frequency coupling and phase-amplitude coupling (delta-theta coupling via pSTS-IFG; theta-gamma coupling via IFG to conceptual hubs) then encode these structures onto distinct workspaces (E).
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A comprehensive model of natural language processing in the brain must
accommodate four components: representations, operations, structures and
encoding. It further requires a principled account of how these components
mechanistically, and causally, relate to each other. While previous models
have isolated regions of interest for structure-building and lexical access,
many gaps remain with respect to bridging distinct scales of neural complexity.
By expanding existing accounts of how neural oscillations can index various
linguistic processes, this article proposes a neurocomputational architecture
for syntax, termed the ROSE model (Representation, Operation, Structure,
Encoding). Under ROSE, the basic data structures of syntax are atomic features,
types of mental representations (R), and are coded at the single-unit and
ensemble level. Elementary computations (O) that transform these units into
manipulable objects accessible to subsequent structure-building levels are
coded via high frequency gamma activity. Low frequency synchronization and
cross-frequency coupling code for recursive categorial inferences (S). Distinct
forms of low frequency coupling and phase-amplitude coupling (delta-theta
coupling via pSTS-IFG; theta-gamma coupling via IFG to conceptual hubs) then
encode these structures onto distinct workspaces (E). Causally connecting R to
O is spike-phase/LFP coupling; connecting O to S is phase-amplitude coupling;
connecting S to E is a system of frontotemporal traveling oscillations;
connecting E to lower levels is low-frequency phase resetting of spike-LFP
coupling. ROSE is reliant on neurophysiologically plausible mechanisms, is
supported at all four levels by a range of recent empirical research, and
provides an anatomically precise and falsifiable grounding for the basic
property of natural language syntax: hierarchical, recursive
structure-building.
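Several of ROSE's proposed linking mechanisms hinge on phase-amplitude coupling, e.g. theta-gamma coupling between IFG and conceptual hubs. As a minimal illustrative sketch (my construction, not the paper's analysis pipeline), the strength of such coupling can be quantified with a mean-vector-length modulation index on synthetic signals:

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic 10 s recording at 1 kHz: a theta carrier plus gamma activity whose
# amplitude is locked to theta phase (the theta-gamma coupling ROSE invokes).
fs = 1000
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)                  # 6 Hz theta
gamma_amp = 1 + 0.8 * np.cos(2 * np.pi * 6 * t)    # amplitude tied to theta phase
gamma = gamma_amp * np.sin(2 * np.pi * 60 * t)     # 60 Hz gamma

phase = np.angle(hilbert(theta))                   # instantaneous theta phase
amp = np.abs(hilbert(gamma))                       # gamma amplitude envelope

# Mean-vector-length modulation index: near 0 when gamma amplitude is
# independent of theta phase, larger when gamma rides a preferred phase.
mvl = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Control: gamma with constant amplitude shows negligible coupling.
amp_flat = np.abs(hilbert(np.sin(2 * np.pi * 60 * t)))
mvl_flat = np.abs(np.mean(amp_flat * np.exp(1j * phase))) / np.mean(amp_flat)
print(f"coupled MVL = {mvl:.3f}, uncoupled MVL = {mvl_flat:.3f}")
```

The coupled signal yields a markedly higher index than the uncoupled control; on real data the same measure would be applied to band-passed LFP from the two regions.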
Related papers
- SFC-GAN: A Generative Adversarial Network for Brain Functional and Structural Connectome Translation [11.070185680800071]
Structural-Functional Connectivity GAN (SFC-GAN) is a novel framework for bidirectional translation between brain connectomes.
To preserve the topological integrity of these connectomes, we employ a structure-preserving loss that guides the model in capturing both global and local connectome patterns.
Our framework demonstrates superior performance in translating between SC and FC, outperforming baseline models in similarity and graph property evaluations.
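As a toy stand-in for a structure-preserving objective (my simplification, not SFC-GAN's actual loss), one can combine an element-wise term for local edge patterns with a node-degree term for global topology:

```python
import numpy as np

def structure_preserving_loss(pred, target, lam=0.5):
    """Toy structure-preserving objective for connectome translation:
    an element-wise edge term (local patterns) plus a node-degree term
    (global topology).  Illustrative only, not the paper's formulation."""
    local = np.mean((pred - target) ** 2)
    degree = np.mean((pred.sum(axis=1) - target.sum(axis=1)) ** 2)
    return local + lam * degree

a = np.eye(4)                                # toy 4-node adjacency matrix
print(structure_preserving_loss(a, a))       # 0.0 for a perfect translation
```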
arXiv Detail & Related papers (2025-01-13T04:30:41Z)
- Shadow of the (Hierarchical) Tree: Reconciling Symbolic and Predictive Components of the Neural Code for Syntax [1.223779595809275]
I discuss the prospects of reconciling the neural code for hierarchical 'vertical' syntax with linear and predictive 'horizontal' processes.
I provide a neurosymbolic mathematical model for how to inject symbolic representations into a neural regime encoding lexico-semantic statistical features.
arXiv Detail & Related papers (2024-12-02T08:44:16Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE)
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- On The Expressivity of Recurrent Neural Cascades [48.87943990557107]
Recurrent Neural Cascades (RNCs) are recurrent neural networks with no cyclic dependencies among recurrent neurons.
We show that RNCs can achieve the expressivity of all regular languages by introducing neurons that can implement groups.
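The cascade constraint amounts to a triangular recurrence: each unit may read earlier units' states but not later ones. A hand-coded sketch (my construction, not the paper's) with a single self-recurrent unit implementing the two-element group Z/2 already recognizes a regular language — strings with an odd number of 1s:

```python
def rnc_parity(bits):
    """One-unit 'cascade': the unit's next state depends only on its own
    state and the current input.  XOR is the group operation of Z/2, so
    the unit tracks the parity of 1s seen so far."""
    s = 0
    for x in bits:
        s = s ^ x          # group operation of Z/2
    return s

print(rnc_parity([1, 0, 1, 1]))   # three 1s -> parity 1
```

Deeper cascades stack such group-implementing units, with later units conditioned on earlier ones, which is how the paper's construction reaches all regular languages.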
arXiv Detail & Related papers (2023-12-14T15:47:26Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
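As a hedged illustration of fuzzy continuous relaxation (a toy example of mine, not LOGICSEG's actual formulae): a hard rule such as "dog ⇒ animal" can be grounded on predicted class scores so that violations become a differentiable penalty usable in network training:

```python
import numpy as np

def implication_loss(p_premise, p_conclusion):
    """Fuzzy relaxation of 'premise => conclusion': under Lukasiewicz
    semantics the implication holds to degree min(1, 1 - p + q), so the
    violation penalty is max(0, p - q) -- zero when the conclusion's
    score dominates, positive (and differentiable a.e.) otherwise."""
    return np.maximum(0.0, p_premise - p_conclusion)

p_dog = np.array([0.9, 0.2])       # per-pixel score for 'dog' (hypothetical)
p_animal = np.array([0.95, 0.1])   # per-pixel score for 'animal'
print(implication_loss(p_dog, p_animal))   # -> [0.  0.1]
```

The second pixel violates the rule (dog-score above animal-score) and is penalized; gradients of this penalty flow back into the network.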
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757]
We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS)
NSR integrates neural perception, syntactic parsing, and semantic reasoning.
We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
arXiv Detail & Related papers (2022-10-04T13:27:38Z)
- Spatio-Temporal Representation Factorization for Video-based Person Re-Identification [55.01276167336187]
We propose the Spatio-Temporal Representation Factorization (STRF) module for video-based person re-ID.
STRF is a flexible new computational unit that can be used in conjunction with most existing 3D convolutional neural network architectures for re-ID.
We empirically show that STRF improves performance of various existing baseline architectures while demonstrating new state-of-the-art results.
arXiv Detail & Related papers (2021-07-25T19:29:37Z)
- Discrete-Valued Neural Communication [85.3675647398994]
We show that restricting the transmitted information among components to discrete representations is a beneficial bottleneck.
Even though individuals have different understandings of what a "cat" is based on their specific experiences, the shared discrete token makes it possible for communication among individuals to be unimpeded by individual differences in internal representation.
We extend the quantization mechanism from the Vector-Quantized Variational Autoencoder to multi-headed discretization with shared codebooks and use it for discrete-valued neural communication.
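A minimal numpy sketch of multi-headed discretization with a shared codebook (shapes and sizes are my illustrative choices, not the paper's configuration): the vector to transmit is split into heads, each head is snapped to its nearest code, and only the code indices cross the bottleneck:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))   # 16 codes of dim 4, shared by all heads

def discretize(vec, n_heads=2):
    """Multi-headed discretization (VQ-VAE style): split the vector into
    heads, snap each head to its nearest codebook entry, and return the
    per-head token indices plus the quantized reconstruction."""
    heads = vec.reshape(n_heads, -1)                              # (heads, 4)
    dists = np.linalg.norm(heads[:, None, :] - codebook[None], axis=-1)
    idx = dists.argmin(axis=1)                                    # one token per head
    return idx, codebook[idx].reshape(vec.shape)

z = rng.normal(size=8)
tokens, z_q = discretize(z)
print(tokens)    # two code indices -- the only information transmitted
```

Because every component decodes the same indices against the same codebook, communication is insensitive to differences in the senders' continuous representations.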
arXiv Detail & Related papers (2021-07-06T03:09:25Z)
- A STDP-based Encoding Algorithm for Associative and Composite Data [0.0]
This work proposes a practical memory model based on STDP that can store and retrieve high-dimensional associative data.
The model combines STDP dynamics with an encoding scheme for distributed representations and can handle multiple composite data in a continuous manner.
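The STDP dynamics the model builds on follow the standard pair-based learning window (parameter values below are illustrative defaults, not the paper's): the weight change depends on the timing difference between pre- and postsynaptic spikes:

```python
import numpy as np

def stdp(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP window: potentiate when the presynaptic spike leads
    the postsynaptic one (delta_t = t_post - t_pre > 0 ms), depress when
    it lags.  The effect decays exponentially with |delta_t|."""
    return np.where(delta_t > 0,
                    a_plus * np.exp(-delta_t / tau),
                    -a_minus * np.exp(delta_t / tau))

print(stdp(5.0), stdp(-5.0))   # small LTP (positive), small LTD (negative)
```

Repeated application of this rule to spike pairs is what lets correlated high-dimensional patterns become stored associations.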
arXiv Detail & Related papers (2021-04-25T20:26:52Z)
- Separation of Memory and Processing in Dual Recurrent Neural Networks [0.0]
We explore a neural network architecture that stacks a recurrent layer and a feedforward layer that is also connected to the input.
When noise is introduced into the activation function of the recurrent units, these neurons are forced into a binary activation regime that makes the networks behave much as finite automata.
arXiv Detail & Related papers (2020-05-17T11:38:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.