The Architecture of a Biologically Plausible Language Organ
- URL: http://arxiv.org/abs/2306.15364v1
- Date: Tue, 27 Jun 2023 10:25:22 GMT
- Title: The Architecture of a Biologically Plausible Language Organ
- Authors: Daniel Mitropolsky, Christos H. Papadimitriou
- Abstract summary: We present a simulated biologically plausible language organ, made up of stylized but realistic neurons, synapses, brain areas, plasticity, and a simplified model of sensory perception.
We show through experiments that this model succeeds in an important early step in language acquisition.
- Score: 2.5466702304890294
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present a simulated biologically plausible language organ, made up of
stylized but realistic neurons, synapses, brain areas, plasticity, and a
simplified model of sensory perception. We show through experiments that this
model succeeds in an important early step in language acquisition: the learning
of nouns, verbs, and their meanings, from the grounded input of only a modest
number of sentences. Learning in this system is achieved through Hebbian
plasticity, and without backpropagation. Our model goes beyond a parser
previously designed in a similar environment, with the critical addition of a
biologically plausible account for how language can be acquired in the infant's
brain, not just processed by a mature brain.
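The abstract's key claim is that learning happens through Hebbian plasticity alone, with no backpropagation. A minimal sketch of that style of learning, assuming a k-winners-take-all brain area and multiplicative weight strengthening (in the spirit of the assembly-calculus models this line of work builds on; the function names and parameters here are illustrative, not from the paper):

```python
import random

def k_cap(inputs, k):
    # k-winners-take-all: indices of the k units receiving the highest total input
    return set(sorted(range(len(inputs)), key=lambda j: -inputs[j])[:k])

def hebbian_step(weights, pre_active, n_post, k, beta=0.1):
    """One projection step into a downstream area.

    weights[i][j] is the synapse from pre-neuron i to post-neuron j.
    Only the synapses between co-firing neurons are strengthened (Hebb's rule);
    no error signal is propagated backward.
    """
    # Total synaptic input to each post-neuron from the currently firing pre-neurons
    inputs = [sum(weights[i][j] for i in pre_active) for j in range(n_post)]
    winners = k_cap(inputs, k)
    # Hebbian update: scale each pre->post synapse between firing neurons by (1 + beta)
    for i in pre_active:
        for j in winners:
            weights[i][j] *= (1.0 + beta)
    return winners
```

Repeating the step with the same stimulus makes the same winner set fire again, since its incoming synapses only grow: a stable "assembly" forms purely from local plasticity.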
Related papers
- Simulated Language Acquisition in a Biologically Realistic Model of the Brain [0.8287206589886881]
We introduce a simple mathematical formulation of six basic and broadly accepted principles of neuroscience. We implement a simulated neuromorphic system based on this formalism, which is capable of basic language acquisition. We discuss several possible extensions and implications of this result.
arXiv Detail & Related papers (2025-07-15T23:04:44Z)
- Hebbian learning the local structure of language [0.0]
We derive the foundations of an effective human language model inspired by microscopic constraints.
It has two parts: (1) a hierarchy of neurons which learns to tokenize words from text (whichiswhatyoudowhenyoureadthis); and (2) additional neurons which bind the learned semanticless patterns of the tokenizer into a semanticful token.
arXiv Detail & Related papers (2025-03-03T21:15:57Z)
- Discovering Hidden Visual Concepts Beyond Linguistic Input in Infant Learning [18.43931715859825]
As computer vision seeks to replicate the human vision system, understanding infant visual development may offer valuable insights.
In this paper, we present an interdisciplinary study exploring this question: can a computational model that imitates the infant learning process develop broader visual concepts, similar to how infants naturally learn?
Our work bridges cognitive science and computer vision by analyzing the internal representations of a computational model trained on infant visual and linguistic inputs.
arXiv Detail & Related papers (2025-01-09T12:55:55Z)
- Brain-Like Language Processing via a Shallow Untrained Multihead Attention Network [16.317199232071232]
Large Language Models (LLMs) have been shown to be effective models of the human language system.
In this work, we investigate the key architectural components driving the surprising alignment of untrained models.
arXiv Detail & Related papers (2024-06-21T12:54:03Z)
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z)
- Universal Syntactic Structures: Modeling Syntax for Various Natural Languages [0.0]
We aim to provide an explanation for how the human brain might connect words for sentence formation.
A novel approach to modeling syntactic representation is introduced, potentially showing the existence of universal syntactic structures for all natural languages.
arXiv Detail & Related papers (2023-12-28T20:44:26Z)
- Causal Graph in Language Model Rediscovers Cortical Hierarchy in Human Narrative Processing [0.0]
Previous studies have demonstrated that the features of language models can be mapped to fMRI brain activity.
This raises the question: is there a commonality between information processing in language models and the human brain?
To estimate information flow patterns in a language model, we examined the causal relationships between different layers.
arXiv Detail & Related papers (2023-11-17T10:09:12Z)
- Language Generation from Brain Recordings [68.97414452707103]
We propose a generative language BCI that utilizes the capacity of a large language model and a semantic brain decoder.
The proposed model can generate coherent language sequences aligned with the semantic content of visual or auditory language stimuli.
Our findings demonstrate the potential and feasibility of employing BCIs in direct language generation.
arXiv Detail & Related papers (2023-11-16T13:37:21Z)
- Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps [75.84770193489639]
We examine the impact of test loss, training corpus and model architecture on the prediction of functional Magnetic Resonance Imaging timecourses of participants listening to an audiobook.
We find that untrained versions of each model already explain a significant amount of signal in the brain by capturing similarity in brain responses across identical words.
We suggest good practices for future studies aiming at explaining the human language system using neural language models.
arXiv Detail & Related papers (2022-07-07T15:37:17Z)
- Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects [82.81964713263483]
A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli.
Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli.
arXiv Detail & Related papers (2021-10-12T15:30:21Z)
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z)
- Does injecting linguistic structure into language models lead to better alignment with brain recordings? [13.880819301385854]
We evaluate whether language models align better with brain recordings if their attention is biased by annotations from syntactic or semantic formalisms.
Our proposed approach enables the evaluation of more targeted hypotheses about the composition of meaning in the brain.
arXiv Detail & Related papers (2021-01-29T14:42:02Z)
- Crossmodal Language Grounding in an Embodied Neurocognitive Model [28.461246169379685]
Human infants are able to acquire natural language seemingly easily at an early age.
From a neuroscientific perspective, natural language is embodied, grounded in most, if not all, sensory and sensorimotor modalities.
We present a neurocognitive model for language grounding which reflects bio-inspired mechanisms.
arXiv Detail & Related papers (2020-06-24T08:12:09Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the tools to the language-ready brain for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.