Brain-inspired probabilistic generative model for double articulation analysis of spoken language
- URL: http://arxiv.org/abs/2207.02457v1
- Date: Wed, 6 Jul 2022 06:03:10 GMT
- Title: Brain-inspired probabilistic generative model for double articulation analysis of spoken language
- Authors: Akira Taniguchi, Maoko Muro, Hiroshi Yamakawa, Tadahiro Taniguchi
- Abstract summary: The human brain analyzes the double articulation structure in spoken language.
Where and how DAA is performed in the human brain has not been established.
This study proposes a PGM for a DAA hypothesis that can be realized in the brain based on the outcomes of several neuroscientific surveys.
- Score: 7.0349768355860895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The human brain, among its several functions, analyzes the double
articulation structure in spoken language, i.e., double articulation analysis
(DAA). A hierarchical structure in which words are connected to form a sentence
and words are composed of phonemes or syllables is called a double articulation
structure. Where and how DAA is performed in the human brain has not been
established, although some insights have been obtained. In addition, existing
computational models based on a probabilistic generative model (PGM) do not
incorporate neuroscientific findings, and their consistency with the brain has
not been previously discussed. This study compared, mapped, and integrated
these existing computational models with neuroscientific findings to bridge
this gap, and the findings are relevant for future applications and further
research. This study proposes a PGM for a DAA hypothesis that can be realized
in the brain based on the outcomes of several neuroscientific surveys. The
study involved (i) investigation and organization of anatomical structures
related to spoken language processing, and (ii) design of a PGM that matches
the anatomy and functions of the region of interest. Therefore, this study
provides novel insights that will be foundational to further exploring DAA in
the brain.
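As a purely illustrative aid (not the paper's PGM, and with a hypothetical phoneme segmentation invented for the example), the two-layer structure described above can be sketched as nested data: a sentence is a sequence of words, and each word is a sequence of phonemes.

```python
# Illustrative sketch of a double articulation structure. The phoneme
# segmentation below is hypothetical, chosen only to show the two layers.
sentence = [
    ("brain", ["b", "r", "ei", "n"]),
    ("model", ["m", "o", "d", "e", "l"]),
]

def words(s):
    """Upper articulation layer: the word sequence forming the sentence."""
    return [w for w, _ in s]

def phonemes(s):
    """Lower articulation layer: the flattened phoneme sequence."""
    return [p for _, ps in s for p in ps]

print(words(sentence))
print(phonemes(sentence))
```

Double articulation analysis works in the opposite direction: given only the lower-layer phoneme (or syllable) stream, it must recover the word boundaries, which is what the proposed PGM models.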
Related papers
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- On the Shape of Brainscores for Large Language Models (LLMs) [0.0]
"Brainscore" emerged as a means to evaluate the functional similarity between Large Language Models (LLMs) and human brain/neural systems.
Our efforts were dedicated to mining the meaning of the novel score by constructing topological features derived from human fMRI data.
We trained 36 Linear Regression Models and conducted thorough statistical analyses to discern reliable and valid features from our constructed ones.
arXiv Detail & Related papers (2024-05-10T13:22:20Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding by employing only one model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps [75.84770193489639]
We examine the impact of test loss, training corpus and model architecture on the prediction of functional Magnetic Resonance Imaging timecourses of participants listening to an audiobook.
We find that untrained versions of each model already explain a significant amount of signal in the brain by capturing similarity in brain responses across identical words.
We suggest good practices for future studies aiming at explaining the human language system using neural language models.
arXiv Detail & Related papers (2022-07-07T15:37:17Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- Connecting Neural Response measurements & Computational Models of language: a non-comprehensive guide [5.523143941738335]
Recent advances in language modelling and in neuroimaging promise potential improvements in the investigation of language's neurobiology.
This survey traces a line from early research linking Event Related Potentials and complexity measures derived from simple language models to contemporary studies employing Artificial Neural Network models trained on large corpora.
arXiv Detail & Related papers (2022-03-10T11:24:54Z)
- Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects [82.81964713263483]
A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli.
Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli.
arXiv Detail & Related papers (2021-10-12T15:30:21Z)
- Does injecting linguistic structure into language models lead to better alignment with brain recordings? [13.880819301385854]
We evaluate whether language models align better with brain recordings if their attention is biased by annotations from syntactic or semantic formalisms.
Our proposed approach enables the evaluation of more targeted hypotheses about the composition of meaning in the brain.
arXiv Detail & Related papers (2021-01-29T14:42:02Z)
- Emergence of Separable Manifolds in Deep Language Representations [26.002842878797765]
Deep neural networks (DNNs) have shown much empirical success in solving perceptual tasks across various cognitive modalities.
Recent studies report considerable similarities between representations extracted from task-optimized DNNs and neural populations in the brain.
DNNs have subsequently become a popular model class to infer computational principles underlying complex cognitive functions.
arXiv Detail & Related papers (2020-06-01T17:23:44Z)
- DeepRetinotopy: Predicting the Functional Organization of Human Visual Cortex from Structural MRI Data using Geometric Deep Learning [125.99533416395765]
We developed a deep learning model capable of exploiting the structure of the cortex to learn the complex relationship between brain function and anatomy from structural and functional MRI data.
Our model was able to predict the functional organization of human visual cortex from anatomical properties alone, and it was also able to predict nuanced variations across individuals.
arXiv Detail & Related papers (2020-05-26T04:54:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.