Syntactic Structure Processing in the Brain while Listening
- URL: http://arxiv.org/abs/2302.08589v1
- Date: Thu, 16 Feb 2023 21:28:11 GMT
- Title: Syntactic Structure Processing in the Brain while Listening
- Authors: Subba Reddy Oota, Mounika Marreddy, Manish Gupta and Bapi Raju Surampudi
- Abstract summary: There are two popular syntactic parsing methods: constituency and dependency parsing.
Recent works have used syntactic embeddings based on constituency trees, incremental top-down parsing, and other word syntactic features for brain activity prediction given the text stimuli to study how the syntax structure is represented in the brain's language network.
We investigate the predictive power of the brain encoding models in three settings: (i) individual performance of the constituency and dependency syntactic parsing based embedding methods, (ii) efficacy of these syntactic parsing based embedding methods when controlling for basic syntactic signals, and (iii) relative effectiveness of each of the syntactic embedding methods when controlling for the other.
- Score: 3.735055636181383
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Syntactic parsing is the task of assigning a syntactic structure to a
sentence. There are two popular syntactic parsing methods: constituency and
dependency parsing. Recent works have used syntactic embeddings based on
constituency trees, incremental top-down parsing, and other word syntactic
features for brain activity prediction given the text stimuli to study how the
syntax structure is represented in the brain's language network. However, the
effectiveness of dependency parse trees or the relative predictive power of the
various syntax parsers across brain areas, especially for the listening task,
is yet unexplored. In this study, we investigate the predictive power of the
brain encoding models in three settings: (i) individual performance of the
constituency and dependency syntactic parsing based embedding methods, (ii)
efficacy of these syntactic parsing based embedding methods when controlling
for basic syntactic signals, (iii) relative effectiveness of each of the
syntactic embedding methods when controlling for the other. Further, we explore
the relative importance of syntactic information (from these syntactic
embedding methods) versus semantic information using BERT embeddings. We find
that constituency parsers help explain activations in the temporal lobe and
middle-frontal gyrus, while dependency parsers better encode syntactic
structure in the angular gyrus and posterior cingulate cortex. Although
semantic signals from BERT are more effective compared to any of the syntactic
features or embedding methods, syntactic embedding methods explain additional
variance for a few brain regions.
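The encoding analysis described in the abstract can be illustrated with a small sketch: a voxelwise ridge-regression model maps stimulus embeddings (constituency- or dependency-parse features, or BERT activations) to fMRI responses, and predictive power is the cross-validated correlation between predicted and observed responses per voxel. This is a minimal sketch under assumed data shapes and preprocessing, not the authors' exact pipeline; the variance-partitioning comment at the end only approximates the paper's "controlling for" analyses.

```python
# Minimal sketch of a voxelwise brain encoding model: ridge regression maps
# stimulus embeddings to fMRI responses; predictive power is measured per
# voxel with cross-validated Pearson correlation. Shapes, feature names, and
# preprocessing are assumptions, not the authors' exact pipeline.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold


def encoding_score(features: np.ndarray, fmri: np.ndarray, n_folds: int = 5) -> np.ndarray:
    """Return the mean cross-validated Pearson r per voxel.

    features : (n_TRs, n_dims)   stimulus embeddings aligned to fMRI TRs
    fmri     : (n_TRs, n_voxels) preprocessed BOLD responses
    """
    scores = np.zeros((n_folds, fmri.shape[1]))
    for i, (train, test) in enumerate(KFold(n_folds).split(features)):
        model = RidgeCV(alphas=np.logspace(-1, 4, 10)).fit(features[train], fmri[train])
        pred = model.predict(features[test])
        # Pearson correlation between predicted and observed response, per voxel
        p = (pred - pred.mean(0)) / (pred.std(0) + 1e-8)
        y = (fmri[test] - fmri[test].mean(0)) / (fmri[test].std(0) + 1e-8)
        scores[i] = (p * y).mean(0)
    return scores.mean(0)


# "Controlling for" a baseline feature set can be approximated by comparing
# the combined model against the baseline-only model per voxel:
# extra_variance = encoding_score(np.hstack([syntax, baseline]), fmri) \
#                  - encoding_score(baseline, fmri)
```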
Related papers
- Multipath parsing in the brain [4.605070569473395]
Humans understand sentences word-by-word, in the order that they hear them.
We investigate how humans process these syntactic ambiguities by correlating predictions from incremental dependency parsers with timecourse data from people undergoing functional neuroimaging while listening to an audiobook.
In both English and Chinese, we find evidence for multipath parsing. Brain regions associated with this multipath effect include bilateral superior temporal gyrus.
arXiv Detail & Related papers (2024-01-31T18:07:12Z) - Information-Restricted Neural Language Models Reveal Different Brain
Regions' Sensitivity to Semantics, Syntax and Context [87.31930367845125]
We trained a lexical language model, GloVe, and a supra-lexical language model, GPT-2, on a text corpus.
We then assessed to what extent these information-restricted models were able to predict the time-courses of fMRI signal of humans listening to naturalistic text.
Our analyses show that, while most brain regions involved in language are sensitive to both syntactic and semantic variables, the relative magnitudes of these effects vary considerably across regions.
arXiv Detail & Related papers (2023-02-28T08:16:18Z) - Combining Improvements for Exploiting Dependency Trees in Neural
Semantic Parsing [1.0437764544103274]
In this paper, we examine three methods to incorporate such dependency information in a Transformer based semantic parsing system.
We first replace standard self-attention heads in the encoder with parent-scaled self-attention (PASCAL) heads.
We then insert constituent attention (CA) into the encoder, which adds an extra constraint on attention heads so that they can better capture the inherent dependency structure of input sentences.
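A hedged sketch of what a parent-scaled self-attention (PASCAL) head could look like: the standard attention weights of each query token are reweighted by a Gaussian centred on the position of that token's dependency parent and then renormalised. The Gaussian form, the value of sigma, and the renormalisation step are assumptions here, not a verbatim reimplementation of the paper.

```python
# Sketch of a parent-scaled self-attention head: attention weights are
# modulated by a Gaussian prior centred on each token's dependency parent.
# Gaussian form, sigma, and renormalisation are assumptions.
import numpy as np


def parent_scaled_attention(q, k, v, parent_pos, sigma=1.0):
    """q, k, v: (seq_len, d) arrays; parent_pos: (seq_len,) index of each
    token's dependency parent (the root may point to itself)."""
    parent_pos = np.asarray(parent_pos)
    seq_len, d = q.shape
    scores = q @ k.T / np.sqrt(d)                        # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys

    # Gaussian prior over key positions, centred on each query's parent
    positions = np.arange(seq_len)
    prior = np.exp(-0.5 * ((positions[None, :] - parent_pos[:, None]) / sigma) ** 2)

    scaled = weights * prior
    scaled /= scaled.sum(axis=-1, keepdims=True)         # renormalise
    return scaled @ v
```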
arXiv Detail & Related papers (2021-12-25T03:41:42Z) - Decomposing lexical and compositional syntax and semantics with deep
language models [82.81964713263483]
The activations of language transformers like GPT2 have been shown to linearly map onto brain activity during speech comprehension.
Here, we propose a taxonomy to factorize the high-dimensional activations of language models into four classes: lexical, compositional, syntactic, and semantic representations.
The results highlight two findings. First, compositional representations recruit a more widespread cortical network than lexical ones, and encompass the bilateral temporal, parietal and prefrontal cortices.
arXiv Detail & Related papers (2021-03-02T10:24:05Z) - Syntactic representation learning for neural network based TTS with
syntactic parse tree traversal [49.05471750563229]
We propose a syntactic representation learning method based on syntactic parse tree to automatically utilize the syntactic structure information.
Experimental results demonstrate the effectiveness of our proposed approach.
For sentences with multiple syntactic parse trees, prosodic differences can be clearly perceived in the synthesized speech.
arXiv Detail & Related papers (2020-12-13T05:52:07Z) - Cross-lingual Word Sense Disambiguation using mBERT Embeddings with
Syntactic Dependencies [0.0]
Cross-lingual word sense disambiguation (WSD) tackles the challenge of disambiguating ambiguous words across languages given context.
The BERT embedding model has proven effective at capturing contextual information about words.
This project investigates how syntactic information can be added to BERT embeddings to produce word embeddings that incorporate both semantics and syntax.
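As a deliberately generic illustration of folding dependency syntax into contextual embeddings, one simple option is to concatenate each token's (m)BERT vector with a one-hot encoding of its dependency relation. The tag set and combination below are assumptions for illustration, not necessarily the method used in the paper.

```python
# Minimal illustration: concatenate contextual token embeddings with one-hot
# dependency-relation features. Tag set and combination are assumptions.
import numpy as np

DEP_RELATIONS = ["nsubj", "obj", "amod", "det", "root"]  # assumed small tag set


def add_dependency_features(token_embeddings: np.ndarray, dep_labels: list) -> np.ndarray:
    """token_embeddings: (n_tokens, hidden_dim) contextual vectors from (m)BERT;
    dep_labels: dependency relation of each token, produced by a parser."""
    one_hot = np.zeros((len(dep_labels), len(DEP_RELATIONS)))
    for i, label in enumerate(dep_labels):
        if label in DEP_RELATIONS:
            one_hot[i, DEP_RELATIONS.index(label)] = 1.0
    # Syntax-incorporated embedding: contextual vector plus syntactic features
    return np.concatenate([token_embeddings, one_hot], axis=1)
```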
arXiv Detail & Related papers (2020-12-09T20:22:11Z) - Multilingual Irony Detection with Dependency Syntax and Neural Models [61.32653485523036]
This work focuses on the contribution of syntactic knowledge, exploiting linguistic resources in which syntax is annotated according to the Universal Dependencies scheme.
The results suggest that fine-grained dependency-based syntactic information is informative for the detection of irony.
arXiv Detail & Related papers (2020-11-11T11:22:05Z) - A Comparative Study on Structural and Semantic Properties of Sentence
Embeddings [77.34726150561087]
We propose a set of experiments using a widely-used large-scale data set for relation extraction.
We show that different embedding spaces capture structural and semantic properties to different degrees.
These results provide useful information for developing embedding-based relation extraction methods.
arXiv Detail & Related papers (2020-09-23T15:45:32Z) - Syntactic Structure Distillation Pretraining For Bidirectional Encoders [49.483357228441434]
We introduce a knowledge distillation strategy for injecting syntactic biases into BERT pretraining.
We distill the approximate marginal distribution over words in context from the syntactic LM.
Our findings demonstrate the benefits of syntactic biases, even in representation learners that exploit large amounts of data.
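The distillation objective can be sketched as follows: BERT's masked-token predictions are pushed toward a teacher distribution over words in context obtained from the syntactic LM, alongside the usual masked-LM loss. The tensor shapes and the interpolation weight alpha below are assumptions, not the paper's exact formulation.

```python
# Sketch of a syntactic structure distillation loss: KL divergence between the
# student's masked-token distribution and the syntactic LM's marginal word
# distribution, mixed with the standard MLM loss. Shapes and alpha are assumed.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_probs, gold_ids, alpha=0.5):
    """student_logits: (n_masked, vocab) from BERT at masked positions;
    teacher_probs: (n_masked, vocab) marginal distribution from the syntactic LM;
    gold_ids: (n_masked,) true token ids."""
    log_p = F.log_softmax(student_logits, dim=-1)
    kd = F.kl_div(log_p, teacher_probs, reduction="batchmean")   # match teacher
    mlm = F.nll_loss(log_p, gold_ids)                            # usual MLM term
    return alpha * kd + (1.0 - alpha) * mlm
```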
arXiv Detail & Related papers (2020-05-27T16:44:01Z)