Distributed neural encoding of binding to thematic roles
- URL: http://arxiv.org/abs/2110.12342v1
- Date: Sun, 24 Oct 2021 03:26:30 GMT
- Title: Distributed neural encoding of binding to thematic roles
- Authors: Matthias Lalisse, Paul Smolensky
- Abstract summary: A framework and method are proposed for the study of constituent composition in fMRI.
The method produces estimates of neural patterns encoding complex linguistic structures.
- Score: 7.698389510704214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A framework and method are proposed for the study of constituent composition
in fMRI. The method produces estimates of neural patterns encoding complex
linguistic structures, under the assumption that the contributions of
individual constituents are additive. Like standard techniques for modeling
compositional structure in fMRI, the proposed method employs pattern
superposition to synthesize complex structures from their parts. Unlike these
techniques, superpositions are sensitive to the structural positions of
constituents, making them irreducible to structure-indiscriminate
("bag-of-words") models of composition. Reanalyzing data from a study by
Frankland and Greene (2015), it is shown that comparison of neural predictive
models with differing specifications can illuminate aspects of neural
representational contents that are not apparent when composition is not
modelled. The results indicate that the neural instantiations of the binding of
fillers to thematic roles in a sentence are non-orthogonal, and therefore
spatially overlapping.
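The abstract's core contrast, role-sensitive superposition versus a "bag-of-words" sum, can be illustrated with a minimal sketch. The snippet below is not the paper's method; it is a hypothetical tensor-product binding example (in the spirit of Smolensky's filler/role framework) with made-up random filler and role vectors, showing that an additive superposition of role-bound constituents distinguishes sentences that a structure-indiscriminate sum cannot.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # illustrative embedding dimension

# Hypothetical filler (word) and thematic-role vectors.
fillers = {w: rng.standard_normal(d) for w in ["dog", "cat", "chased"]}
roles = {r: rng.standard_normal(d) for r in ["agent", "verb", "patient"]}

def bind(filler, role):
    # Tensor-product binding of a filler to a role, flattened to one pattern.
    return np.outer(role, filler).ravel()

def encode(bindings):
    # Additive superposition of the role-bound constituent patterns.
    return sum(bind(fillers[w], roles[r]) for w, r in bindings)

# "dog chased cat" vs. "cat chased dog"
s1 = encode([("dog", "agent"), ("chased", "verb"), ("cat", "patient")])
s2 = encode([("cat", "agent"), ("chased", "verb"), ("dog", "patient")])

# Bag-of-words superposition ignores roles: both sentences collapse together.
bow1 = sum(fillers[w] for w in ["dog", "chased", "cat"])
bow2 = sum(fillers[w] for w in ["cat", "chased", "dog"])

print(np.allclose(bow1, bow2))  # True: bag-of-words cannot tell them apart
print(np.allclose(s1, s2))      # False: role-sensitive encoding can
```

Because the binding depends on which role each filler occupies, swapping agent and patient changes the superposed pattern, which is exactly the property that makes such encodings irreducible to bag-of-words models.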
Related papers
- Compositional Structures in Neural Embedding and Interaction Decompositions [101.40245125955306]
We describe a basic correspondence between linear algebraic structures within vector embeddings in artificial neural networks.
We introduce a characterization of compositional structures in terms of "interaction decompositions"
We establish necessary and sufficient conditions for the presence of such structures within the representations of a model.
arXiv Detail & Related papers (2024-07-12T02:39:50Z) - What makes Models Compositional? A Theoretical View: With Supplement [60.284698521569936]
We propose a general neuro-symbolic definition of compositional functions and their compositional complexity.
We show how various existing general and special purpose sequence processing models fit this definition and use it to analyze their compositional complexity.
arXiv Detail & Related papers (2024-05-02T20:10:27Z) - Bayesian Intrinsic Groupwise Image Registration: Unsupervised Disentanglement of Anatomy and Geometry [53.645443644821306]
This article presents a general Bayesian learning framework for groupwise registration on medical images.
We propose a novel hierarchical variational auto-encoding architecture to realize the inference procedure of the latent variables.
Experiments were conducted to validate the proposed framework, including four datasets from cardiac, brain and abdominal medical images.
arXiv Detail & Related papers (2024-01-04T08:46:39Z) - Recursive Neural Networks with Bottlenecks Diagnose (Non-)Compositionality [65.60002535580298]
Quantifying compositionality of data is a challenging task, which has been investigated primarily for short utterances.
We show that comparing data's representations in models with and without a bottleneck can be used to produce a compositionality metric.
The procedure is applied to the evaluation of arithmetic expressions using synthetic data, and sentiment classification using natural language data.
arXiv Detail & Related papers (2023-01-31T15:46:39Z) - Learning Disentangled Representations for Natural Language Definitions [0.0]
We argue that recurrent syntactic and semantic regularities in textual data can be used to provide the models with both structural biases and generative factors.
We leverage the semantic structures present in a representative and semantically dense category of sentence types, definitional sentences, for training a Variational Autoencoder to learn disentangled representations.
arXiv Detail & Related papers (2022-09-22T14:31:55Z) - Reproducing Kernels and New Approaches in Compositional Data Analysis [0.0]
Analyzing compositional data such as human gut microbiomes needs a careful treatment of the geometry of the data.
In this work, based on the key observation that compositional data are projective in nature, we re-interpret the compositional domain as the quotient topology of a sphere out by a group action.
This construction of RKHS for compositional data will widely open research avenues for future methodology developments.
arXiv Detail & Related papers (2022-05-02T18:46:23Z) - Probing for Constituency Structure in Neural Language Models [11.359403179089817]
We focus on constituent structure as represented in the Penn Treebank (PTB)
We find that four pretrained transformer LMs obtain high performance on our probing tasks.
We show that a complete constituency tree can be linearly separated from LM representations.
arXiv Detail & Related papers (2022-04-13T07:07:37Z) - A deep learning driven pseudospectral PCE based FFT homogenization algorithm for complex microstructures [68.8204255655161]
It is shown that the proposed method is able to predict central moments of interest while being magnitudes faster to evaluate than traditional approaches.
arXiv Detail & Related papers (2021-10-26T07:02:14Z) - Causal Abstractions of Neural Networks [9.291492712301569]
We propose a new structural analysis method grounded in a formal theory of causal abstraction.
We apply this method to analyze neural models trained on Multiply Quantified Natural Language Inference (MQNLI) corpus.
arXiv Detail & Related papers (2021-06-06T01:07:43Z) - Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.