A Method for Studying Semantic Construal in Grammatical Constructions
with Interpretable Contextual Embedding Spaces
- URL: http://arxiv.org/abs/2305.18598v1
- Date: Mon, 29 May 2023 20:30:38 GMT
- Title: A Method for Studying Semantic Construal in Grammatical Constructions
with Interpretable Contextual Embedding Spaces
- Authors: Gabriella Chronis, Kyle Mahowald, Katrin Erk
- Abstract summary: We study semantic construal in grammatical constructions using large language models.
We show that a word in subject position is interpreted as more agentive than the very same word in object position.
Our method can probe the distributional meaning of syntactic constructions at a templatic level, abstracted away from specific lexemes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study semantic construal in grammatical constructions using large language
models. First, we project contextual word embeddings into three interpretable
semantic spaces, each defined by a different set of psycholinguistic feature
norms. We validate these interpretable spaces and then use them to
automatically derive semantic characterizations of lexical items in two
grammatical constructions: nouns in subject or object position within the same
sentence, and the AANN construction (e.g., "a beautiful three days"). We show
that a word in subject position is interpreted as more agentive than the very
same word in object position, and that the nouns in the AANN construction are
interpreted as more measurement-like than when in the canonical alternation.
Our method can probe the distributional meaning of syntactic constructions at a
templatic level, abstracted away from specific lexemes.
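
The abstract describes a two-step procedure: fit a mapping from contextual word embeddings into a space defined by psycholinguistic feature norms, then read off the projected features for the same word in different constructional slots. The sketch below illustrates that idea only; the encoder name (`bert-base-uncased`), the toy norm dictionary, the example sentences, and the use of a ridge regression as the mapping are illustrative assumptions, not the paper's actual data, norms, or released code.

```python
# Minimal sketch of the projection-and-probe idea, under stated assumptions:
# "bert-base-uncased" as the contextual encoder, a tiny hypothetical norm
# dictionary in place of a real psycholinguistic norm set, and ridge
# regression as the embedding-to-feature mapping. Not the authors' code.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc_model = AutoModel.from_pretrained("bert-base-uncased")

def word_vec(sentence: str, word: str) -> np.ndarray:
    """Mean-pool the contextual vectors of `word`'s subword tokens."""
    batch = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = enc_model(**batch).last_hidden_state[0]      # (seq_len, dim)
    piece_ids = tok(word, add_special_tokens=False)["input_ids"]
    ids = batch["input_ids"][0].tolist()
    start = next(i for i in range(len(ids))
                 if ids[i:i + len(piece_ids)] == piece_ids)
    return hidden[start:start + len(piece_ids)].mean(dim=0).numpy()

# Hypothetical miniature "norms": word -> [agentive-like, concrete-like] ratings.
norms = {"teacher": [0.9, 0.8], "dog": [0.7, 0.9],
         "rock": [0.1, 1.0], "idea": [0.3, 0.1]}
X = np.stack([word_vec(f"The {w} is here.", w) for w in norms])
Y = np.array(list(norms.values()))

# Linear map from embedding space into the interpretable feature space.
projector = Ridge(alpha=1.0).fit(X, Y)

# Construal probe: the same noun in subject vs. object position.
subj = projector.predict(word_vec("The lawyer praised the witness.", "lawyer")[None])
obj = projector.predict(word_vec("The witness praised the lawyer.", "lawyer")[None])
print("agentive-like score (subject vs. object):", float(subj[0, 0]), float(obj[0, 0]))
```

In this toy setup, a higher first coordinate for the subject-position occurrence than for the object-position one would mirror the paper's reported finding that subjecthood construes a noun as more agentive; the AANN comparison would work the same way with measurement-related feature dimensions.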
Related papers
- Evaluating Contextualized Representations of (Spanish) Ambiguous Words: A New Lexical Resource and Empirical Analysis [2.2530496464901106]
We evaluate semantic representations of Spanish ambiguous nouns in context in a suite of Spanish-language monolingual and multilingual BERT-based models.
We find that various BERT-based LMs' contextualized semantic representations capture some variance in human judgments but fall short of the human benchmark.
arXiv Detail & Related papers (2024-06-20T18:58:11Z)
- Unsupervised Mapping of Arguments of Deverbal Nouns to Their Corresponding Verbal Labels [52.940886615390106]
Deverbal nouns are nominal forms of verbs, commonly used in written English texts to describe events or actions as well as their arguments.
The solutions that do exist for handling arguments of nominalized constructions are based on semantic annotation.
We propose to adopt a more syntactic approach, which maps the arguments of deverbal nouns to the corresponding verbal construction.
arXiv Detail & Related papers (2023-06-24T10:07:01Z)
- Domain-Specific Word Embeddings with Structure Prediction [3.057136788672694]
We present an empirical evaluation on New York Times articles and two English Wikipedia datasets with articles on science and philosophy.
Our method, called Word2Vec with Structure Prediction (W2VPred), provides better performance than baselines in terms of the general analogy tests.
As a use case in the field of Digital Humanities we demonstrate how to raise novel research questions for high literature from the German Text Archive.
arXiv Detail & Related papers (2022-10-06T12:45:48Z)
- A bilingual approach to specialised adjectives through word embeddings in the karstology domain [3.92181732547846]
We present an experiment in extracting adjectives which express a specific semantic relation using word embeddings.
The results of the experiment are then thoroughly analysed and categorised into groups of adjectives exhibiting formal or semantic similarity.
arXiv Detail & Related papers (2022-03-31T08:27:15Z)
- AUTOLEX: An Automatic Framework for Linguistic Exploration [93.89709486642666]
We propose an automatic framework that aims to ease linguists' discovery and extraction of concise descriptions of linguistic phenomena.
Specifically, we apply this framework to extract descriptions for three phenomena: morphological agreement, case marking, and word order.
We evaluate the descriptions with the help of language experts and propose a method for automated evaluation when human evaluation is infeasible.
arXiv Detail & Related papers (2022-03-25T20:37:30Z)
- Understanding Synonymous Referring Expressions via Contrastive Features [105.36814858748285]
We develop an end-to-end trainable framework to learn contrastive features on the image and object instance levels.
We conduct extensive experiments to evaluate the proposed algorithm on several benchmark datasets.
arXiv Detail & Related papers (2021-04-20T17:56:24Z)
- Fake it Till You Make it: Self-Supervised Semantic Shifts for Monolingual Word Embedding Tasks [58.87961226278285]
We propose a self-supervised approach to model lexical semantic change.
We show that our method can be used for the detection of semantic change with any alignment method.
We illustrate the utility of our techniques using experimental results on three different datasets.
arXiv Detail & Related papers (2021-01-30T18:59:43Z)
- Unsupervised Distillation of Syntactic Information from Contextualized Word Representations [62.230491683411536]
We tackle the task of unsupervised disentanglement between semantics and structure in neural language representations.
To this end, we automatically generate groups of sentences which are structurally similar but semantically different.
We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.
arXiv Detail & Related papers (2020-10-11T15:13:18Z)
- A Comparative Study on Structural and Semantic Properties of Sentence Embeddings [77.34726150561087]
We propose a set of experiments using a widely-used large-scale data set for relation extraction.
We show that different embedding spaces have different degrees of strength for the structural and semantic properties.
These results provide useful information for developing embedding-based relation extraction methods.
arXiv Detail & Related papers (2020-09-23T15:45:32Z)
- Efficient Sentence Embedding via Semantic Subspace Analysis [33.44637608270928]
We develop a sentence representation scheme by analyzing semantic subspaces of constituent words.
Experimental results show that it offers comparable or better performance than the state-of-the-art.
arXiv Detail & Related papers (2020-02-22T04:12:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.