A Compositional Typed Semantics for Universal Dependencies
- URL: http://arxiv.org/abs/2403.01187v1
- Date: Sat, 2 Mar 2024 11:58:24 GMT
- Title: A Compositional Typed Semantics for Universal Dependencies
- Authors: Laurestine Bradford, Timothy John O'Donnell, Siva Reddy
- Abstract summary: We introduce UD Type Calculus, a compositional, principled, and language-independent system of semantic types and logical forms for lexical items.
We explain the essential features of UD Type Calculus, which all involve giving dependency relations denotations just like those of words.
We present results on a large existing corpus of sentences and their logical forms, showing that UD-TC can produce meanings comparable with our baseline.
- Score: 26.65442947858347
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Languages may encode similar meanings using different sentence structures.
This makes it a challenge to provide a single set of formal rules that can
derive meanings from sentences in many languages at once. To overcome the
challenge, we can take advantage of language-general connections between
meaning and syntax, and build on cross-linguistically parallel syntactic
structures. We introduce UD Type Calculus, a compositional, principled, and
language-independent system of semantic types and logical forms for lexical
items which builds on a widely-used language-general dependency syntax
framework. We explain the essential features of UD Type Calculus, which all
involve giving dependency relations denotations just like those of words. These
allow UD-TC to derive correct meanings for sentences with a wide range of
syntactic structures by making use of dependency labels. Finally, we present
evaluation results on a large existing corpus of sentences and their logical
forms, showing that UD-TC can produce meanings comparable with our baseline.
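To make the core idea concrete, here is a minimal Python sketch of composition in the spirit of UD-TC: dependency relations, like words, receive denotations, and a labeled edge is interpreted by combining the head's and dependent's meanings through the relation's denotation. The toy lexicon, domain, and relation denotations below are illustrative assumptions, not the paper's actual calculus.

```python
# A minimal sketch of composing meanings along UD dependency edges.
# The toy lexicon and relation denotations are illustrative, not the
# paper's actual UD Type Calculus. Denotations are Python callables;
# a dependency relation also has a denotation: a function combining
# the head's and the dependent's meanings.

# Toy lexicon over a one-entity domain {"rex"}.
lexicon = {
    "dog":   lambda x: x == "rex",        # <e,t>: the property of being Rex
    "barks": lambda x: x in {"rex"},      # <e,t>: the set of barkers
    "every": lambda p: lambda q: all(q(x) for x in ["rex"] if p(x)),
}

# Denotations for dependency relations: each maps (head meaning,
# dependent meaning) to the meaning of the combined phrase.
relations = {
    # nsubj: a quantified subject takes the verb meaning as its scope.
    "nsubj": lambda head, dep: dep(head),
    # det: a determiner takes the noun meaning as its restrictor.
    "det":   lambda head, dep: dep(head),
}

def compose(rel, head_meaning, dep_meaning):
    """Combine two meanings along a labeled dependency edge."""
    return relations[rel](head_meaning, dep_meaning)

# "every dog barks": det(dog, every), then nsubj(barks, [every dog]).
every_dog = compose("det", lexicon["dog"], lexicon["every"])
sentence  = compose("nsubj", lexicon["barks"], every_dog)
print(sentence)  # True in this one-entity toy model
```

Because composition is driven entirely by the dependency label, the same rules apply to any language whose UD parse uses the same relations.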
Related papers
- UCxn: Typologically Informed Annotation of Constructions Atop Universal Dependencies [40.202120178465]
In Universal Dependencies annotation, grammatical constructions that convey meaning through a particular combination of several morphosyntactic elements are not labeled holistically.
We argue for augmenting UD annotations with a 'UCxn' annotation layer for such meaning-bearing grammatical constructions.
As a case study, we consider five construction families in ten languages, identifying instances of each construction in UD treebanks through the use of morphosyntactic patterns.
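A rough illustration of that pattern-matching step, assuming a simplified token format and a hand-written pattern for English existentials (one plausible construction family; the paper's actual queries are richer):

```python
# Illustrative sketch of detecting a construction instance in a UD
# parse with a morphosyntactic pattern, in the spirit of the UCxn
# approach. The token format and the pattern are simplifications.

# Each token: (id, form, upos, head, deprel)
sentence = [
    (1, "There", "PRON", 2, "expl"),
    (2, "is",    "VERB", 0, "root"),
    (3, "a",     "DET",  4, "det"),
    (4, "dog",   "NOUN", 2, "nsubj"),
]

def is_existential(tokens):
    """English existential: an expletive 'there' attached to a root
    verb that also has a nominal subject."""
    for tid, form, upos, head, deprel in tokens:
        if deprel == "expl" and form.lower() == "there":
            has_subj = any(t[3] == head and t[4] == "nsubj"
                           for t in tokens)
            if has_subj:
                return True
    return False

print(is_existential(sentence))  # True
```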
arXiv Detail & Related papers (2024-03-26T14:40:10Z)
- Dynamic Syntax Mapping: A New Approach to Unsupervised Syntax Parsing [0.0]
This study investigates the premise that language models, specifically their attention distributions, can encapsulate syntactic dependencies.
We introduce Dynamic Syntax Mapping (DSM), an innovative approach for the induction of these structures.
Our findings reveal that the use of an increasing array of substitutions notably enhances parsing precision on natural language data.
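The summary does not spell out DSM's procedure, but a common baseline for attention-based syntax induction is to symmetrize an attention matrix and extract a maximum spanning tree over it; the sketch below shows that baseline, not DSM's substitution-based method.

```python
# A hedged sketch of inducing an unlabeled parse from attention
# scores: symmetrize the attention matrix and take a maximum spanning
# tree over the words. Illustrates the general premise that attention
# distributions encode dependencies; not DSM's actual algorithm.
import numpy as np

words = ["the", "dog", "barks"]
attn = np.array([  # toy attention matrix: rows attend to columns
    [0.0, 0.8, 0.2],
    [0.6, 0.0, 0.4],
    [0.1, 0.9, 0.0],
])
scores = (attn + attn.T) / 2  # symmetrize

# Prim's algorithm, maximizing edge weight.
in_tree = {0}
edges = []
while len(in_tree) < len(words):
    best = max(((i, j) for i in in_tree for j in range(len(words))
                if j not in in_tree), key=lambda e: scores[e])
    edges.append(best)
    in_tree.add(best[1])

for i, j in edges:
    print(f"{words[i]} -- {words[j]} ({scores[i, j]:.2f})")
```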
arXiv Detail & Related papers (2023-12-18T10:34:29Z)
- Unsupervised Mapping of Arguments of Deverbal Nouns to Their Corresponding Verbal Labels [52.940886615390106]
Deverbal nouns are nouns derived from verbs; they are commonly used in written English texts to describe events or actions, as well as their arguments.
The solutions that do exist for handling arguments of nominalized constructions are based on semantic annotation.
We propose to adopt a more syntactic approach, which maps the arguments of deverbal nouns to the corresponding verbal construction.
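A minimal sketch of what such a syntactic mapping looks like, with a hand-written noun-to-verb lexicon and relabeling rules standing in for the paper's unsupervised mapping:

```python
# Relabel the UD dependents of a deverbal noun with the argument
# labels of the corresponding verb. The lexicon and rules here are
# illustrative assumptions, not the paper's learned mapping.

noun_to_verb = {"destruction": "destroy", "arrival": "arrive"}

# How nominal dependency labels tend to map onto verbal ones.
rule = {
    "nmod:of":   "obj",    # "destruction of the city" -> obj = city
    "nmod:poss": "nsubj",  # "the army's destruction"  -> nsubj = army
    "nmod:by":   "nsubj",  # "destruction by the army" -> nsubj = army
}

def map_arguments(noun, dependents):
    """dependents: list of (dependent word, nominal deprel)."""
    verb = noun_to_verb[noun]
    return verb, [(w, rule.get(r, r)) for w, r in dependents]

print(map_arguments("destruction",
                    [("city", "nmod:of"), ("army", "nmod:poss")]))
# ('destroy', [('city', 'obj'), ('army', 'nsubj')])
```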
arXiv Detail & Related papers (2023-06-24T10:07:01Z)
- Quantifying syntax similarity with a polynomial representation of dependency trees [4.1542266070946745]
We introduce a graph polynomial that distinguishes tree structures to represent dependency grammar.
The polynomial encodes accurate and comprehensive information about the dependency structure and the dependency relations of words in a sentence.
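As a rough illustration of encoding a tree as a polynomial, the sketch below computes a simple recursive bivariate invariant over a toy dependency tree; the paper's actual polynomial (and its handling of relation labels) is different.

```python
# A hedged sketch: represent a rooted dependency tree by a
# recursively defined bivariate polynomial. The recursion below is
# an illustrative invariant, not the paper's definition.
import sympy

x, y = sympy.symbols("x y")

def tree_poly(node, children_of):
    kids = children_of.get(node, [])
    if not kids:
        return x  # leaves contribute the base symbol
    prod = sympy.Integer(1)
    for c in kids:
        prod *= tree_poly(c, children_of)
    return y + prod

# "the dog barks loudly": barks -> {dog, loudly}, dog -> {the}
children = {"barks": ["dog", "loudly"], "dog": ["the"]}
print(sympy.expand(tree_poly("barks", children)))  # x**2 + x*y + y
```

Syntax similarity can then be quantified by comparing the coefficients of the polynomials of two trees.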
arXiv Detail & Related papers (2022-11-13T19:55:08Z)
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not well-represent natural language semantics.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- Multilingual Word Sense Disambiguation with Unified Sense Representation [55.3061179361177]
We propose building knowledge-based and supervised Multilingual Word Sense Disambiguation (MWSD) systems.
We build unified sense representations for multiple languages and address the annotation scarcity problem for MWSD by transferring annotations from rich-sourced languages to poorer ones.
Evaluations on the SemEval-13 and SemEval-15 datasets demonstrate the effectiveness of our methodology.
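A toy sketch of the unified-representation idea: if every sense has one language-independent vector, annotations from a rich-resource language place contexts from any language in the same space. The vectors and sense inventory below are made up for illustration, not the paper's trained representations.

```python
# Unified sense inventory: one vector per sense, shared across
# languages (toy hand-made space).
import numpy as np

sense_vecs = {
    "bank#finance": np.array([1.0, 0.1]),
    "bank#river":   np.array([0.1, 1.0]),
}

# Context vectors in two languages; supervision on English examples
# positions them in the same space as the sense vectors.
contexts = {
    ("en", "deposit money at the bank"):   np.array([0.9, 0.2]),
    ("es", "la orilla del banco del rio"): np.array([0.2, 0.8]),
}

def disambiguate(ctx_vec):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(sense_vecs, key=lambda s: cos(sense_vecs[s], ctx_vec))

for (lang, text), vec in contexts.items():
    print(lang, "->", disambiguate(vec))
# en -> bank#finance, es -> bank#river
```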
arXiv Detail & Related papers (2022-10-14T01:24:03Z)
- Cross-linguistically Consistent Semantic and Syntactic Annotation of Child-directed Speech [27.657676278734534]
This paper proposes a methodology for constructing such corpora of child-directed speech paired with sentential logical forms.
The approach enforces a cross-linguistically consistent representation, building on recent advances in dependency representation and semantic parsing.
arXiv Detail & Related papers (2021-09-22T18:17:06Z)
- Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? [87.20342701232869]
We investigate the abilities of ungrounded systems to acquire meaning.
We study whether assertions enable a system to emulate representations preserving semantic relations like equivalence.
We find that assertions enable semantic emulation if all expressions in the language are referentially transparent.
However, if the language uses non-transparent patterns like variable binding, we show that emulation can become an uncomputable problem.
arXiv Detail & Related papers (2021-04-22T01:00:17Z)
- GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and Event Extraction [107.8262586956778]
We introduce graph convolutional networks (GCNs) with universal dependency parses to learn language-agnostic sentence representations.
GCNs struggle to model words with long-range dependencies or words that are not directly connected in the dependency tree.
We propose to utilize the self-attention mechanism to learn the dependencies between words with different syntactic distances.
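One simple way to realize distance-aware self-attention is to penalize attention logits by pairwise syntactic distance, as in the hedged sketch below; GATE's exact mechanism differs.

```python
# Bias attention logits by syntactic distance in the dependency tree,
# so distant words can still interact while closer words are favored.
# A sketch of the general idea, not GATE's implementation.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n, d = 4, 8
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(n, d)), rng.normal(size=(n, d))

# Pairwise distances in a toy dependency tree over 4 words.
syn_dist = np.array([
    [0, 1, 2, 2],
    [1, 0, 1, 1],
    [2, 1, 0, 2],
    [2, 1, 2, 0],
])

logits = Q @ K.T / np.sqrt(d) - 0.5 * syn_dist  # distance penalty
print(softmax(logits).round(2))
```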
arXiv Detail & Related papers (2020-10-06T20:30:35Z)
- Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection [33.86322085911299]
Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages.
We describe version 2 of the guidelines (UD v2), discuss the major changes from UD v1 to UD v2, and give an overview of the currently available treebanks for 90 languages.
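UD treebanks are distributed in the CoNLL-U format; a minimal reader for the columns most analyses need looks like this (multiword-token and empty-node lines are skipped for brevity):

```python
# Read a CoNLL-U sentence: 10 tab-separated columns per token line;
# comment lines start with "#".
sample = """\
# text = The dog barks.
1\tThe\tthe\tDET\tDT\tDefinite=Def\t2\tdet\t_\t_
2\tdog\tdog\tNOUN\tNN\tNumber=Sing\t3\tnsubj\t_\t_
3\tbarks\tbark\tVERB\tVBZ\t_\t0\troot\t_\t_
4\t.\t.\tPUNCT\t.\t_\t3\tpunct\t_\t_
"""

def read_conllu(text):
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        cols = line.split("\t")
        tid = cols[0]
        if "-" in tid or "." in tid:  # multiword token / empty node
            continue
        # ID, FORM, UPOS, HEAD, DEPREL
        yield int(tid), cols[1], cols[3], int(cols[6]), cols[7]

for tid, form, upos, head, deprel in read_conllu(sample):
    print(tid, form, upos, head, deprel)
```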
arXiv Detail & Related papers (2020-04-22T15:38:18Z)
- A Benchmark for Systematic Generalization in Grounded Language Understanding [61.432407738682635]
Humans easily interpret expressions that describe unfamiliar situations composed from familiar parts.
Modern neural networks, by contrast, struggle to interpret novel compositions.
We introduce a new benchmark, gSCAN, for evaluating compositional generalization in situated language understanding.
arXiv Detail & Related papers (2020-03-11T08:40:15Z)