Hinting Semantic Parsing with Statistical Word Sense Disambiguation
- URL: http://arxiv.org/abs/2006.15942v2
- Date: Mon, 6 Jul 2020 11:53:15 GMT
- Title: Hinting Semantic Parsing with Statistical Word Sense Disambiguation
- Authors: Ritwik Bose, Siddharth Vashishtha and James Allen
- Abstract summary: We provide hints from a statistical WSD system to guide a logical semantic parser toward better semantic type assignments while maintaining the soundness of the resulting logical forms.
We observe an improvement of up to 10.5% in F-score; however, we find that this improvement comes at a cost to the structural integrity of the parse.
- Score: 6.4182601340261956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of Semantic Parsing can be approximated as a transformation of an
utterance into a logical form graph where edges represent semantic roles and
nodes represent word senses. The resulting representation should capture the
meaning of the utterance and be suitable for reasoning. Word senses and
semantic roles are interdependent, meaning errors in assigning word senses can
cause errors in assigning semantic roles and vice versa. While statistical
approaches to word sense disambiguation outperform logical, rule-based semantic
parsers for raw word sense assignment, these statistical word sense
disambiguation systems do not produce the rich role structure or detailed
semantic representation of the input. In this work, we provide hints from a
statistical WSD system to guide a logical semantic parser to produce better
semantic type assignments while maintaining the soundness of the resulting
logical forms. We observe an improvement of up to 10.5% in F-score; however,
we find that this improvement comes at a cost to the structural integrity of
the parse.
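The core idea of the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the function, scores, and the `alpha` weighting are hypothetical, and the synset-style names are only illustrative. It shows one plausible way a logical parser's candidate sense assignments could be re-ranked using probabilities from a statistical WSD system:

```python
# Hypothetical sketch (not the paper's method): re-rank a logical parser's
# candidate word-sense assignments for one word using scores from a
# statistical WSD system, so the parser keeps its role structure while
# preferring senses the statistical model finds likely.

def rerank_senses(parser_candidates, wsd_scores, alpha=0.5):
    """Combine the parser's preference with the statistical WSD hint.

    parser_candidates: list of (sense, parser_score) pairs for one word
    wsd_scores: dict mapping sense -> probability from the WSD system
    alpha: assumed hyperparameter weighting the statistical hint
    """
    ranked = sorted(
        parser_candidates,
        key=lambda c: (1 - alpha) * c[1] + alpha * wsd_scores.get(c[0], 0.0),
        reverse=True,
    )
    return ranked[0][0]  # best sense under the combined score

# Illustrative example: the parser slightly prefers the "river bank" sense,
# but a strong statistical hint flips the decision to the financial sense.
candidates = [("bank.n.01", 0.6), ("bank.n.02", 0.4)]
hints = {"bank.n.01": 0.1, "bank.n.02": 0.9}
best = rerank_senses(candidates, hints, alpha=0.7)
```

With `alpha=0.7`, the combined scores are 0.25 for `bank.n.01` and 0.75 for `bank.n.02`, so the hint overrides the parser's mild preference.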
Related papers
- Unsupervised Mapping of Arguments of Deverbal Nouns to Their
Corresponding Verbal Labels [52.940886615390106]
Deverbal nouns are nouns derived from verbs, commonly used in written English texts to describe events or actions, as well as their arguments.
The solutions that do exist for handling arguments of nominalized constructions are based on semantic annotation.
We propose to adopt a more syntactic approach, which maps the arguments of deverbal nouns to the corresponding verbal construction.
arXiv Detail & Related papers (2023-06-24T10:07:01Z) - Interpretable Word Sense Representations via Definition Generation: The
Case of Semantic Change Analysis [3.515619810213763]
We propose using automatically generated natural language definitions of contextualised word usages as interpretable word and word sense representations.
We demonstrate how the resulting sense labels can make existing approaches to semantic change analysis more interpretable.
arXiv Detail & Related papers (2023-05-19T20:36:21Z) - Semantic-aware Contrastive Learning for More Accurate Semantic Parsing [32.74456368167872]
We propose a semantic-aware contrastive learning algorithm, which can learn to distinguish fine-grained meaning representations.
Experiments on two standard datasets show that our approach achieves significant improvements over MLE baselines.
arXiv Detail & Related papers (2023-01-19T07:04:32Z) - Lost in Context? On the Sense-wise Variance of Contextualized Word
Embeddings [11.475144702935568]
We quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models.
We find that word representations are position-biased, where the first words in different contexts tend to be more similar.
arXiv Detail & Related papers (2022-08-20T12:27:25Z) - Graph Adaptive Semantic Transfer for Cross-domain Sentiment
Classification [68.06496970320595]
Cross-domain sentiment classification (CDSC) aims to use the transferable semantics learned from the source domain to predict the sentiment of reviews in the unlabeled target domain.
We present the Graph Adaptive Semantic Transfer (GAST) model, an adaptive syntactic graph embedding method that learns domain-invariant semantics from both word sequences and syntactic graphs.
arXiv Detail & Related papers (2022-05-18T07:47:01Z) - Large Scale Substitution-based Word Sense Induction [48.49573297876054]
We present a word-sense induction method based on pre-trained masked language models (MLMs), which can cheaply scale to large vocabularies and large corpora.
The result is a corpus which is sense-tagged according to a corpus-derived sense inventory and where each sense is associated with indicative words.
Evaluation on English Wikipedia that was sense-tagged using our method shows that both the induced senses, and the per-instance sense assignment, are of high quality even compared to WSD methods, such as Babelfy.
arXiv Detail & Related papers (2021-10-14T19:40:37Z) - Contextualized Semantic Distance between Highly Overlapped Texts [85.1541170468617]
Overlapping frequently occurs in paired texts in natural language processing tasks like text editing and semantic similarity evaluation.
This paper aims to address the issue with a mask-and-predict strategy.
We take the words in the longest common sequence as neighboring words and use masked language modeling (MLM) to predict the distributions on their positions.
Experiments on Semantic Textual Similarity show the resulting measure, Neighboring Distribution Divergence (NDD), to be more sensitive to various semantic differences, especially on highly overlapped paired texts.
arXiv Detail & Related papers (2021-10-04T03:59:15Z) - An MRC Framework for Semantic Role Labeling [21.140775452024894]
We propose to use the machine reading comprehension framework to bridge the gap between predicate disambiguation and argument labeling.
We leverage both the predicate semantics and the semantic role semantics for argument labeling.
Experiments show that the proposed framework achieves state-of-the-art results on both span and dependency benchmarks.
arXiv Detail & Related papers (2021-09-14T13:04:08Z) - EDS-MEMBED: Multi-sense embeddings based on enhanced distributional
semantic structures via a graph walk over word senses [0.0]
We leverage the rich semantic structures in WordNet to enhance the quality of multi-sense embeddings.
We derive new distributional semantic similarity measures for M-SE from prior ones.
We report evaluation results on 11 benchmark datasets involving WSD and Word Similarity tasks.
arXiv Detail & Related papers (2021-02-27T14:36:55Z) - Unsupervised Distillation of Syntactic Information from Contextualized
Word Representations [62.230491683411536]
We tackle the task of unsupervised disentanglement between semantics and structure in neural language representations.
To this end, we automatically generate groups of sentences which are structurally similar but semantically different.
We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.
arXiv Detail & Related papers (2020-10-11T15:13:18Z) - Unsupervised Transfer of Semantic Role Models from Verbal to Nominal
Domain [65.04669567781634]
We investigate a transfer scenario where we assume role-annotated data for the source verbal domain but only unlabeled data for the target nominal domain.
Our key assumption, enabling the transfer between the two domains, is that selectional preferences of a role do not strongly depend on whether the relation is triggered by a verb or a noun.
The method substantially outperforms baselines, such as unsupervised and direct-transfer methods, on the English CoNLL-2009 dataset.
arXiv Detail & Related papers (2020-05-01T09:20:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.