Unsupervised Semantic Variation Prediction using the Distribution of
Sibling Embeddings
- URL: http://arxiv.org/abs/2305.08654v1
- Date: Mon, 15 May 2023 13:58:21 GMT
- Authors: Taichi Aida, Danushka Bollegala
- Abstract summary: Detection of semantic variation of words is an important task for various NLP applications.
We argue that mean representations alone cannot accurately capture such semantic variations.
We propose a method that uses the entire cohort of the contextualised embeddings of the target word.
- Score: 17.803726860514193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Languages are dynamic entities, where the meanings associated with words
constantly change with time. Detecting the semantic variation of words is an
important task for various NLP applications that must make time-sensitive
predictions. Existing work on semantic variation prediction have predominantly
focused on comparing some form of an averaged contextualised representation of
a target word computed from a given corpus. However, some of the previously
associated meanings of a target word can become obsolete over time (e.g.
meaning of gay as happy), while novel usages of existing words are observed
(e.g. meaning of cell as a mobile phone). We argue that mean representations
alone cannot accurately capture such semantic variations and propose a method
that uses the entire cohort of the contextualised embeddings of the target
word, which we refer to as the sibling distribution. Experimental results on
the SemEval-2020 Task 1 benchmark dataset for semantic variation prediction show
that our method outperforms prior work that considers only the mean embeddings,
and is comparable to the current state-of-the-art. Moreover, a qualitative
analysis shows that our method detects important semantic changes in words that
are not captured by the existing methods. Source code is available at
https://github.com/a1da4/svp-gauss .
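The core idea of comparing entire sibling distributions rather than mean embeddings can be sketched as follows. This is a minimal illustration, not the authors' released implementation: it assumes each corpus yields a set of contextualised embeddings for the target word, fits a diagonal-covariance Gaussian to each set (the repository name svp-gauss suggests a Gaussian model, but the paper's exact parameterisation may differ), and scores change with a symmetrised KL divergence.

```python
import math

def fit_diag_gaussian(embeddings):
    """Mean and per-dimension variance of a set of sibling embeddings."""
    n = len(embeddings)
    dim = len(embeddings[0])
    mu = [sum(e[d] for e in embeddings) / n for d in range(dim)]
    var = [sum((e[d] - mu[d]) ** 2 for e in embeddings) / n + 1e-8
           for d in range(dim)]
    return mu, var

def gaussian_kl_diag(mu_p, var_p, mu_q, var_q):
    """KL divergence between two diagonal Gaussians, N(mu_p) || N(mu_q)."""
    kl = 0.0
    for mp, vp, mq, vq in zip(mu_p, var_p, mu_q, var_q):
        kl += 0.5 * (math.log(vq / vp) + (vp + (mp - mq) ** 2) / vq - 1.0)
    return kl

def semantic_variation_score(siblings_t1, siblings_t2):
    """Symmetrised KL between the sibling distributions from two corpora."""
    mu1, var1 = fit_diag_gaussian(siblings_t1)
    mu2, var2 = fit_diag_gaussian(siblings_t2)
    return 0.5 * (gaussian_kl_diag(mu1, var1, mu2, var2)
                  + gaussian_kl_diag(mu2, var2, mu1, var1))

# Toy example: the mean stays near 0 in both periods, but the spread changes,
# so a mean-only comparison would miss the variation while this score does not.
old_siblings = [[0.0], [0.1], [-0.1]]
new_siblings = [[2.0], [-2.0], [2.1], [-2.1]]
score = semantic_variation_score(old_siblings, new_siblings)
```

The toy data makes the abstract's argument concrete: both sibling sets have a mean of roughly zero, so any method comparing averaged representations would report no change, whereas the distributional score is large because the variances differ.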
Related papers
- Can Word Sense Distribution Detect Semantic Changes of Words? [35.17635565325166]
We show that word sense distributions can be accurately used to predict semantic changes of words in English, German, Swedish and Latin.
Our experimental results on the SemEval 2020 Task 1 dataset support this finding.
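The sense-distribution idea can be illustrated with a small sketch: estimate, for a target word, a probability distribution over its senses in each time period and compare the two. Jensen-Shannon divergence is used here as a hypothetical choice of comparison measure; the paper's actual measure and sense-tagging pipeline may differ.

```python
import math

def sense_distribution(sense_counts):
    """Normalise raw per-sense occurrence counts into a distribution."""
    total = sum(sense_counts)
    return [c / total for c in sense_counts]

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2, bounded in [0, 1])."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical counts for "cell" over three senses: [biology, prison, phone].
old = sense_distribution([70, 28, 2])   # phone sense rare in the old corpus
new = sense_distribution([40, 10, 50])  # phone sense dominant in the new one
change_score = js_divergence(old, new)
```

A score near 0 indicates a stable word; a larger score flags a shift in how the word's senses are used, such as the rise of the "mobile phone" sense of cell mentioned in the abstract above.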
arXiv Detail & Related papers (2023-10-16T13:41:27Z)
- Connect-the-Dots: Bridging Semantics between Words and Definitions via Aligning Word Sense Inventories [47.03271152494389]
Word Sense Disambiguation aims to automatically identify the exact meaning of one word according to its context.
Existing supervised models struggle to make correct predictions on rare word senses due to limited training data.
We propose a gloss alignment algorithm that can align definition sentences with the same meaning from different sense inventories to collect rich lexical knowledge.
arXiv Detail & Related papers (2021-10-27T00:04:33Z)
- Contextualized Semantic Distance between Highly Overlapped Texts [85.1541170468617]
Overlapping frequently occurs in paired texts in natural language processing tasks like text editing and semantic similarity evaluation.
This paper aims to address the issue with a mask-and-predict strategy.
We take the words in the longest common sequence as neighboring words and use masked language modeling (MLM) to predict the distributions on their positions.
Experiments on Semantic Textual Similarity show NDD to be more sensitive to various semantic differences, especially on highly overlapped paired texts.
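The first step of this mask-and-predict pipeline, identifying the shared "neighboring" words of two overlapping texts via their longest common (sub)sequence, can be sketched with standard dynamic programming. The subsequent MLM prediction step requires a pretrained masked language model and is omitted here.

```python
def longest_common_subsequence(a, b):
    """Longest common subsequence of two token lists (classic DP)."""
    n, m = len(a), len(b)
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    # Backtrack to recover the actual token sequence.
    out, i, j = [], n, m
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

neighbors = longest_common_subsequence(
    "the cat sat on the mat".split(),
    "the cat lay on a mat".split(),
)
```

In the NDD setting, these shared tokens anchor the comparison: each is masked in turn and the MLM's predicted distributions at the anchored positions are compared between the two texts.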
arXiv Detail & Related papers (2021-10-04T03:59:15Z)
- Grammatical Profiling for Semantic Change Detection [6.3596637237946725]
We use grammatical profiling as an alternative method for semantic change detection.
We demonstrate that it can be used for semantic change detection and even outperforms some distributional semantic methods.
arXiv Detail & Related papers (2021-09-21T18:38:18Z)
- EDS-MEMBED: Multi-sense embeddings based on enhanced distributional semantic structures via a graph walk over word senses [0.0]
We leverage the rich semantic structures in WordNet to enhance the quality of multi-sense embeddings.
We derive new distributional semantic similarity measures for M-SE from prior ones.
We report evaluation results on 11 benchmark datasets involving WSD and Word Similarity tasks.
arXiv Detail & Related papers (2021-02-27T14:36:55Z)
- Fake it Till You Make it: Self-Supervised Semantic Shifts for Monolingual Word Embedding Tasks [58.87961226278285]
We propose a self-supervised approach to model lexical semantic change.
We show that our method can be used for the detection of semantic change with any alignment method.
We illustrate the utility of our techniques using experimental results on three different datasets.
arXiv Detail & Related papers (2021-01-30T18:59:43Z)
- MASKER: Masked Keyword Regularization for Reliable Text Classification [73.90326322794803]
We propose a fine-tuning method, coined masked keyword regularization (MASKER), that facilitates context-based prediction.
MASKER regularizes the model to reconstruct the keywords from the rest of the words and make low-confidence predictions without enough context.
We demonstrate that MASKER improves OOD detection and cross-domain generalization without degrading classification accuracy.
arXiv Detail & Related papers (2020-12-17T04:54:16Z)
- SChME at SemEval-2020 Task 1: A Model Ensemble for Detecting Lexical Semantic Change [58.87961226278285]
This paper describes SChME, a method used in SemEval-2020 Task 1 on unsupervised detection of lexical semantic change.
SChME uses a model ensemble combining signals from distributional models (word embeddings) and word frequency models, where each model casts a vote indicating the probability that a word suffered semantic change according to that feature.
arXiv Detail & Related papers (2020-12-02T23:56:34Z)
- Semantic Relatedness for Keyword Disambiguation: Exploiting Different Embeddings [0.0]
We propose an approach to keyword disambiguation grounded in the semantic relatedness between words and senses provided by an external inventory (ontology) that is not known at training time.
Experimental results show that this approach achieves results comparable with the state of the art when applied for Word Sense Disambiguation (WSD) without training for a particular domain.
arXiv Detail & Related papers (2020-02-25T16:44:50Z)
- Lexical Sememe Prediction using Dictionary Definitions by Capturing Local Semantic Correspondence [94.79912471702782]
Sememes, defined as the minimum semantic units of human languages, have been proven useful in many NLP tasks.
We propose a Sememe Correspondence Pooling (SCorP) model, which is able to capture this kind of matching to predict sememes.
We evaluate our model and baseline methods on the well-known sememe KB HowNet and find that our model achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-01-16T17:30:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.