Semantic Relatedness for Keyword Disambiguation: Exploiting Different
Embeddings
- URL: http://arxiv.org/abs/2002.11023v1
- Date: Tue, 25 Feb 2020 16:44:50 GMT
- Title: Semantic Relatedness for Keyword Disambiguation: Exploiting Different
Embeddings
- Authors: María G. Buey and Carlos Bobed and Jorge Gracia and Eduardo Mena
- Abstract summary: We propose an approach to keyword disambiguation that is grounded in a measure of semantic relatedness between words and the senses provided by an external inventory (ontology) that is not known at training time.
Experimental results show that this approach achieves results comparable with the state of the art when applied for Word Sense Disambiguation (WSD) without training for a particular domain.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the meaning of words is crucial for many tasks that involve
human-machine interaction. This has been tackled by research in Word Sense
Disambiguation (WSD) in the Natural Language Processing (NLP) field. Recently,
WSD and many other NLP tasks have taken advantage of embeddings-based
representation of words, sentences, and documents. However, when it comes to
WSD, most embeddings models suffer from ambiguity as they do not capture the
different possible meanings of the words. Even when they do, the list of
possible meanings for a word (sense inventory) has to be known in advance at
training time to be included in the embeddings space. Unfortunately, there are
situations in which such a sense inventory is not known in advance (e.g., an
ontology selected at run-time), or it evolves with time and its status diverges
from the one at training time. This hampers the use of embeddings models for
WSD. Furthermore, traditional WSD techniques do not perform well in situations
in which the available linguistic information is very scarce, such as the case
of keyword-based queries. In this paper, we propose an approach to keyword
disambiguation that is grounded in a measure of semantic relatedness between words and the senses
provided by an external inventory (ontology) that is not known at training
time. Building on previous work, we present a semantic relatedness measure
that uses word embeddings, and explore different disambiguation algorithms to
also exploit both word and sentence representations. Experimental results show
that this approach achieves results comparable with the state of the art when
applied for WSD, without training for a particular domain.
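The core recipe lends itself to a short sketch. The snippet below is a minimal illustration under assumed names (`Sense`, `disambiguate_keyword`, averaged bag-of-words vectors), not the authors' implementation: each candidate sense supplied by the run-time ontology is described by its label and gloss, embedded with pre-trained word vectors, and scored by cosine relatedness against the keyword query.

```python
from dataclasses import dataclass
from typing import Dict, List

import numpy as np


@dataclass
class Sense:
    uri: str          # identifier of the sense in the external ontology
    description: str  # label plus gloss, e.g. "bank: a financial institution ..."


def embed_text(text: str, vectors: Dict[str, np.ndarray]) -> np.ndarray:
    """Bag-of-words representation: average of the known pre-trained word vectors."""
    known = [vectors[w] for w in text.lower().split() if w in vectors]
    if not known:
        return np.zeros_like(next(iter(vectors.values())))
    return np.mean(known, axis=0)


def relatedness(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity as a simple proxy for semantic relatedness."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0


def disambiguate_keyword(keyword: str,
                         other_keywords: List[str],
                         candidate_senses: List[Sense],
                         vectors: Dict[str, np.ndarray]) -> Sense:
    """Pick the ontology sense most related to the whole keyword query."""
    query_vec = embed_text(" ".join([keyword] + other_keywords), vectors)
    return max(candidate_senses,
               key=lambda s: relatedness(query_vec, embed_text(s.description, vectors)))
```

The paper's actual measure also exploits sentence-level representations alongside word embeddings and explores several disambiguation algorithms on top of the relatedness scores; the sketch only shows the word-vector case.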
Related papers
- Improving Language Models Meaning Understanding and Consistency by
Learning Conceptual Roles from Dictionary [65.268245109828]
Non-human-like behaviour of contemporary pre-trained language models (PLMs) is a major factor undermining their trustworthiness.
A striking phenomenon is the generation of inconsistent predictions, which produces contradictory results.
We propose a practical approach that alleviates the inconsistent behaviour issue by improving the PLMs' awareness of conceptual roles learned from dictionary definitions.
arXiv Detail & Related papers (2023-10-24T06:15:15Z)
- Unsupervised Semantic Variation Prediction using the Distribution of Sibling Embeddings [17.803726860514193]
Detection of semantic variation of words is an important task for various NLP applications.
We argue that mean representations alone cannot accurately capture such semantic variations.
We propose a method that uses the entire cohort of the contextualised embeddings of the target word.
arXiv Detail & Related papers (2023-05-15T13:58:21Z)
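As a rough illustration of the point made in the summary above (the whole cohort of contextualised embeddings carries more information than its mean alone), one could compare a word's two cohorts distribution-wise. The diagonal-Gaussian fit and symmetric KL divergence below are assumptions for illustration, not the paper's measure.

```python
import numpy as np


def fit_diagonal_gaussian(cohort: np.ndarray, eps: float = 1e-6):
    """Fit mean and (diagonal) variance to a (n_occurrences, dim) embedding cohort."""
    return cohort.mean(axis=0), cohort.var(axis=0) + eps


def symmetric_kl(m1, v1, m2, v2) -> float:
    """Symmetric KL divergence between two diagonal Gaussians."""
    kl_12 = 0.5 * np.sum(np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)
    kl_21 = 0.5 * np.sum(np.log(v1 / v2) + (v2 + (m2 - m1) ** 2) / v1 - 1.0)
    return float(kl_12 + kl_21)


def semantic_variation(cohort_a: np.ndarray, cohort_b: np.ndarray) -> float:
    """Higher scores mean the word is used more differently across the two corpora."""
    return symmetric_kl(*fit_diagonal_gaussian(cohort_a), *fit_diagonal_gaussian(cohort_b))
```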
- Semantic Specialization for Knowledge-based Word Sense Disambiguation [12.573927420408365]
A promising approach for knowledge-based Word Sense Disambiguation (WSD) is to select the sense whose contextualized embeddings are closest to those computed for a target word in a given sentence.
We propose a semantic specialization for WSD where contextualized embeddings are adapted to the WSD task using solely lexical knowledge.
arXiv Detail & Related papers (2023-04-22T07:40:23Z)
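The knowledge-based selection step summarised above is easy to sketch with off-the-shelf tools. The snippet below embeds the target sentence and each candidate gloss with a generic pre-trained encoder and picks the closest sense; the model choice, the mean pooling over the whole sentence (rather than isolating the target token), and the gloss texts are illustrative assumptions, and the paper's actual contribution, specialising the embeddings with lexical knowledge, is not reproduced here.

```python
from typing import Dict

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")


def embed(text: str) -> torch.Tensor:
    """Mean-pooled last-layer representation of a piece of text."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)


def choose_sense(sentence: str, glosses: Dict[str, str]) -> str:
    """Return the candidate sense whose gloss embedding is closest to the sentence."""
    sentence_vec = embed(sentence)
    scores = {sense: torch.cosine_similarity(sentence_vec, embed(gloss), dim=0).item()
              for sense, gloss in glosses.items()}
    return max(scores, key=scores.get)


# Toy example with two WordNet-style glosses for "bank".
print(choose_sense(
    "I deposited the cheque at the bank",
    {"bank.n.01": "sloping land beside a body of water",
     "bank.n.02": "a financial institution that accepts deposits"}))
```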
- Word Sense Induction with Knowledge Distillation from BERT [6.88247391730482]
This paper proposes a method to distill multiple word senses from a pre-trained language model (BERT) by using attention over the senses of a word in context.
Experiments on the contextual word similarity and sense induction tasks show that this method is superior to or competitive with state-of-the-art multi-sense embeddings.
arXiv Detail & Related papers (2023-04-20T21:05:35Z)
- Connect-the-Dots: Bridging Semantics between Words and Definitions via Aligning Word Sense Inventories [47.03271152494389]
Word Sense Disambiguation aims to automatically identify the exact meaning of a word according to its context.
Existing supervised models struggle to make correct predictions on rare word senses due to limited training data.
We propose a gloss alignment algorithm that can align definition sentences with the same meaning from different sense inventories to collect rich lexical knowledge.
arXiv Detail & Related papers (2021-10-27T00:04:33Z)
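To make the gloss-alignment idea above concrete, the sketch below greedily pairs definition sentences from two inventories whose sentence embeddings exceed a cosine-similarity threshold; the greedy matching scheme and the threshold are assumptions for illustration, not the paper's alignment algorithm.

```python
from typing import List, Tuple

import numpy as np


def cosine_matrix(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarities between the rows of a (n, d) and b (m, d)."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T


def align_glosses(emb_a: np.ndarray, emb_b: np.ndarray,
                  threshold: float = 0.85) -> List[Tuple[int, int]]:
    """Greedily match each gloss of inventory A to at most one gloss of inventory B."""
    sims = cosine_matrix(emb_a, emb_b)
    order = np.unravel_index(np.argsort(-sims, axis=None), sims.shape)
    pairs, used_a, used_b = [], set(), set()
    for i, j in zip(*order):  # visit candidate pairs from most to least similar
        if sims[i, j] < threshold:
            break
        if i not in used_a and j not in used_b:
            pairs.append((int(i), int(j)))
            used_a.add(i)
            used_b.add(j)
    return pairs
```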
- Contextualized Semantic Distance between Highly Overlapped Texts [85.1541170468617]
Overlapping frequently occurs in paired texts in natural language processing tasks like text editing and semantic similarity evaluation.
This paper aims to address the issue with a mask-and-predict strategy.
We take the words in the longest common sequence as neighboring words and use masked language modeling (MLM) to predict the distributions on their positions.
Experiments on Semantic Textual Similarity show the proposed neighboring distribution divergence (NDD) to be more sensitive to various semantic differences, especially on highly overlapped paired texts.
arXiv Detail & Related papers (2021-10-04T03:59:15Z)
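The mask-and-predict strategy described above can be approximated in a few lines: mask a shared word in both texts, compare the masked-language-model distributions predicted at its position, and accumulate the divergence. The sketch below uses every shared word rather than the longest common sequence and plain KL divergence, which are simplifications of the paper's NDD.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")


def masked_distribution(words, position):
    """Vocabulary distribution the MLM predicts after masking the word at `position`."""
    masked = list(words)
    masked[position] = tokenizer.mask_token
    inputs = tokenizer(" ".join(masked), return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_pos]
    return torch.softmax(logits, dim=-1)


def neighboring_distribution_distance(text_a: str, text_b: str) -> float:
    """Accumulate KL divergence over the positions of words shared by both texts."""
    words_a, words_b = text_a.split(), text_b.split()
    distance = 0.0
    for word in set(words_a) & set(words_b):
        p = masked_distribution(words_a, words_a.index(word))
        q = masked_distribution(words_b, words_b.index(word))
        distance += torch.sum(p * (torch.log(p) - torch.log(q))).item()
    return distance
```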
- Meta-Learning with Variational Semantic Memory for Word Sense Disambiguation [56.830395467247016]
We propose a model of semantic memory for WSD in a meta-learning setting.
Our model is based on hierarchical variational inference and incorporates an adaptive memory update rule via a hypernetwork.
We show that our model advances the state of the art in few-shot WSD and supports effective learning in extremely data-scarce scenarios.
arXiv Detail & Related papers (2021-06-05T20:40:01Z)
- EDS-MEMBED: Multi-sense embeddings based on enhanced distributional semantic structures via a graph walk over word senses [0.0]
We leverage the rich semantic structures in WordNet to enhance the quality of multi-sense embeddings (M-SE).
We derive new distributional semantic similarity measures for M-SE from prior ones.
We report evaluation results on 11 benchmark datasets involving WSD and Word Similarity tasks.
arXiv Detail & Related papers (2021-02-27T14:36:55Z)
- Fake it Till You Make it: Self-Supervised Semantic Shifts for Monolingual Word Embedding Tasks [58.87961226278285]
We propose a self-supervised approach to model lexical semantic change.
We show that our method can be used for the detection of semantic change with any alignment method.
We illustrate the utility of our techniques using experimental results on three different datasets.
arXiv Detail & Related papers (2021-01-30T18:59:43Z)
- Word Sense Disambiguation for 158 Languages using Word Embeddings Only [80.79437083582643]
Disambiguation of word senses in context is easy for humans, but a major challenge for automatic approaches.
We present a method that takes as input a standard pre-trained word embedding model and induces a fully-fledged word sense inventory.
We use this method to induce a collection of sense inventories for 158 languages on the basis of the original pre-trained fastText word embeddings.
arXiv Detail & Related papers (2020-03-14T14:50:04Z)
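The induction step summarised above can be sketched as follows: cluster the nearest neighbours of a word in a pre-trained embedding space, treat each cluster as an induced sense, and assign a context to the closest cluster centroid. The paper builds graph-clustered ego-networks over fastText vectors; the k-means clustering and plain cosine matching below are simplifications for illustration.

```python
from typing import Dict, List

import numpy as np
from sklearn.cluster import KMeans


def nearest_neighbours(word: str, vectors: Dict[str, np.ndarray], k: int = 50) -> List[str]:
    """Top-k words closest to `word` in the embedding space (cosine similarity)."""
    target = vectors[word]
    sims = {w: float(v @ target / (np.linalg.norm(v) * np.linalg.norm(target)))
            for w, v in vectors.items() if w != word}
    return sorted(sims, key=sims.get, reverse=True)[:k]


def induce_senses(word: str, vectors: Dict[str, np.ndarray],
                  n_senses: int = 2, k: int = 50) -> np.ndarray:
    """Cluster the word's neighbourhood; each centroid stands for one induced sense."""
    neighbourhood = np.stack([vectors[w] for w in nearest_neighbours(word, vectors, k)])
    return KMeans(n_clusters=n_senses, n_init=10, random_state=0).fit(neighbourhood).cluster_centers_


def disambiguate(context_words: List[str], sense_centroids: np.ndarray,
                 vectors: Dict[str, np.ndarray]) -> int:
    """Index of the induced sense whose centroid is closest to the averaged context."""
    context = np.mean([vectors[w] for w in context_words if w in vectors], axis=0)
    sims = sense_centroids @ context / (
        np.linalg.norm(sense_centroids, axis=1) * np.linalg.norm(context))
    return int(np.argmax(sims))
```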