Using Distributional Thesaurus Embedding for Co-hyponymy Detection
- URL: http://arxiv.org/abs/2002.11506v1
- Date: Mon, 24 Feb 2020 20:11:35 GMT
- Title: Using Distributional Thesaurus Embedding for Co-hyponymy Detection
- Authors: Abhik Jana, Nikhil Reddy Varimalla and Pawan Goyal
- Abstract summary: We investigate whether the network embedding of a distributional thesaurus can be effectively utilized to detect co-hyponymy relations.
We show that the vector representation obtained by applying node2vec to a distributional thesaurus outperforms state-of-the-art models for binary classification of co-hyponymy vs. hypernymy.
- Score: 11.165092545013799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discriminating lexical relations among distributionally similar words has
always been a challenge for the natural language processing (NLP) community. In
this paper, we investigate whether the network embedding of a distributional
thesaurus can be effectively utilized to detect co-hyponymy relations. Through
extensive experiments over three benchmark datasets, we show that the vector
representation obtained by applying node2vec to a distributional thesaurus
outperforms state-of-the-art models for binary classification of co-hyponymy
vs. hypernymy, as well as co-hyponymy vs. meronymy, by large margins.
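The core of the approach is to treat the distributional thesaurus as a graph (words as nodes, edges linking distributionally similar words) and run node2vec's second-order biased random walks over it. Below is a minimal sketch of such a walk in plain Python; the toy graph, edge set, and parameter values are illustrative assumptions, not data from the paper.

```python
import random

# Toy distributional-thesaurus (DT) graph: each word is linked to its most
# distributionally similar words. Edges here are invented for illustration.
dt_graph = {
    "dog":    ["cat", "wolf", "animal"],
    "cat":    ["dog", "tiger", "animal"],
    "wolf":   ["dog", "tiger"],
    "tiger":  ["cat", "wolf"],
    "animal": ["dog", "cat"],
}

def node2vec_walk(graph, start, length, p=1.0, q=0.5, rng=random):
    """Second-order biased random walk as in node2vec (Grover & Leskovec).
    p controls the chance of stepping back to the previous node; q biases
    the walk towards (q < 1) or away from (q > 1) unexplored nodes."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        neighbors = graph[cur]
        if len(walk) == 1:
            walk.append(rng.choice(neighbors))
            continue
        prev = walk[-2]
        weights = []
        for nxt in neighbors:
            if nxt == prev:            # return to the previous node
                weights.append(1.0 / p)
            elif nxt in graph[prev]:   # stays at distance 1 from prev
                weights.append(1.0)
            else:                      # moves outward (distance 2)
                weights.append(1.0 / q)
        walk.append(rng.choices(neighbors, weights=weights, k=1)[0])
    return walk

walks = [node2vec_walk(dt_graph, w, length=5) for w in dt_graph for _ in range(10)]
```

In the full pipeline, these walks would be fed to a skip-gram model (e.g. gensim's `Word2Vec`) to produce the node embeddings, and pairs of embeddings would then train a binary classifier for co-hyponymy vs. hypernymy/meronymy.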
Related papers
- DefSent+: Improving sentence embeddings of language models by projecting definition sentences into a quasi-isotropic or isotropic vector space of unlimited dictionary entries [5.317095505067784]
This paper presents a significant improvement on the previous conference paper known as DefSent.
We propose a novel method that progressively builds entry embeddings free of those limitations.
As a result, definition sentences can be projected into a quasi-isotropic or isotropic vector space of unlimited dictionary entries.
arXiv Detail & Related papers (2024-05-25T09:43:38Z) - Prototype-based Embedding Network for Scene Graph Generation [105.97836135784794]
Current Scene Graph Generation (SGG) methods explore contextual information to predict relationships among entity pairs.
Due to the diverse visual appearance of numerous possible subject-object combinations, there is a large intra-class variation within each predicate category.
Prototype-based Embedding Network (PE-Net) models entities/predicates with prototype-aligned compact and distinctive representations.
Prototype-guided Learning (PL) is introduced to help PE-Net efficiently learn such entity-predicate matching, and Prototype Regularization (PR) is devised to relieve ambiguous entity-predicate matching.
arXiv Detail & Related papers (2023-03-13T13:30:59Z) - Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm to further explore the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z) - Lexical semantics enhanced neural word embeddings [4.040491121427623]
Hierarchy-fitting is a novel approach to modelling the semantic similarity nuances inherently stored in IS-A hierarchies.
Results demonstrate the efficacy of hierarchy-fitting in specialising neural embeddings with semantic relations in late fusion.
arXiv Detail & Related papers (2022-10-03T08:10:23Z) - Synonym Detection Using Syntactic Dependency And Neural Embeddings [3.0770051635103974]
We study the role of syntactic dependencies in deriving distributional semantics using the Vector Space Model.
We study the effectiveness of injecting human-compiled semantic knowledge into neural embeddings on computing distributional similarity.
Our results show that the syntactically conditioned contexts can interpret lexical semantics better than the unconditioned ones.
arXiv Detail & Related papers (2022-09-30T03:16:41Z) - Contextualized Semantic Distance between Highly Overlapped Texts [85.1541170468617]
Overlapping frequently occurs in paired texts in natural language processing tasks like text editing and semantic similarity evaluation.
This paper aims to address the issue with a mask-and-predict strategy.
We take the words in the longest common sequence as neighboring words and use masked language modeling (MLM) to predict the distributions at their positions.
Experiments on Semantic Textual Similarity show the resulting neighboring distribution divergence (NDD) to be more sensitive to various semantic differences, especially on highly overlapped paired texts.
arXiv Detail & Related papers (2021-10-04T03:59:15Z) - A Correspondence Variational Autoencoder for Unsupervised Acoustic Word Embeddings [50.524054820564395]
We propose a new unsupervised model for mapping a variable-duration speech segment to a fixed-dimensional representation.
The resulting acoustic word embeddings can form the basis of search, discovery, and indexing systems for low- and zero-resource languages.
arXiv Detail & Related papers (2020-12-03T19:24:42Z) - Data Augmentation for Hypernymy Detection [4.616703548353372]
We develop two novel data augmentation techniques to generate new training examples from existing ones.
First, we combine the linguistic principles of hypernym transitivity and intersective modifier-noun composition to generate additional pairs of vectors.
Second, we use generative adversarial networks (GANs) to generate pairs of vectors for which the hypernymy relation can be assumed.
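The first augmentation principle, hypernym transitivity, can be sketched at the pair level: if x IS-A y and y IS-A z hold in the training data, then x IS-A z may be assumed as a new example. The function and example pairs below are illustrative assumptions; the paper itself composes the corresponding word vectors rather than merely deriving word pairs.

```python
def augment_by_transitivity(pairs):
    """Given (hyponym, hypernym) training pairs, derive new pairs via
    transitivity: if x IS-A y and y IS-A z, then x IS-A z.
    Illustrative sketch only; names and data are hypothetical."""
    pair_set = set(pairs)
    hypernyms = {}
    for hypo, hyper in pairs:
        hypernyms.setdefault(hypo, set()).add(hyper)
    derived = set()
    for hypo, hyper in pairs:
        for grand in hypernyms.get(hyper, ()):
            if hypo != grand and (hypo, grand) not in pair_set:
                derived.add((hypo, grand))
    return derived

pairs = [("poodle", "dog"), ("dog", "animal"), ("cat", "animal")]
print(augment_by_transitivity(pairs))  # {('poodle', 'animal')}
```

A single transitive pass is shown; applying it repeatedly until a fixed point would recover longer IS-A chains as well.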
arXiv Detail & Related papers (2020-05-04T21:32:12Z) - Dense Embeddings Preserving the Semantic Relationships in WordNet [2.9443230571766854]
We provide a novel way to generate low dimensional vector embeddings for noun and verb synsets in WordNet.
We call this embedding the Sense Spectrum (plural: Sense Spectra).
To create suitable labels for training sense spectra, we designed a new similarity measurement for noun and verb synsets in WordNet.
arXiv Detail & Related papers (2020-04-22T21:09:47Z) - On the Replicability of Combining Word Embeddings and Retrieval Models [71.18271398274513]
We replicate recent experiments attempting to demonstrate an attractive hypothesis about the use of the Fisher kernel framework.
Specifically, the hypothesis was that the use of a mixture model of von Mises-Fisher (VMF) distributions would be beneficial because of the focus on cosine distances of both VMF and the vector space model.
arXiv Detail & Related papers (2020-01-13T19:01:07Z) - Multiplex Word Embeddings for Selectional Preference Acquisition [70.33531759861111]
We propose a multiplex word embedding model, which can be easily extended according to various relations among words.
Our model can effectively distinguish words with respect to different relations without introducing unnecessary sparseness.
arXiv Detail & Related papers (2020-01-09T04:47:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.