Exploring the Combination of Contextual Word Embeddings and Knowledge
Graph Embeddings
- URL: http://arxiv.org/abs/2004.08371v1
- Date: Fri, 17 Apr 2020 17:49:45 GMT
- Title: Exploring the Combination of Contextual Word Embeddings and Knowledge
Graph Embeddings
- Authors: Lea Dieudonat, Kelvin Han, Phyllicia Leavitt, Esteban Marquer
- Abstract summary: Embeddings of knowledge bases (KB) capture the explicit relations between entities denoted by words, but are not able to directly capture the syntagmatic properties of these words.
We propose a new approach using contextual and KB embeddings jointly at the same level.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: ``Classical'' word embeddings, such as Word2Vec, have been shown to capture
the semantics of words based on their distributional properties. However, their
ability to represent the different meanings that a word may have is limited.
Such approaches also do not explicitly encode relations between entities, as
denoted by words. Embeddings of knowledge bases (KB) capture the explicit
relations between entities denoted by words, but are not able to directly
capture the syntagmatic properties of these words. To our knowledge, recent
research has focused on representation learning that augments the strengths
of one with the other. In this work, we begin exploring another approach that
uses contextual and KB embeddings jointly at the same level, and propose two
tasks -- an entity typing and a relation typing task -- that evaluate the
performance of contextual and KB embeddings. We also evaluate a concatenated
model of contextual and KB embeddings on these two tasks, and obtain
conclusive results on the first task. We hope our work can serve as a basis
for models and datasets that develop this approach further.
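As a concrete illustration of the concatenated model evaluated in the abstract, the sketch below joins a contextual mention embedding (e.g., from a BERT-like encoder) with a KB entity embedding (e.g., a TransE vector) and trains a linear probe for entity typing. The dimensions, class, and variable names are illustrative assumptions, and random tensors stand in for real encoder outputs; the paper does not prescribe this exact setup.

```python
import torch
import torch.nn as nn

# Minimal sketch of the concatenation baseline: the contextual embedding of an
# entity mention and the KB embedding of the linked entity are concatenated
# and fed to a linear probe that predicts the entity type. Dimensions and
# data are illustrative stand-ins, not the paper's actual configuration.

CTX_DIM, KB_DIM, N_TYPES = 768, 200, 10

class ConcatEntityTyper(nn.Module):
    def __init__(self, ctx_dim: int, kb_dim: int, n_types: int):
        super().__init__()
        # A single linear layer keeps the probe weak, so accuracy mostly
        # reflects the quality of the embeddings, not the classifier.
        self.classifier = nn.Linear(ctx_dim + kb_dim, n_types)

    def forward(self, ctx_emb: torch.Tensor, kb_emb: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([ctx_emb, kb_emb], dim=-1)  # joint representation
        return self.classifier(joint)                 # entity-type logits

# Random stand-ins for a batch of 32 mentions (real BERT / TransE outputs
# would be substituted here).
ctx = torch.randn(32, CTX_DIM)
kb = torch.randn(32, KB_DIM)
labels = torch.randint(0, N_TYPES, (32,))

model = ConcatEntityTyper(CTX_DIM, KB_DIM, N_TYPES)
loss = nn.functional.cross_entropy(model(ctx, kb), labels)
loss.backward()  # an optimizer step would follow in a real training loop
```

Keeping the probe linear is a common design choice in embedding evaluations: a stronger classifier on top could mask differences between the input representations.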
Related papers
- How Well Do Text Embedding Models Understand Syntax? [50.440590035493074]
The ability of text embedding models to generalize across a wide range of syntactic contexts remains under-explored.
Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges.
We propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios.
arXiv Detail & Related papers (2023-11-14T08:51:00Z) - Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm for further uncovering the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z) - Multi-grained Label Refinement Network with Dependency Structures for
Joint Intent Detection and Slot Filling [13.963083174197164]
The intent and semantic components of an utterance depend on the syntactic elements of the sentence.
In this paper, we investigate a multi-grained label refinement network, which utilizes dependency structures and label semantic embeddings.
To enhance syntactic representations, we introduce the dependency structures of sentences into our model via a graph attention layer.
arXiv Detail & Related papers (2022-09-09T07:27:38Z) - Keywords and Instances: A Hierarchical Contrastive Learning Framework
Unifying Hybrid Granularities for Text Generation [59.01297461453444]
We propose a hierarchical contrastive learning mechanism that can unify semantic meaning at hybrid granularities in the input text.
Experiments demonstrate that our model outperforms competitive baselines on paraphrasing, dialogue generation, and storytelling tasks.
arXiv Detail & Related papers (2022-05-26T13:26:03Z) - Towards a Theoretical Understanding of Word and Relation Representation [8.020742121274418]
Representing words by vectors, or embeddings, enables computational reasoning.
We focus on word embeddings learned from text corpora and knowledge graphs.
arXiv Detail & Related papers (2022-02-01T15:34:58Z) - Task-Specific Dependency-based Word Embedding Methods [32.75244210656976]
Two task-specific dependency-based word embedding methods are proposed for text classification.
The first one, called the dependency-based word embedding (DWE), chooses keywords and neighbor words of a target word in the dependency parse tree as contexts to build the word-context matrix.
The second method, named class-enhanced dependency-based word embedding (CEDWE), learns from word-context as well as word-class co-occurrence statistics. A toy sketch of this dependency-based context construction appears after this list.
arXiv Detail & Related papers (2021-10-26T03:09:41Z) - ERICA: Improving Entity and Relation Understanding for Pre-trained
Language Models via Contrastive Learning [97.10875695679499]
We propose ERICA, a novel contrastive learning framework used in the pre-training phase to obtain a deeper understanding of entities and their relations in text (a generic sketch of such a contrastive objective appears after this list).
Experimental results demonstrate that our proposed ERICA framework achieves consistent improvements on several document-level language understanding tasks.
arXiv Detail & Related papers (2020-12-30T03:35:22Z) - Model Choices Influence Attributive Word Associations: A Semi-supervised
Analysis of Static Word Embeddings [0.0]
This work aims to assess attributive word associations across five different static word embedding architectures.
Our results reveal that the choice of context-learning flavor during embedding training (CBOW vs. skip-gram) impacts word-association distinguishability and the embeddings' sensitivity to deviations in the training corpora.
arXiv Detail & Related papers (2020-12-14T22:27:18Z) - Cross-lingual Word Sense Disambiguation using mBERT Embeddings with
Syntactic Dependencies [0.0]
Cross-lingual word sense disambiguation (WSD) tackles the challenge of disambiguating ambiguous words across languages given context.
The BERT embedding model has been shown to effectively capture the contextual information of words.
This project investigates how syntactic information can be added into the BERT embeddings to result in both semantics- and syntax-incorporated word embeddings.
arXiv Detail & Related papers (2020-12-09T20:22:11Z) - Consensus-Aware Visual-Semantic Embedding for Image-Text Matching [69.34076386926984]
Image-text matching plays a central role in bridging vision and language.
Most existing approaches only rely on the image-text instance pair to learn their representations.
We propose a Consensus-aware Visual-Semantic Embedding model to incorporate the consensus information.
arXiv Detail & Related papers (2020-07-17T10:22:57Z) - Multiplex Word Embeddings for Selectional Preference Acquisition [70.33531759861111]
We propose a multiplex word embedding model, which can be easily extended according to various relations among words.
Our model can effectively distinguish words with respect to different relations without introducing unnecessary sparseness. A sketch of one possible multiplex layout appears after this list.
arXiv Detail & Related papers (2020-01-09T04:47:14Z)
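As referenced in the entry for "Task-Specific Dependency-based Word Embedding Methods" above, here is a toy sketch of the dependency-based context construction: the contexts of a target word are its neighbors in the dependency parse tree rather than a linear window. The hand-written parse is an illustrative assumption; in practice a dependency parser (e.g., spaCy or Stanza) would supply the edges, and the resulting counts would typically be PPMI-weighted and factorized to obtain embeddings.

```python
from collections import Counter, defaultdict

# Toy sentence with hand-written dependency edges (head_index, dependent_index);
# a real pipeline would obtain these from a dependency parser.
sentence = ["the", "cat", "chased", "a", "small", "mouse"]
edges = [(1, 0), (2, 1), (2, 5), (5, 3), (5, 4)]  # e.g., chased -> cat, chased -> mouse

# Word-context counts: each dependency edge contributes a context in both
# directions, so syntactically related words co-occur regardless of their
# linear distance in the sentence.
counts = defaultdict(Counter)
for head, dep in edges:
    counts[sentence[head]][sentence[dep]] += 1
    counts[sentence[dep]][sentence[head]] += 1

print(dict(counts["chased"]))  # {'cat': 1, 'mouse': 1}
```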
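For the ERICA entry above, the sketch below shows a generic entity-level contrastive (InfoNCE-style) objective in the same spirit: mention representations of the same entity are pulled together while other entities in the batch act as negatives. This is a minimal illustration under assumed names and hyperparameters, not ERICA's exact pre-training loss.

```python
import torch
import torch.nn.functional as F

def entity_contrastive_loss(anchors: torch.Tensor,
                            positives: torch.Tensor,
                            temperature: float = 0.07) -> torch.Tensor:
    """anchors, positives: (batch, dim) mention representations, where row i
    of `positives` mentions the same entity as row i of `anchors`."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature   # pairwise cosine similarities
    targets = torch.arange(a.size(0))  # the matching row is the positive
    return F.cross_entropy(logits, targets)

# Random stand-ins for encoder outputs of 16 mention pairs.
loss = entity_contrastive_loss(torch.randn(16, 768), torch.randn(16, 768))
```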
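Finally, for the multiplex word embeddings entry, one plausible layout is sketched below: a shared per-word "center" vector plus a small offset vector per relation, which keeps relation-specific parameters compact and lets new relations be added cheaply. This is a hedged reading of the general idea, not the paper's exact parameterization; all dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

class MultiplexEmbedding(nn.Module):
    """Illustrative multiplex layout: one shared center vector per word plus a
    small per-(word, relation) offset (assumed structure, not the paper's)."""

    def __init__(self, vocab_size: int, n_relations: int,
                 center_dim: int = 100, rel_dim: int = 20):
        super().__init__()
        self.center = nn.Embedding(vocab_size, center_dim)          # shared per word
        self.rel = nn.Embedding(vocab_size * n_relations, rel_dim)  # per (word, relation)
        self.n_relations = n_relations

    def forward(self, word_ids: torch.Tensor, rel_ids: torch.Tensor) -> torch.Tensor:
        # Representation under a relation = shared center vector concatenated
        # with the compact relation-specific offset.
        offsets = self.rel(word_ids * self.n_relations + rel_ids)
        return torch.cat([self.center(word_ids), offsets], dim=-1)

emb = MultiplexEmbedding(vocab_size=1000, n_relations=5)
vec = emb(torch.tensor([42]), torch.tensor([3]))  # word 42 under relation 3
```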