Leveraging Knowledge Graph Embeddings to Enhance Contextual
Representations for Relation Extraction
- URL: http://arxiv.org/abs/2306.04203v1
- Date: Wed, 7 Jun 2023 07:15:20 GMT
- Title: Leveraging Knowledge Graph Embeddings to Enhance Contextual
Representations for Relation Extraction
- Authors: Fréjus A. A. Laleye, Loïc Rakotoson, Sylvain Massip
- Abstract summary: We propose a relation extraction approach based on the incorporation of pretrained knowledge graph embeddings at the corpus scale into the sentence-level contextual representation.
Our experiments show promising results for the proposed approach.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Relation extraction is a crucial and challenging task in Natural
Language Processing. Several recent methods perform well on this task;
however, most of them rely on vast amounts of data from large-scale knowledge
graphs or on language models pretrained on voluminous corpora. In this paper,
we focus on effectively using only the knowledge supplied by a corpus to
build a high-performing model. Our objective is to show that, by leveraging
the hierarchical structure and relational distribution of entities within a
corpus without introducing external knowledge, a relation extraction model
can achieve significantly better performance. We therefore propose a relation
extraction approach that incorporates pretrained knowledge graph embeddings,
learned at the corpus scale, into the sentence-level contextual
representation. Our experiments yielded promising results: the proposed
method outperforms context-based relation extraction models.
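To make the described idea concrete, here is a minimal, hypothetical sketch of fusing corpus-scale pretrained knowledge graph entity embeddings with a sentence-level contextual representation for relation classification. The fusion operator (concatenation), the encoder choice (bert-base-uncased), the dimensions, and all names are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch only: combine KG entity embeddings with a contextual sentence encoder
# for relation classification. Hyperparameters and fusion are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class KGEnhancedRelationClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased",
                 num_entities=10_000, kg_dim=200, num_relations=42):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Pretrained corpus-scale KG embeddings (e.g., TransE-style vectors)
        # would be loaded here; randomly initialised as a placeholder.
        self.kg_embeddings = nn.Embedding(num_entities, kg_dim)
        # Classifier over the [CLS] representation concatenated with the
        # head and tail entity KG vectors.
        self.classifier = nn.Linear(hidden + 2 * kg_dim, num_relations)

    def forward(self, input_ids, attention_mask, head_ids, tail_ids):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]            # sentence-level context
        head = self.kg_embeddings(head_ids)          # corpus-scale KG vector
        tail = self.kg_embeddings(tail_ids)
        fused = torch.cat([cls, head, tail], dim=-1) # simple concatenation
        return self.classifier(fused)

# Usage sketch with hypothetical entity indices.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = KGEnhancedRelationClassifier()
batch = tokenizer(["Marie Curie was born in Warsaw."],
                  return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"],
               head_ids=torch.tensor([0]), tail_ids=torch.tensor([1]))
```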
Related papers
- Multimodal Relation Extraction with Cross-Modal Retrieval and Synthesis [89.04041100520881]
This research proposes to retrieve textual and visual evidence based on the object, sentence, and whole image.
We develop a novel approach to synthesize the object-level, image-level, and sentence-level information for better reasoning between the same and different modalities.
arXiv Detail & Related papers (2023-05-25T15:26:13Z)
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weakly supervised data as a prompt for each sample.
arXiv Detail & Related papers (2022-10-19T16:40:28Z)
- REKnow: Enhanced Knowledge for Joint Entity and Relation Extraction [30.829001748700637]
Relation extraction is a challenging task that aims to extract all hidden relational facts from the text.
There is no unified framework that works well under various relation extraction settings.
We propose a knowledge-enhanced generative model to mitigate these two issues.
Our model achieves superior performance on multiple benchmarks and settings, including WebNLG, NYT10, and TACRED.
arXiv Detail & Related papers (2022-06-10T13:59:38Z)
- Modeling Multi-Granularity Hierarchical Features for Relation Extraction [26.852869800344813]
We propose a novel method to extract multi-granularity features based solely on the original input sentences.
We show that effective structured features can be attained even without external knowledge.
arXiv Detail & Related papers (2022-04-09T09:44:05Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- Improving Sentence-Level Relation Extraction through Curriculum Learning [7.117139527865022]
We propose a curriculum learning-based relation extraction model that splits the data by difficulty and uses it for learning (a minimal sketch of this idea appears after this list).
In experiments on the representative sentence-level relation extraction datasets TACRED and Re-TACRED, the proposed method showed good performance.
arXiv Detail & Related papers (2021-07-20T08:44:40Z)
- Improving BERT Model Using Contrastive Learning for Biomedical Relation Extraction [13.354066085659198]
Contrastive learning is not widely utilized in natural language processing due to the lack of a general method of data augmentation for text data.
In this work, we explore the method of employing contrastive learning to improve the text representation from the BERT model for relation extraction.
The experimental results on three relation extraction benchmark datasets demonstrate that our method can improve the BERT model representation and achieve state-of-the-art performance.
arXiv Detail & Related papers (2021-04-28T17:50:24Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
- A Dependency Syntactic Knowledge Augmented Interactive Architecture for End-to-End Aspect-based Sentiment Analysis [73.74885246830611]
We propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA.
This model is capable of fully exploiting the syntactic knowledge (dependency relations and types) by leveraging a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn).
Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-04T14:59:32Z)
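As referenced in the curriculum learning entry above, here is a minimal sketch of the general idea of splitting training data by difficulty and training on progressively harder buckets. The difficulty proxy (sentence length), the bucket scheme, and all names are illustrative assumptions, not the metric or procedure used in the cited paper.

```python
# Sketch only: curriculum learning for relation extraction by bucketing
# examples with a difficulty proxy and training on progressively harder data.
from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (sentence, relation label)

def build_curriculum(examples: List[Example],
                     difficulty: Callable[[Example], float],
                     num_buckets: int = 3) -> List[List[Example]]:
    """Split examples into buckets of increasing difficulty."""
    ordered = sorted(examples, key=difficulty)
    size = max(1, len(ordered) // num_buckets)
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

def train_with_curriculum(examples: List[Example]) -> None:
    # Assumed proxy: longer sentences are treated as harder.
    buckets = build_curriculum(examples,
                               difficulty=lambda ex: len(ex[0].split()))
    seen: List[Example] = []
    for stage, bucket in enumerate(buckets, start=1):
        seen.extend(bucket)  # cumulative: easy examples stay in the mix
        print(f"stage {stage}: training on {len(seen)} examples")
        # model.fit(seen) would run one curriculum stage here

train_with_curriculum([
    ("Curie worked in Paris.", "worked_in"),
    ("The prize, awarded in 1903, recognised her research on radioactivity.",
     "awarded"),
])
```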