Modeling Multi-Granularity Hierarchical Features for Relation Extraction
- URL: http://arxiv.org/abs/2204.04437v1
- Date: Sat, 9 Apr 2022 09:44:05 GMT
- Title: Modeling Multi-Granularity Hierarchical Features for Relation Extraction
- Authors: Xinnian Liang, Shuangzhi Wu, Mu Li, Zhoujun Li
- Abstract summary: We propose a novel method to extract multi-granularity features based solely on the original input sentences.
We show that effective structured features can be attained even without external knowledge.
- Score: 26.852869800344813
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Relation extraction (RE) is a key task in Natural Language Processing (NLP) that aims to extract relations between entity pairs from given texts. Recently, RE has achieved remarkable progress with the development of deep neural networks. Most existing research focuses on constructing explicit structured features using external knowledge such as knowledge graphs and dependency trees. In this paper, we propose a novel method to extract multi-granularity features based solely on the original input sentences. We show that effective structured features can be attained even without external knowledge. Three kinds of features based on the input sentences are fully exploited, at the entity mention level, segment level, and sentence level, and all three are modeled jointly and hierarchically. We evaluate our method on three public benchmarks: SemEval 2010 Task 8, TACRED, and TACRED Revisited. To verify its effectiveness, we apply our method to different encoders such as LSTM and BERT. Experimental results show that our method significantly outperforms existing state-of-the-art models, even those that use external knowledge. Extensive analyses demonstrate that the performance gains come from capturing multi-granularity features and modeling their hierarchical structure. Code and data are available at https://github.com/xnliang98/sms.
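To make the idea in the abstract concrete, here is a minimal PyTorch sketch (not the authors' released code from the repository above) of how entity-mention-level, segment-level, and sentence-level features might be pooled from an encoder's token states and combined for relation classification. The class and parameter names are illustrative assumptions, and plain concatenation stands in for the paper's hierarchical modeling.

    # Illustrative sketch only: three feature granularities pooled from encoder
    # token states and combined for relation classification. Names are assumptions,
    # not identifiers from the paper's repository.
    import torch
    import torch.nn as nn

    class HierarchicalREHead(nn.Module):
        def __init__(self, hidden: int, num_relations: int, num_segments: int = 4):
            super().__init__()
            self.num_segments = num_segments
            self.mention_proj = nn.Linear(2 * hidden, hidden)   # head + tail mention features
            self.segment_proj = nn.Linear(hidden, hidden)       # per-segment pooled features
            self.sentence_proj = nn.Linear(hidden, hidden)      # whole-sentence feature
            self.classifier = nn.Linear(3 * hidden, num_relations)

        def forward(self, token_states, head_mask, tail_mask):
            # token_states: (batch, seq_len, hidden) from any encoder (LSTM, BERT, ...).
            # head_mask / tail_mask: (batch, seq_len), 1.0 on entity mention tokens.
            def masked_mean(states, mask):
                mask = mask.unsqueeze(-1)
                return (states * mask).sum(1) / mask.sum(1).clamp(min=1e-6)

            # 1) Entity mention level: pool tokens inside each entity span.
            head_vec = masked_mean(token_states, head_mask)
            tail_vec = masked_mean(token_states, tail_mask)
            mention_feat = torch.tanh(self.mention_proj(torch.cat([head_vec, tail_vec], dim=-1)))

            # 2) Segment level: split the sentence into contiguous chunks, pool each,
            #    then max-pool over segments to keep the most informative chunk.
            chunks = torch.chunk(token_states, self.num_segments, dim=1)
            seg_vecs = torch.stack([c.mean(dim=1) for c in chunks], dim=1)
            segment_feat = torch.tanh(self.segment_proj(seg_vecs)).max(dim=1).values

            # 3) Sentence level: pool over the whole sequence.
            sentence_feat = torch.tanh(self.sentence_proj(token_states.mean(dim=1)))

            # Simple combination (concatenation) of the three granularities;
            # the paper models them hierarchically, which is not reproduced here.
            combined = torch.cat([mention_feat, segment_feat, sentence_feat], dim=-1)
            return self.classifier(combined)

    # Usage with random tensors standing in for encoder output:
    batch, seq_len, hidden = 2, 32, 768
    head = HierarchicalREHead(hidden, num_relations=42)
    states = torch.randn(batch, seq_len, hidden)
    head_mask = torch.zeros(batch, seq_len); head_mask[:, 3:5] = 1.0
    tail_mask = torch.zeros(batch, seq_len); tail_mask[:, 10:12] = 1.0
    logits = head(states, head_mask, tail_mask)   # (batch, num_relations)

In the paper's setting, token_states would come from an LSTM or BERT encoder over the input sentence, matching the encoders evaluated in the experiments.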
Related papers
- Deep Sparse Latent Feature Models for Knowledge Graph Completion [24.342670268545085]
In this paper, we introduce a novel framework of sparse latent feature models for knowledge graphs.
Our approach not only effectively completes missing triples but also provides clear interpretability of the latent structures.
Our method significantly improves performance by revealing latent communities and producing interpretable representations.
arXiv Detail & Related papers (2024-11-24T03:17:37Z)
- Learning to Extract Structured Entities Using Language Models [52.281701191329]
Recent advances in machine learning have significantly impacted the field of information extraction.
We reformulate the task to be entity-centric, enabling the use of diverse metrics.
We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP metric.
arXiv Detail & Related papers (2024-02-06T22:15:09Z)
- Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty [52.72790059506241]
The Open Information Extraction (OIE) task aims at extracting structured facts from unstructured text.
Despite the potential of large language models (LLMs) like ChatGPT as a general task solver, they lag behind state-of-the-art (supervised) methods in OIE tasks.
arXiv Detail & Related papers (2023-09-07T01:35:24Z)
- Leveraging Knowledge Graph Embeddings to Enhance Contextual Representations for Relation Extraction [0.0]
We propose a relation extraction approach based on the incorporation of pretrained knowledge graph embeddings at the corpus scale into the sentence-level contextual representation.
We conducted a series of experiments which revealed promising results for our proposed approach.
arXiv Detail & Related papers (2023-06-07T07:15:20Z)
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weak-supervised data as a prompt for each sample.
arXiv Detail & Related papers (2022-10-19T16:40:28Z)
- REKnow: Enhanced Knowledge for Joint Entity and Relation Extraction [30.829001748700637]
Relation extraction is a challenging task that aims to extract all hidden relational facts from the text.
However, there is no unified framework that works well under various relation extraction settings.
We propose a knowledge-enhanced generative model to mitigate these two issues.
Our model achieves superior performance on multiple benchmarks and settings, including WebNLG, NYT10, and TACRED.
arXiv Detail & Related papers (2022-06-10T13:59:38Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- Probing Linguistic Features of Sentence-Level Representations in Neural Relation Extraction [80.38130122127882]
We introduce 14 probing tasks targeting linguistic properties relevant to neural relation extraction (RE).
We use them to study representations learned by more than 40 different encoder architecture and linguistic feature combinations trained on two datasets.
We find that the bias induced by the architecture and the inclusion of linguistic features are clearly expressed in the probing task performance (a generic probing-classifier sketch appears after this list).
arXiv Detail & Related papers (2020-04-17T09:17:40Z)
- A Dependency Syntactic Knowledge Augmented Interactive Architecture for End-to-End Aspect-based Sentiment Analysis [73.74885246830611]
We propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA.
This model is capable of fully exploiting the syntactic knowledge (dependency relations and types) by leveraging a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn).
Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-04T14:59:32Z)
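Below is the generic probing-classifier sketch referenced in the probing entry above: a frozen encoder's sentence-level representations are fed to a small linear classifier trained to predict a single linguistic property. This only illustrates the general probing methodology; the module name, the toy labels, and the random stand-in representations are assumptions, not the probing suite released with that paper.

    # Generic probing sketch: train a linear classifier on frozen sentence
    # representations to predict one linguistic property. Illustrative only.
    import torch
    import torch.nn as nn

    class LinearProbe(nn.Module):
        def __init__(self, hidden: int, num_labels: int):
            super().__init__()
            self.clf = nn.Linear(hidden, num_labels)

        def forward(self, reps):
            return self.clf(reps)

    # Toy data: random tensors stand in for frozen encoder outputs and
    # labels of one probing task (e.g., an entity-distance bucket).
    hidden, num_labels, n = 768, 5, 256
    reps = torch.randn(n, hidden)            # frozen, no gradients flow into the encoder
    labels = torch.randint(0, num_labels, (n,))

    probe = LinearProbe(hidden, num_labels)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(10):
        opt.zero_grad()
        loss = loss_fn(probe(reps), labels)
        loss.backward()
        opt.step()
    # Held-out probing accuracy then indicates how strongly the frozen
    # representations encode the probed property.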
This list is automatically generated from the titles and abstracts of the papers on this site.