FGSI: Distant Supervision for Relation Extraction method based on
Fine-Grained Semantic Information
- URL: http://arxiv.org/abs/2302.02078v2
- Date: Sat, 18 Mar 2023 08:29:52 GMT
- Title: FGSI: Distant Supervision for Relation Extraction method based on
Fine-Grained Semantic Information
- Authors: Chenghong Sun, Weidong Ji, Guohui Zhou, Hui Guo, Zengxiang Yin and
Yuqi Yue
- Abstract summary: We propose the hypothesis that the key semantic information within a sentence plays a key role in entity relation extraction.
Based on this hypothesis, the sentence is split into three segments around the two entities and an intra-sentence attention mechanism extracts fine-grained semantic features.
The proposed relation extraction model can make full use of the available positive semantic information.
- Score: 2.6587175537360137
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The main purpose of relation extraction is to extract the semantic
relationships between tagged pairs of entities in a sentence, which plays an
important role in the semantic understanding of sentences and the construction
of knowledge graphs. In this paper, we propose the hypothesis that the key
semantic information within a sentence plays a key role in entity relation
extraction. Based on this hypothesis, we split the sentence into three segments
according to the positions of the two entities, and use an intra-sentence
attention mechanism to find fine-grained semantic features within the sentence
and reduce the interference of irrelevant noise information. The proposed
relation extraction model can make full use of the available positive semantic
information. The experimental results show that the proposed relation
extraction model improves the precision-recall curves and P@N values compared
with existing methods, which demonstrates the effectiveness of the model.
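The split-and-attend idea can be illustrated with a short sketch. The simple left/middle/right split around the two entity spans and the toy additive attention below are assumptions made for illustration, not the authors' FGSI implementation.

```python
# Minimal sketch of segment-level intra-sentence attention (an illustration, not the FGSI code).
import torch
import torch.nn as nn

class SegmentAttention(nn.Module):
    """Additive attention pooling over one sentence segment."""
    def __init__(self, hidden: int):
        super().__init__()
        self.score = nn.Linear(hidden, 1)

    def forward(self, segment: torch.Tensor) -> torch.Tensor:
        # segment: (seg_len, hidden) -> attention-weighted sum of shape (hidden,)
        weights = torch.softmax(self.score(segment).squeeze(-1), dim=0)
        return weights @ segment

def split_by_entities(encodings: torch.Tensor, e1_end: int, e2_start: int):
    """Split token encodings into left / middle / right segments around the two entities."""
    return encodings[:e1_end + 1], encodings[e1_end + 1:e2_start], encodings[e2_start:]

hidden = 8
encodings = torch.randn(10, hidden)            # stand-in for a sentence encoder's outputs
segments = split_by_entities(encodings, e1_end=2, e2_start=6)
attend = SegmentAttention(hidden)
sentence_feature = torch.cat([
    attend(seg) if seg.numel() else torch.zeros(hidden) for seg in segments
])                                              # (3 * hidden,) fine-grained sentence feature
print(sentence_feature.shape)                   # torch.Size([24])
```

In a full model the concatenated segment features would feed a relation classifier; the point of the sketch is only the segment split plus per-segment attention pooling.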
Related papers
- A Bi-consolidating Model for Joint Relational Triple Extraction [3.972061685570092]
Current methods extract relational triples by making predictions directly on possible entity pairs in a raw sentence, without relying on entity recognition.
The task suffers from a serious semantic overlapping problem, in which several relation triples may share one or two entities in a sentence.
A bi-consolidating model is proposed to address this problem by simultaneously reinforcing the local and global semantic features relevant to a relation triple.
arXiv Detail & Related papers (2024-04-05T04:04:23Z) - Relation Extraction Model Based on Semantic Enhancement Mechanism [19.700119359495663]
The CasAug model proposed in this paper is based on the CasRel framework combined with a semantic enhancement mechanism.
The experimental results show that, compared with the baseline model, the CasAug model proposed in this paper has improved the effect of relation extraction.
arXiv Detail & Related papers (2023-11-05T04:40:39Z) - Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm to further explore the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z) - Towards Relation Extraction From Speech [56.36416922396724]
We propose a new listening information extraction task, i.e., speech relation extraction.
We construct the training dataset for speech relation extraction via text-to-speech systems, and we construct the testing dataset via crowd-sourcing with native English speakers.
We conduct comprehensive experiments to identify the challenges of speech relation extraction, which may shed light on future explorations.
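As a rough illustration of the training-set construction described above, the sketch below pairs relation-annotated sentences with synthesized audio. The gTTS library and the toy example are assumptions for illustration; the paper does not state here which text-to-speech system it uses.

```python
# Sketch only: pair relation-annotated sentences with synthesized speech.
from gtts import gTTS

# (sentence, head entity, tail entity, relation) -- toy example, not from the paper's data
examples = [
    ("Marie Curie was born in Warsaw .", "Marie Curie", "Warsaw", "per:city_of_birth"),
]

for i, (sentence, head, tail, relation) in enumerate(examples):
    gTTS(text=sentence, lang="en").save(f"train_{i}.mp3")       # spoken input
    with open(f"train_{i}.tsv", "w", encoding="utf-8") as f:    # aligned textual annotation
        f.write(f"{head}\t{tail}\t{relation}\n")
```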
arXiv Detail & Related papers (2022-10-17T05:53:49Z) - HiURE: Hierarchical Exemplar Contrastive Learning for Unsupervised
Relation Extraction [60.80849503639896]
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.
We propose a novel contrastive learning framework named HiURE, which has the capability to derive hierarchical signals from relational feature space using cross hierarchy attention.
Experimental results on two public datasets demonstrate the advanced effectiveness and robustness of HiURE on unsupervised relation extraction when compared with state-of-the-art models.
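To give a concrete feel for the contrastive objective such frameworks build on, here is a plain NT-Xent loss over relational feature vectors; HiURE's hierarchical exemplars and cross hierarchy attention are not reproduced, and the temperature value is an assumption.

```python
# A generic NT-Xent contrastive loss (illustration only; not HiURE's hierarchical variant).
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) -- two views of the same relation instances."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float("-inf"))                # an instance is never its own positive
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)             # pull the two views together, push others apart

loss = nt_xent(torch.randn(8, 32), torch.randn(8, 32))
print(float(loss))
```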
arXiv Detail & Related papers (2022-05-04T17:56:48Z) - SAIS: Supervising and Augmenting Intermediate Steps for Document-Level
Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
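The general idea of supervising intermediate steps can be sketched as a joint objective: the main relation loss plus auxiliary losses for entity typing and evidence retrieval. The specific loss forms and weights below are assumptions for illustration, not the SAIS formulation.

```python
# Joint loss sketch for supervising intermediate steps (loss forms and weights are assumptions).
import torch
import torch.nn.functional as F

def joint_loss(rel_logits, rel_gold, type_logits, type_gold, evid_logits, evid_gold,
               w_type: float = 0.5, w_evid: float = 0.5) -> torch.Tensor:
    loss_rel = F.cross_entropy(rel_logits, rel_gold)                        # main task: relation labels
    loss_type = F.cross_entropy(type_logits, type_gold)                     # intermediate step: entity typing
    loss_evid = F.binary_cross_entropy_with_logits(evid_logits, evid_gold)  # intermediate step: evidence
    return loss_rel + w_type * loss_type + w_evid * loss_evid

loss = joint_loss(torch.randn(4, 10), torch.randint(0, 10, (4,)),
                  torch.randn(4, 7),  torch.randint(0, 7, (4,)),
                  torch.randn(4, 3),  torch.rand(4, 3).round())
print(float(loss))
```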
arXiv Detail & Related papers (2021-09-24T17:37:35Z) - D-REX: Dialogue Relation Extraction with Explanations [65.3862263565638]
This work focuses on extracting explanations that indicate that a relation exists while using only partially labeled data.
We propose our model-agnostic framework, D-REX, a policy-guided semi-supervised algorithm that explains and ranks relations.
We find that about 90% of the time, human annotators prefer D-REX's explanations over a strong BERT-based joint relation extraction and explanation model.
arXiv Detail & Related papers (2021-09-10T22:30:48Z) - Improving Sentence-Level Relation Extraction through Curriculum Learning [7.117139527865022]
We propose a curriculum learning-based relation extraction model that splits the data by difficulty and uses it for learning.
In experiments on the representative sentence-level relation extraction datasets TACRED and Re-TACRED, the proposed method showed good performance.
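A minimal sketch of the curriculum idea: order training examples from easy to hard and feed them to the trainer in that order. Using sentence length as the difficulty proxy is an assumption; the paper may define difficulty differently.

```python
# Curriculum sketch: order examples from easy to hard (length as difficulty is an assumption).
def curriculum_batches(examples, batch_size=32, difficulty=lambda ex: len(ex["tokens"])):
    ordered = sorted(examples, key=difficulty)              # easiest examples first
    for start in range(0, len(ordered), batch_size):
        yield ordered[start:start + batch_size]

data = [
    {"tokens": "Bob , born in 1970 , later joined Acme Corp".split(), "relation": "per:employee_of"},
    {"tokens": "Alice works for Acme".split(), "relation": "per:employee_of"},
]
for batch in curriculum_batches(data, batch_size=1):
    print([ex["relation"] for ex in batch])                 # train_step(batch) would go here
```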
arXiv Detail & Related papers (2021-07-20T08:44:40Z) - ASPER: Attention-based Approach to Extract Syntactic Patterns denoting
Semantic Relations in Sentential Context [2.175490119265481]
We propose an attention-based supervised deep learning model, ASPER, which extracts syntactic patterns between entities exhibiting a given semantic relation in the sentential context.
We validate the performance of ASPER on three distinct semantic relations -- hyponym-hypernym, cause-effect, and meronym-holonym on six datasets.
For all these semantic relations, ASPER can automatically identify a collection of syntactic patterns reflecting the existence of such a relation between a pair of entities in a sentence.
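As an illustration of the kind of syntactic pattern ASPER targets, the sketch below extracts the dependency path between two entity tokens with spaCy; the attention-based pattern learning itself is not reproduced, and the spaCy model name is an assumption.

```python
# Sketch: dependency path between two entity tokens (ASPER's attention model is not reproduced).
import spacy

nlp = spacy.load("en_core_web_sm")         # assumes the small English model is installed

def dependency_path(doc, i: int, j: int):
    """Dependency labels on the path from token i to token j via their lowest common ancestor."""
    up_i = [doc[i]] + list(doc[i].ancestors)
    up_j = [doc[j]] + list(doc[j].ancestors)
    j_ids = [t.i for t in up_j]
    k = next(n for n, t in enumerate(up_i) if t.i in j_ids)   # lowest common ancestor
    rising = [t.dep_ for t in up_i[:k]]
    falling = [t.dep_ for t in up_j[:j_ids.index(up_i[k].i)]]
    return rising + list(reversed(falling))

doc = nlp("A poodle is a small breed of dog.")
print(dependency_path(doc, 1, 7))          # e.g. ['nsubj', 'attr', 'prep', 'pobj']
```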
arXiv Detail & Related papers (2021-04-04T02:36:19Z) - Unsupervised Extractive Summarization using Pointwise Mutual Information [5.544401446569243]
We propose new metrics of relevance and redundancy using pointwise mutual information (PMI) between sentences.
We show that our method outperforms similarity-based methods on datasets in a range of domains including news, medical journal articles, and personal anecdotes.
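The PMI criterion can be written as relevance(s, c) = log p(s | c) - log p(s). The paper computes p with a pretrained language model; the toy unigram model below is a simplification so the arithmetic is runnable.

```python
# Toy PMI(sentence; context) = log p(s|c) - log p(s); the unigram "conditioning" below is a
# crude stand-in for the language model the paper actually uses.
import math
from collections import Counter

def unigram_logprob(sentence, counts, total, alpha=1.0):
    vocab = len(counts)
    return sum(math.log((counts[w] + alpha) / (total + alpha * vocab)) for w in sentence.split())

def pmi(sentence, context, counts, total):
    ctx_counts = counts + Counter(context.split())     # "conditioning" = add the context's counts
    ctx_total = total + len(context.split())
    return unigram_logprob(sentence, ctx_counts, ctx_total) - unigram_logprob(sentence, counts, total)

doc = ["the trial tested a new drug", "the drug reduced symptoms", "the weather was sunny"]
counts = Counter(w for s in doc for w in s.split())
total = sum(counts.values())
for s in doc[1:]:
    print(f"{s!r}: {pmi(s, doc[0], counts, total):.3f}")   # higher = more relevant to doc[0]
```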
arXiv Detail & Related papers (2021-02-11T21:05:50Z) - SelfORE: Self-supervised Relational Feature Learning for Open Relation
Extraction [60.08464995629325]
Open-domain relation extraction is the task of extracting open-domain relation facts from natural language sentences.
We propose a self-supervised framework named SelfORE, which exploits weak, self-supervised signals.
Experimental results on three datasets show the effectiveness and robustness of SelfORE.
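A minimal sketch of the self-supervised bootstrap such a framework relies on: cluster entity-pair features into pseudo relation labels, then train a classifier on them. The random features and the single clustering pass below are simplifications of SelfORE's adaptive loop.

```python
# Pseudo-label bootstrap sketch (random features and one K-means pass are simplifications).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))      # stand-in for contextual entity-pair encodings

pseudo_labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(features)
classifier = LogisticRegression(max_iter=1000).fit(features, pseudo_labels)
print("agreement with pseudo-labels:", (classifier.predict(features) == pseudo_labels).mean())
```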
arXiv Detail & Related papers (2020-04-06T07:23:17Z)