How Fragile is Relation Extraction under Entity Replacements?
- URL: http://arxiv.org/abs/2305.13551v3
- Date: Tue, 7 May 2024 16:22:07 GMT
- Title: How Fragile is Relation Extraction under Entity Replacements?
- Authors: Yiwei Wang, Bryan Hooi, Fei Wang, Yujun Cai, Yuxuan Liang, Wenxuan Zhou, Jing Tang, Manjuan Duan, Muhao Chen
- Abstract summary: Relation extraction (RE) aims to extract the relations between entity names from the textual context.
Existing work has found that RE models memorize entity name patterns to make predictions while ignoring the textual context.
This motivates us to raise the question: "are RE models robust to entity replacements?"
- Score: 70.34001923252711
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Relation extraction (RE) aims to extract the relations between entity names from the textual context. In principle, the textual context determines the ground-truth relation, and RE models should be able to correctly identify the relations it reflects. However, existing work has found that RE models memorize entity name patterns to make predictions while ignoring the textual context. This motivates us to raise the question: "are RE models robust to entity replacements?" In this work, we perform random and type-constrained entity replacements over the RE instances in TACRED and evaluate state-of-the-art RE models under these replacements. We observe 30%-50% F1 score drops on state-of-the-art RE models under entity replacements. These results suggest that more effort is needed to develop RE models that are robust to entity replacements. We release the source code at https://github.com/wangywUST/RobustRE.
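The replacement operation the abstract describes can be sketched as follows. This is a minimal illustration assuming a TACRED-style instance (a token list with inclusive entity spans and entity types) and a hypothetical `NAME_POOL` of substitute names; it is not the paper's actual implementation (see the released code for that).

```python
import random

# Hypothetical pool of replacement names, keyed by TACRED entity type.
# Type-constrained replacement swaps an entity for another name of the
# same type; random replacement draws from every type's pool.
NAME_POOL = {
    "PERSON": [["Alice", "Smith"], ["Bob", "Jones"]],
    "ORGANIZATION": [["Acme", "Corp"], ["Globex"]],
}

def replace_entity(tokens, start, end, ent_type, type_constrained=True, rng=random):
    """Return a new token list with the entity span [start, end] replaced."""
    if type_constrained:
        candidates = NAME_POOL[ent_type]
    else:
        # Random replacement: ignore the entity type entirely.
        candidates = [name for pool in NAME_POOL.values() for name in pool]
    new_name = rng.choice(candidates)
    # TACRED spans are inclusive, hence end + 1.
    return tokens[:start] + new_name + tokens[end + 1:]

# Example TACRED-style instance: subject span is tokens 0-1, type PERSON.
tokens = ["Steve", "Jobs", "founded", "Apple", "."]
perturbed = replace_entity(tokens, 0, 1, "PERSON")
```

A robustness evaluation then re-runs the trained RE model on the perturbed instances and compares F1 against the original test set; the ground-truth relation is unchanged because the textual context is untouched.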
Related papers
- Grasping the Essentials: Tailoring Large Language Models for Zero-Shot Relation Extraction [33.528688487954454]
Relation extraction (RE) aims to identify semantic relationships between entities within text.
Few-shot learning, aiming to lessen annotation demands, typically provides incomplete and biased supervision for target relations.
We introduce REPaL, comprising three stages: (1) We leverage large language models (LLMs) to generate initial seed instances from relation definitions and an unlabeled corpus.
arXiv Detail & Related papers (2024-02-17T00:20:06Z) - Think Rationally about What You See: Continuous Rationale Extraction for Relation Extraction [86.90265683679469]
Relation extraction aims to extract potential relations according to the context of two entities.
We propose a novel rationale extraction framework named RE2, which leverages two factors: continuity and sparsity.
Experiments on four datasets show that RE2 surpasses baselines.
arXiv Detail & Related papers (2023-05-02T03:52:34Z) - An Overview of Distant Supervision for Relation Extraction with a Focus on Denoising and Pre-training Methods [0.0]
Relation Extraction is a foundational task of natural language processing.
The history of RE methods can be roughly organized into four phases: pattern-based RE, statistical-based RE, neural-based RE, and large language model-based RE.
arXiv Detail & Related papers (2022-07-17T21:02:04Z) - Should We Rely on Entity Mentions for Relation Extraction? Debiasing Relation Extraction with Counterfactual Analysis [60.83756368501083]
We propose the CORE (Counterfactual Analysis based Relation Extraction) debiasing method for sentence-level relation extraction.
Our CORE method is model-agnostic to debias existing RE systems during inference without changing their training processes.
arXiv Detail & Related papers (2022-05-08T05:13:54Z) - Does Recommend-Revise Produce Reliable Annotations? An Analysis on Missing Instances in DocRED [60.39125850987604]
We show that the recommend-revise scheme results in false negative samples and an obvious bias towards popular entities and relations.
The relabeled dataset is released to serve as a more reliable test set of document RE models.
arXiv Detail & Related papers (2022-04-17T11:29:01Z) - Automatically Generating Counterfactuals for Relation Extraction [18.740447044960796]
Relation extraction (RE) is a fundamental task in natural language processing.
Current deep neural models have achieved high accuracy but are easily affected by spurious correlations.
We develop a novel approach to derive contextual counterfactuals for entities.
arXiv Detail & Related papers (2022-02-22T04:46:10Z) - Learning from Context or Names? An Empirical Study on Neural Relation Extraction [112.06614505580501]
We study the effect of two main information sources in text: textual context and entity mentions (names).
We propose an entity-masked contrastive pre-training framework for relation extraction (RE).
Our framework can improve the effectiveness and robustness of neural models in different RE scenarios.
arXiv Detail & Related papers (2020-10-05T11:21:59Z) - Relation of the Relations: A New Paradigm of the Relation Extraction
Problem [52.21210549224131]
We propose a new paradigm of Relation Extraction (RE) that considers as a whole the predictions of all relations in the same context.
We develop a data-driven approach that does not require hand-crafted rules but learns by itself the relation of relations (RoR) using Graph Neural Networks and a relation matrix transformer.
Experiments show that our model outperforms the state-of-the-art approaches by +1.12% on the ACE05 dataset and +2.55% on SemEval 2018 Task 7.2.
arXiv Detail & Related papers (2020-06-05T22:25:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.