Learning from Context or Names? An Empirical Study on Neural Relation
Extraction
- URL: http://arxiv.org/abs/2010.01923v2
- Date: Tue, 1 Dec 2020 04:10:37 GMT
- Title: Learning from Context or Names? An Empirical Study on Neural Relation
Extraction
- Authors: Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu,
Maosong Sun, Jie Zhou
- Abstract summary: We study the effect of two main information sources in text: textual context and entity mentions (names).
We propose an entity-masked contrastive pre-training framework for relation extraction (RE).
Our framework can improve the effectiveness and robustness of neural models in different RE scenarios.
- Score: 112.06614505580501
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural models have achieved remarkable success on relation extraction (RE)
benchmarks. However, there is no clear understanding of which types of information
drive the decisions of existing RE models, or of how to further improve the
performance of these models. To this end, we empirically study the effect of
two main information sources in text: textual context and entity mentions
(names). We find that (i) while context is the main source to support the
predictions, RE models also heavily rely on the information from entity
mentions, most of which is type information, and (ii) existing datasets may
leak shallow heuristics via entity mentions and thus contribute to the high
performance on RE benchmarks. Based on the analyses, we propose an
entity-masked contrastive pre-training framework for RE to gain a deeper
understanding of both textual context and type information while avoiding rote
memorization of entities or the use of superficial cues in mentions. We carry out
extensive experiments to support our views, and show that our framework can
improve the effectiveness and robustness of neural models in different RE
scenarios. All the code and datasets are released at
https://github.com/thunlp/RE-Context-or-Names.
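The framework described above lends itself to a short illustration: entity mentions are replaced with a placeholder token so the encoder cannot memorize names, and an InfoNCE-style contrastive loss pulls together sentence pairs that express the same relation. The sketch below is a toy under those assumptions; the placeholder token, function names, and the random "embeddings" standing in for encoder output are hypothetical, and the authors' actual implementation is in the repository above.

```python
import torch
import torch.nn.functional as F

def mask_entities(tokens, entity_spans, placeholder="[BLANK]"):
    """Replace each entity mention span with a single placeholder token,
    so the encoder must rely on textual context rather than entity names."""
    masked = list(tokens)
    for start, end in sorted(entity_spans, reverse=True):
        masked[start:end] = [placeholder]
    return masked

def contrastive_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style loss: the anchor should be closer to the positive
    (a sentence expressing the same relation) than to any negative."""
    pos_sim = F.cosine_similarity(anchor, positive, dim=0) / temperature
    neg_sim = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=1) / temperature
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim]).unsqueeze(0)
    target = torch.zeros(1, dtype=torch.long)  # index 0 is the positive
    return F.cross_entropy(logits, target)

# Toy usage: random vectors stand in for encoder outputs.
dim = 128
anchor, positive = torch.randn(dim), torch.randn(dim)
negatives = torch.randn(8, dim)
print(mask_entities(["[CLS]", "Bill", "Gates", "founded", "Microsoft"],
                    [(1, 3), (4, 5)]))
print(contrastive_loss(anchor, positive, negatives))
```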
Related papers
- Enriching Relation Extraction with OpenIE [70.52564277675056]
Relation extraction (RE) is a sub-discipline of information extraction (IE).
In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE.
Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models.
arXiv Detail & Related papers (2022-12-19T11:26:23Z)
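One way to picture the enrichment idea in the entry above is to append OpenIE (subject, relation, object) extractions to the RE input as extra context. The toy sketch below is only an illustration under that assumption; the function name and bracketed format are hypothetical, not the paper's implementation.

```python
def enrich_with_openie(sentence, triples):
    """Append OpenIE extractions to the sentence as auxiliary context
    before feeding the enriched input to an RE model."""
    extras = " ".join(f"[{s} | {r} | {o}]" for s, r, o in triples)
    return f"{sentence} {extras}" if extras else sentence

# Example:
print(enrich_with_openie(
    "Marie Curie won the Nobel Prize in 1903.",
    [("Marie Curie", "won", "the Nobel Prize")]))
```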
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
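The knowledge-sampling idea in the entry above can be pictured as drawing triples from a knowledge graph and verbalizing them into synthetic training text. A hedged sketch follows, with a hypothetical template and a naive uniform sampler; the paper's contribution is precisely the study of which sampling strategies and sizes work best, which this toy does not capture.

```python
import random

def sample_synthetic_data(triples, k, template="{h} {r} {t}.", seed=0):
    """Uniformly sample k knowledge-graph triples and render each as a
    short synthetic sentence for language-model adaptation."""
    rng = random.Random(seed)
    return [template.format(h=h, r=r, t=t) for h, r, t in rng.sample(triples, k)]

# Example:
kg = [("Paris", "is the capital of", "France"),
      ("Oxygen", "is an element of", "the periodic table")]
print(sample_synthetic_data(kg, k=1))
```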
- Should We Rely on Entity Mentions for Relation Extraction? Debiasing Relation Extraction with Counterfactual Analysis [60.83756368501083]
We propose the CORE (Counterfactual Analysis based Relation Extraction) debiasing method for sentence-level relation extraction.
CORE is model-agnostic: it debiases existing RE systems at inference time without changing their training.
arXiv Detail & Related papers (2022-05-08T05:13:54Z)
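The counterfactual analysis behind CORE can be sketched as subtracting what the model would predict from entity mentions alone (context removed) from its prediction on the full input at inference time. The minimal sketch below follows that reading; `model`, both inputs, and the scaling factor `lam` are hypothetical stand-ins, not the authors' code.

```python
import torch

@torch.no_grad()
def debias_logits(model, full_input, entities_only_input, lam=1.0):
    """Counterfactual inference: remove the entity-only component of the
    prediction so the final decision leans on textual context."""
    factual = model(full_input)                  # logits from the full sentence
    counterfactual = model(entities_only_input)  # logits from entity mentions alone
    return factual - lam * counterfactual

# Demo with a stand-in "model" that maps an input tensor to logits.
model = torch.nn.Linear(10, 5)
x_full, x_entities = torch.randn(1, 10), torch.randn(1, 10)
print(debias_logits(model, x_full, x_entities, lam=0.5))
```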
- Automatically Generating Counterfactuals for Relation Extraction [18.740447044960796]
Relation extraction (RE) is a fundamental task in natural language processing.
Current deep neural models have achieved high accuracy but are easily affected by spurious correlations.
We develop a novel approach to derive contextual counterfactuals for entities.
arXiv Detail & Related papers (2022-02-22T04:46:10Z)
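One simple flavor of counterfactual augmentation for entities is to swap a mention for another mention of the same type while keeping the context fixed. The toy sketch below is only meant to convey the general idea; the paper above derives contextual counterfactuals, not this naive substitution, and all names here are hypothetical.

```python
import random

def entity_swap_counterfactual(tokens, span, same_type_mentions, seed=0):
    """Replace the mention at `span` with a different mention of the same
    entity type, leaving the surrounding context untouched."""
    start, end = span
    rng = random.Random(seed)
    replacement = rng.choice(same_type_mentions)
    return tokens[:start] + replacement + tokens[end:]

# Example: swap "Microsoft" for another ORG mention.
tokens = ["Bill", "Gates", "founded", "Microsoft", "."]
print(entity_swap_counterfactual(tokens, (3, 4), [["Apple"], ["IBM"]]))
```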
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework preserves the relations between samples well.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
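The "preserving relations between samples" idea above can be illustrated by a loss that asks pairwise similarities among embeddings to match pairwise similarities among the inputs. This is a speculative toy sketch of that notion, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def relation_preserving_loss(X, Z):
    """Penalize mismatch between the pairwise cosine-similarity structure
    of the inputs X and that of their embeddings Z."""
    Sx = F.normalize(X, dim=1) @ F.normalize(X, dim=1).T
    Sz = F.normalize(Z, dim=1) @ F.normalize(Z, dim=1).T
    return ((Sx - Sz) ** 2).mean()

# Example with random data: X in input space, Z in a learned subspace.
X, Z = torch.randn(16, 64), torch.randn(16, 8)
print(relation_preserving_loss(X, Z))
```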
- Probing Linguistic Features of Sentence-Level Representations in Neural Relation Extraction [80.38130122127882]
We introduce 14 probing tasks targeting linguistic properties relevant to neural relation extraction (RE).
We use them to study representations learned by more than 40 different combinations of encoder architectures and linguistic features, trained on two datasets.
We find that the biases induced by the architecture and by the inclusion of linguistic features are clearly reflected in probing task performance.
arXiv Detail & Related papers (2020-04-17T09:17:40Z)
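A probing task in this sense trains a simple classifier on frozen sentence representations to predict a linguistic property; higher probe accuracy suggests the property is more readily encoded. The generic sketch below assumes precomputed representations `X` and property labels `y` (both hypothetical stand-ins, not data from the paper).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_accuracy(X, y):
    """Fit a linear probe on frozen representations and report held-out
    accuracy as a measure of how decodable the linguistic property is."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)

# Example with random stand-in data (200 sentences, 64-dim representations).
X = np.random.randn(200, 64)
y = np.random.randint(0, 2, size=200)
print(probe_accuracy(X, y))
```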
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.