Automatically Generating Counterfactuals for Relation Extraction
- URL: http://arxiv.org/abs/2202.10668v1
- Date: Tue, 22 Feb 2022 04:46:10 GMT
- Title: Automatically Generating Counterfactuals for Relation Extraction
- Authors: Mi Zhang and Tieyun Qian
- Abstract summary: Relation extraction (RE) is a fundamental task in natural language processing.
Current deep neural models have achieved high accuracy but are easily affected by spurious correlations.
We develop a novel approach to derive contextual counterfactuals for entities.
- Score: 18.740447044960796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of relation extraction (RE) is to extract the semantic relations
between/among entities in the text. Because RE is a fundamental task in natural
language processing, it is crucial to ensure the robustness of RE models. Despite the
high accuracy current deep neural models have achieved in RE tasks, they are
easily affected by spurious correlations. One solution to this problem is to
train the model with counterfactually augmented data (CAD) such that it can
learn the causation rather than the confounding. However, no attempt has been
made to generate counterfactuals for RE tasks. In this paper, we formulate
the problem of automatically generating CAD for RE tasks from an entity-centric
viewpoint, and develop a novel approach to derive contextual counterfactuals
for entities. Specifically, we exploit two elementary topological properties,
i.e., the centrality and the shortest path, in syntactic and semantic
dependency graphs, to first identify and then intervene on the contextual
causal features for entities. We conduct a comprehensive evaluation on four RE
datasets by combining our proposed approach with a variety of backbone RE
models. The results demonstrate that our approach not only improves the
performance of the backbone models, but also makes them more robust in
out-of-domain tests.
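To make the abstract's entity-centric idea concrete, the sketch below shows how the two topological properties it names, centrality and the shortest path, could be used to flag contextual tokens as candidate causal features in a dependency graph. It is a minimal illustration, assuming spaCy for parsing and networkx for the graph operations; the function, its heuristics, and the example sentence are illustrative, not the authors' implementation (which additionally intervenes on the identified features to produce counterfactuals).

```python
# Minimal sketch: flag candidate causal context tokens for an entity pair
# using (a) the shortest dependency path between the entities and (b) node
# centrality in the dependency graph. Heuristics and names are illustrative.
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def causal_context_tokens(sentence: str, head: str, tail: str, top_k: int = 3):
    """Return tokens on the shortest dependency path between the two entity
    tokens, plus the top-k most central tokens of the parse graph."""
    doc = nlp(sentence)
    graph = nx.Graph()
    for token in doc:
        for child in token.children:
            graph.add_edge(token.i, child.i)
    # Locate entities by surface form (simplified: real mentions may be multiword).
    head_i = next(t.i for t in doc if t.text == head)
    tail_i = next(t.i for t in doc if t.text == tail)
    on_path = set(nx.shortest_path(graph, head_i, tail_i))
    centrality = nx.degree_centrality(graph)
    most_central = set(sorted(centrality, key=centrality.get, reverse=True)[:top_k])
    # A counterfactual generator would then intervene on these tokens
    # (e.g., replace or perturb them) while keeping the entities fixed.
    return [doc[i].text for i in sorted(on_path | most_central)]

print(causal_context_tokens("Alice founded the company in Seattle.", "Alice", "company"))
```

On the example sentence, the tokens on the Alice-to-company dependency path (here, via "founded") surface as the context a counterfactual edit would target.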
Related papers
- Relation Extraction with Fine-Tuned Large Language Models in Retrieval Augmented Generation Frameworks [0.0]
Relation Extraction (RE) is crucial for converting unstructured data into structured formats like Knowledge Graphs (KGs).
Recent studies leveraging pre-trained language models (PLMs) have shown significant success in this area.
This work explores the performance of fine-tuned LLMs and their integration into a Retrieval Augmented Generation (RAG)-based RE approach.
arXiv Detail & Related papers (2024-06-20T21:27:57Z) - How Fragile is Relation Extraction under Entity Replacements? [70.34001923252711]
Relation extraction (RE) aims to extract the relations between entity names from the textual context.
Existing work has found that RE models rely on entity name patterns to make predictions while ignoring the textual context.
This motivates us to raise the question: "Are RE models robust to entity replacements?" (A toy entity-replacement sketch follows the related-papers list below.)
arXiv Detail & Related papers (2023-05-22T23:53:32Z) - Silver Syntax Pre-training for Cross-Domain Relation Extraction [20.603482820770356]
Relation Extraction (RE) remains a challenging task, especially when considering realistic out-of-domain evaluations.
Obtaining high-quality (manually annotated) data is extremely expensive and cannot realistically be repeated for each new domain.
An intermediate training step on data from related tasks has been shown to be beneficial across many NLP tasks. However, this setup still requires supplementary annotated data, which is often not available.
In this paper, we investigate intermediate pre-training specifically for RE. We exploit the affinity between syntactic structure and semantic RE, identifying the syntactic relations most closely related to RE as those lying on the shortest dependency path between two entities.
arXiv Detail & Related papers (2023-05-18T14:49:19Z) - Continual Contrastive Finetuning Improves Low-Resource Relation Extraction [34.76128090845668]
Relation extraction has been particularly challenging in low-resource scenarios and domains.
Recent literature has tackled low-resource RE with self-supervised learning.
We propose to pretrain and finetune the RE model using consistent contrastive learning objectives.
arXiv Detail & Related papers (2022-12-21T07:30:22Z) - Should We Rely on Entity Mentions for Relation Extraction? Debiasing Relation Extraction with Counterfactual Analysis [60.83756368501083]
We propose the CORE (Counterfactual Analysis based Relation Extraction) debiasing method for sentence-level relation extraction.
Our CORE method is model-agnostic and debiases existing RE systems during inference without changing their training process.
arXiv Detail & Related papers (2022-05-08T05:13:54Z) - SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z) - Learning from Context or Names? An Empirical Study on Neural Relation Extraction [112.06614505580501]
We study the effect of two main information sources in text: textual context and entity mentions (names).
We propose an entity-masked contrastive pre-training framework for relation extraction (RE); a minimal entity-masking sketch follows the related-papers list below.
Our framework can improve the effectiveness and robustness of neural models in different RE scenarios.
arXiv Detail & Related papers (2020-10-05T11:21:59Z) - Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z) - Probing Linguistic Features of Sentence-Level Representations in Neural Relation Extraction [80.38130122127882]
We introduce 14 probing tasks targeting linguistic properties relevant to neural relation extraction (RE).
We use them to study representations learned by more than 40 different combinations of encoder architectures and linguistic features, trained on two datasets.
We find that the biases induced by the architecture and by the inclusion of linguistic features are clearly expressed in the probing task performance.
arXiv Detail & Related papers (2020-04-17T09:17:40Z)
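Picking up the question raised in the "How Fragile is Relation Extraction under Entity Replacements?" entry above, the toy sketch below shows the kind of perturbation such a robustness test relies on: swap each entity name for another name of the same type and check whether the prediction survives. The name pools and the predict_relation callable are hypothetical placeholders, not the paper's protocol.

```python
# Toy entity-replacement perturbation: a model is "robust" on an example if
# its predicted relation is unchanged under type-preserving name swaps.
# `predict_relation` is a hypothetical stand-in for any trained RE model.
import random

SAME_TYPE_NAMES = {
    "PERSON": ["Alice Chen", "Ravi Patel", "Maria Lopez"],
    "ORG": ["Acme Corp", "Globex", "Initech"],
}

def replace_entities(sentence, head, tail, head_type, tail_type):
    """Return a perturbed sentence with both entity names swapped for
    randomly chosen names of the same entity type."""
    new_head = random.choice([n for n in SAME_TYPE_NAMES[head_type] if n != head])
    new_tail = random.choice([n for n in SAME_TYPE_NAMES[tail_type] if n != tail])
    perturbed = sentence.replace(head, new_head).replace(tail, new_tail)
    return perturbed, new_head, new_tail

def is_robust(sentence, head, tail, head_type, tail_type, predict_relation, trials=10):
    """Check prediction stability across several random replacements."""
    original = predict_relation(sentence, head, tail)
    for _ in range(trials):
        perturbed, nh, nt = replace_entities(sentence, head, tail, head_type, tail_type)
        if predict_relation(perturbed, nh, nt) != original:
            return False
    return True
```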
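And for the entity-masked contrastive pre-training named in "Learning from Context or Names?", this is a minimal sketch of the masking step alone: entity mentions are collapsed to type markers so the encoder must learn the relation from the surrounding context. The marker format and span convention are assumptions, and the contrastive objective itself is omitted.

```python
# Minimal entity-masking sketch: collapse each entity mention to a single
# type marker so a pre-training encoder cannot shortcut via entity names.
# Marker format and span convention are assumptions, not the paper's scheme.
def mask_entities(tokens, entity_spans):
    """tokens: list of words; entity_spans: list of (start, end, type)
    with end exclusive. Returns tokens with each entity span replaced
    by a single [MASK-<type>] marker."""
    masked = []
    i = 0
    spans = sorted(entity_spans)
    while i < len(tokens):
        span = next((s for s in spans if s[0] == i), None)
        if span is not None:
            masked.append(f"[MASK-{span[2]}]")
            i = span[1]  # skip the remainder of the entity span
        else:
            masked.append(tokens[i])
            i += 1
    return masked

print(mask_entities(
    ["Alice", "Chen", "founded", "Acme", "Corp", "in", "2001", "."],
    [(0, 2, "PERSON"), (3, 5, "ORG")],
))
# ['[MASK-PERSON]', 'founded', '[MASK-ORG]', 'in', '2001', '.']
```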