D-REX: Dialogue Relation Extraction with Explanations
- URL: http://arxiv.org/abs/2109.05126v1
- Date: Fri, 10 Sep 2021 22:30:48 GMT
- Title: D-REX: Dialogue Relation Extraction with Explanations
- Authors: Alon Albalak, Varun Embar, Yi-Lin Tuan, Lise Getoor, William Yang Wang
- Abstract summary: This work focuses on extracting explanations that indicate that a relation exists while using only partially labeled data.
We propose our model-agnostic framework, D-REX, a policy-guided semi-supervised algorithm that explains and ranks relations.
We find that about 90% of the time, human annotators prefer D-REX's explanations over a strong BERT-based joint relation extraction and explanation model.
- Score: 65.3862263565638
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing research studies on cross-sentence relation extraction in long-form
multi-party conversations aim to improve relation extraction without
considering the explainability of such methods. This work addresses that gap by
focusing on extracting explanations that indicate that a relation exists while
using only partially labeled data. We propose our model-agnostic framework,
D-REX, a policy-guided semi-supervised algorithm that explains and ranks
relations. We frame relation extraction as a re-ranking task and include
relation- and entity-specific explanations as an intermediate step of the
inference process. We find that about 90% of the time, human annotators prefer
D-REX's explanations over a strong BERT-based joint relation extraction and
explanation model. Finally, our evaluations on a dialogue relation extraction
dataset show that our method is simple yet effective and achieves a
state-of-the-art F1 score on relation extraction, improving upon existing
methods by 13.5%.
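The abstract frames inference as a re-ranking task with relation- and entity-specific explanations produced as an intermediate step. The sketch below only illustrates that interface-level flow; the `initial_ranker`, `explainer`, and `reranker` callables and the top-k cutoff are hypothetical stand-ins, not the paper's actual models or training procedure.

```python
# Minimal sketch of a D-REX-style inference flow as described in the abstract:
# rank candidate relations, extract an explanation for each top candidate,
# then re-rank conditioned on the explanations. All components are assumptions.
from typing import Callable, List, Tuple


def drex_style_inference(
    dialogue: str,
    entity_pair: Tuple[str, str],
    candidate_relations: List[str],
    initial_ranker: Callable[[str, Tuple[str, str], str], float],
    explainer: Callable[[str, Tuple[str, str], str], str],
    reranker: Callable[[str, Tuple[str, str], str, str], float],
    top_k: int = 5,
) -> List[Tuple[str, str, float]]:
    """Rank relations, explain the top candidates, then re-rank using the explanations."""
    # Step 1: initial ranking of candidate relations for the entity pair.
    ranked = sorted(
        candidate_relations,
        key=lambda rel: initial_ranker(dialogue, entity_pair, rel),
        reverse=True,
    )[:top_k]

    # Step 2: extract a relation- and entity-specific explanation span
    # from the dialogue for each top-ranked relation.
    explained = [(rel, explainer(dialogue, entity_pair, rel)) for rel in ranked]

    # Step 3: re-score each relation conditioned on its explanation and sort again.
    rescored = [
        (rel, expl, reranker(dialogue, entity_pair, rel, expl))
        for rel, expl in explained
    ]
    return sorted(rescored, key=lambda item: item[2], reverse=True)
```

In this framing the explanation exists before the final relation decision, so the re-ranking score is explicitly conditioned on the extracted span, in contrast to the joint relation extraction and explanation baseline mentioned in the abstract.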
Related papers
- Zero-Shot Dialogue Relation Extraction by Relating Explainable Triggers and Relation Names [28.441725610692714]
This paper proposes a method for leveraging the ability to capture triggers and relate them to previously unseen relation names.
Our experiments on a benchmark DialogRE dataset demonstrate that the proposed model achieves significant improvements for both seen and unseen relations.
arXiv Detail & Related papers (2023-06-09T07:10:01Z)
- HIORE: Leveraging High-order Interactions for Unified Entity Relation Extraction [85.80317530027212]
We propose HIORE, a new method for unified entity relation extraction.
The key insight is to leverage the complex association among word pairs, which contains richer information than the first-order word-by-word interactions.
Experiments show that HIORE achieves the state-of-the-art performance on relation extraction and an improvement of 1.1-1.8 F1 points over the prior best unified model.
arXiv Detail & Related papers (2023-05-07T14:57:42Z)
- PCRED: Zero-shot Relation Triplet Extraction with Potential Candidate Relation Selection and Entity Boundary Detection [11.274924966891842]
Zero-shot relation triplet extraction (ZeroRTE) aims to extract relation triplets from unstructured texts.
The previous state-of-the-art method handles this challenging task by leveraging pretrained language models to generate data as additional training samples.
We tackle this task from a new perspective and propose a novel method named PCRED for ZeroRTE with Potential Candidate Relation selection and Entity boundary Detection.
arXiv Detail & Related papers (2022-11-26T04:27:31Z)
- Towards Relation Extraction From Speech [56.36416922396724]
We propose a new listening information extraction task, i.e., speech relation extraction.
We construct the training dataset for speech relation extraction via text-to-speech systems, and we construct the testing dataset via crowd-sourcing with native English speakers.
We conduct comprehensive experiments to distinguish the challenges in speech relation extraction, which may shed light on future explorations.
arXiv Detail & Related papers (2022-10-17T05:53:49Z)
- RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction [65.4337085607711]
We introduce the task setting of Zero-Shot Relation Triplet Extraction (ZeroRTE).
Given an input sentence, each extracted triplet consists of the head entity, relation label, and tail entity where the relation label is not seen at the training stage.
We propose to synthesize relation examples by prompting language models to generate structured texts.
arXiv Detail & Related papers (2022-03-17T05:55:14Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- TREND: Trigger-Enhanced Relation-Extraction Network for Dialogues [37.883583724569554]
This paper proposes TREND, a multi-tasking BERT-based model which learns to identify triggers for improving relation extraction.
The experimental results show that the proposed method achieves the state-of-the-art on the benchmark datasets.
arXiv Detail & Related papers (2021-08-31T13:04:08Z)
- Eider: Evidence-enhanced Document-level Relation Extraction [56.71004595444816]
Document-level relation extraction (DocRE) aims at extracting semantic relations among entity pairs in a document.
We propose a three-stage evidence-enhanced DocRE framework consisting of joint relation and evidence extraction, evidence-centered relation extraction (RE), and fusion of extraction results.
arXiv Detail & Related papers (2021-06-16T09:43:16Z)
- ZS-BERT: Towards Zero-Shot Relation Extraction with Attribute Representation Learning [10.609715843964263]
We formulate the zero-shot relation extraction problem by incorporating the text description of seen and unseen relations.
We propose a novel multi-task learning model, zero-shot BERT, to directly predict unseen relations without hand-crafted labeling and multiple pairwise attribute classifications.
Experiments conducted on two well-known datasets exhibit that ZS-BERT can outperform existing methods by at least 13.54% improvement on F1 score.
arXiv Detail & Related papers (2021-04-10T06:53:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.