UniRE: A Unified Label Space for Entity Relation Extraction
- URL: http://arxiv.org/abs/2107.04292v1
- Date: Fri, 9 Jul 2021 08:09:37 GMT
- Title: UniRE: A Unified Label Space for Entity Relation Extraction
- Authors: Yijun Wang, Changzhi Sun, Yuanbin Wu, Hao Zhou, Lei Li, and Junchi Yan
- Abstract summary: Joint entity relation extraction models typically set up two separate label spaces for the two sub-tasks.
We argue that this setting may hinder the information interaction between entities and relations.
In this work, we propose to eliminate the different treatment of the two sub-tasks' label spaces.
- Score: 67.53850477281058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many joint entity relation extraction models set up two separate label spaces
for the two sub-tasks (i.e., entity detection and relation classification). We
argue that this setting may hinder the information interaction between entities
and relations. In this work, we propose to eliminate the different treatment of
the two sub-tasks' label spaces. The input of our model is a table containing
all word pairs from a sentence. Entities and relations are represented by
squares and rectangles in the table. We apply a unified classifier to predict
each cell's label, which unifies the learning of the two sub-tasks. For testing, an
effective (yet fast) approximate decoder is proposed for finding squares and
rectangles from tables. Experiments on three benchmarks (ACE04, ACE05, SciERC)
show that, using only half the number of parameters, our model achieves
competitive accuracy with the best extractor, and is faster.
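The table formulation above is easy to make concrete. Below is a minimal sketch of how the gold label table could be constructed for one sentence, assuming an illustrative label set (PER, ORG, WORK_FOR) and a hypothetical helper build_label_table; the paper's actual label inventory and decoder are richer than this:

```python
import numpy as np

# Unified label space: entity types and relation types share one set,
# plus a null label for cells that belong to neither. Labels are illustrative.
LABELS = ["NULL", "PER", "ORG", "WORK_FOR"]
LABEL2ID = {l: i for i, l in enumerate(LABELS)}

def build_label_table(n_words, entities, relations):
    """Gold table for one sentence: cell (i, j) carries the label of the
    word pair (w_i, w_j). Entities fill squares on the diagonal;
    relations fill off-diagonal rectangles (head rows x tail columns)."""
    table = np.full((n_words, n_words), LABEL2ID["NULL"], dtype=np.int64)
    for s, e, t in entities:            # entity span (start, end, type)
        table[s:e + 1, s:e + 1] = LABEL2ID[t]
    for (hs, he), (ts, te), t in relations:   # (head span, tail span, type)
        table[hs:he + 1, ts:te + 1] = LABEL2ID[t]
    return table

# "John works for Acme Corp": John=PER, Acme Corp=ORG, WORK_FOR(John, Acme Corp)
gold = build_label_table(
    n_words=5,
    entities=[(0, 0, "PER"), (3, 4, "ORG")],
    relations=[((0, 0), (3, 4), "WORK_FOR")],
)
print(gold)
```

Because entities and relations share one label space, a single per-cell classifier can be trained against this table, which is the unification the abstract describes.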
Related papers
- SEMv3: A Fast and Robust Approach to Table Separation Line Detection [48.75713662571455]
Table structure recognition (TSR) aims to parse the inherent structure of a table from its input image.
The "split-and-merge" paradigm is a pivotal approach to parsing table structure, where table separation line detection is crucial.
We propose SEMv3 (SEM: Split, Embed and Merge), a method that is both fast and robust for detecting table separation lines.
arXiv Detail & Related papers (2024-05-20T08:13:46Z)
- From Charts to Atlas: Merging Latent Spaces into One [15.47502439734611]
Models trained on semantically related datasets and tasks exhibit comparable inter-sample relations within their latent spaces.
We introduce Relative Latent Space Aggregation, a two-step approach that first renders the spaces comparable using relative representations, and then aggregates them via a simple mean.
We compare the aggregated space with that derived from an end-to-end model trained over all tasks and show that the two spaces are similar.
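As a rough illustration of the two steps (relative representations, then a simple mean), here is a hedged numpy sketch; it follows the relative-representations idea of cosine similarity to shared anchor samples, and all names and shapes are illustrative:

```python
import numpy as np

def relative_projection(embeddings, anchors):
    """Represent each sample by its cosine similarities to shared anchors,
    making latent spaces from different models comparable."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return e @ a.T  # (n_samples, n_anchors)

rng = np.random.default_rng(0)
# two models embed the same 100 samples in incompatible latent spaces
space_a, space_b = rng.normal(size=(100, 64)), rng.normal(size=(100, 32))
anchor_ids = rng.choice(100, size=10, replace=False)  # same anchor *samples*

rel_a = relative_projection(space_a, space_a[anchor_ids])
rel_b = relative_projection(space_b, space_b[anchor_ids])
merged = (rel_a + rel_b) / 2  # step 2: aggregate via a simple mean
```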
arXiv Detail & Related papers (2023-11-11T11:51:41Z)
- From Alignment to Entailment: A Unified Textual Entailment Framework for Entity Alignment [17.70562397382911]
Existing methods usually encode the triples of entities as embeddings and learn to align the embeddings.
We transform both triples into unified textual sequences, and model the EA task as a bi-directional textual entailment task.
Our approach captures the unified correlation pattern between the two kinds of entity information, and explicitly models their fine-grained interaction.
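A minimal sketch of the bi-directional entailment idea, using an off-the-shelf NLI model (roberta-large-mnli is an assumption, not the paper's model) and a hypothetical verbalize helper that turns an entity's triples into text:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# roberta-large-mnli predicts [contradiction, neutral, entailment]
name = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(name)
nli = AutoModelForSequenceClassification.from_pretrained(name).eval()

def verbalize(entity, triples):
    # flatten an entity's (relation, object) triples into one text sequence
    return entity + ": " + "; ".join(f"{r} {o}" for r, o in triples)

def entailment_prob(premise, hypothesis):
    batch = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli(**batch).logits
    return logits.softmax(-1)[0, 2].item()  # P(entailment)

e1 = verbalize("Michael Jordan", [("occupation", "basketball player"),
                                  ("team", "Chicago Bulls")])
e2 = verbalize("M. Jordan", [("plays for", "Chicago Bulls")])
# bi-directional entailment: score both directions and average
score = (entailment_prob(e1, e2) + entailment_prob(e2, e1)) / 2
print(f"alignment score: {score:.3f}")
```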
arXiv Detail & Related papers (2023-05-19T08:06:50Z)
- SEMv2: Table Separation Line Detection Based on Instance Segmentation [96.36188168694781]
We propose an accurate table structure recognizer, termed SEMv2 (SEM: Split, Embed and Merge).
We address the table separation line instance-level discrimination problem and introduce a table separation line detection strategy based on conditional convolution.
To comprehensively evaluate the SEMv2, we also present a more challenging dataset for table structure recognition, dubbed iFLYTAB.
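As a hedged illustration of what conditional convolution means here (per-instance dynamically generated filters, in the style of CondInst-like segmentation heads), not SEMv2's actual head:

```python
import torch
import torch.nn.functional as F

def conditional_conv_mask(feature_map, instance_embedding):
    """Conditional convolution sketch: each detected separation-line instance
    gets its own dynamically generated 1x1 conv filter, so masks are
    predicted per instance rather than by one shared static head."""
    c = feature_map.shape[1]
    # controller output reinterpreted as filter weights + bias
    weight = instance_embedding[:c].view(1, c, 1, 1)
    bias = instance_embedding[c:c + 1]
    return F.conv2d(feature_map, weight, bias)  # (1, 1, H, W) mask logits

feats = torch.randn(1, 8, 32, 32)  # shared backbone features (illustrative)
inst = torch.randn(9)              # 8 weights + 1 bias for one instance
mask = conditional_conv_mask(feats, inst).sigmoid()
```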
arXiv Detail & Related papers (2023-03-08T05:15:01Z)
- AWTE-BERT: Attending to Wordpiece Tokenization Explicitly on BERT for Joint Intent Classification and Slot Filling [5.684659127683238]
BERT (Bidirectional Encoder Representations from Transformers) enables joint optimization of the two tasks.
We propose a novel joint model based on BERT, which explicitly models the multiple sub-tokens features after wordpiece tokenization.
Experimental results demonstrate that our proposed model achieves significant improvement on intent classification accuracy, slot filling F1, and sentence-level semantic frame accuracy.
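The paper attends over sub-token features; the sketch below shows the underlying re-alignment problem with simple mean pooling instead, using a fast tokenizer's word_ids() to group wordpieces back into words (model names illustrative):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

words = ["add", "playlist", "reminiscin"]  # last word splits into sub-tokens
enc = tok(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**enc).last_hidden_state[0]  # (n_subtokens, 768)

# pool each word's sub-token vectors so slot labels align with words,
# instead of silently using only the first sub-token per word
word_vecs = []
ids = enc.word_ids()  # maps each sub-token to its source word (None = special)
for w in range(len(words)):
    sub = [i for i, wid in enumerate(ids) if wid == w]
    word_vecs.append(hidden[sub].mean(dim=0))  # mean over sub-tokens
word_vecs = torch.stack(word_vecs)  # (n_words, 768), one vector per word
```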
arXiv Detail & Related papers (2022-11-27T13:49:19Z)
- Text Summarization with Oracle Expectation [88.39032981994535]
Extractive summarization produces summaries by identifying and concatenating the most important sentences in a document.
Most summarization datasets do not come with gold labels indicating whether document sentences are summary-worthy.
We propose a simple yet effective labeling algorithm that creates soft, expectation-based sentence labels.
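The paper's oracle expectation is a specific probabilistic formulation; as a loose stand-in, the sketch below turns per-sentence overlap scores into soft labels rather than hard 0/1 oracle decisions (the unigram F1 scorer is a crude ROUGE-1 substitute):

```python
from collections import Counter

def unigram_f1(sent, reference):
    """Crude ROUGE-1-style F1 between a sentence and the reference summary."""
    s, r = Counter(sent.lower().split()), Counter(reference.lower().split())
    overlap = sum((s & r).values())
    if overlap == 0:
        return 0.0
    p, rec = overlap / sum(s.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def soft_labels(document_sents, reference):
    """Soft labels: each sentence's summary-worthiness is a probability
    proportional to its score, not a hard 0/1 oracle decision."""
    scores = [unigram_f1(s, reference) for s in document_sents]
    total = sum(scores) or 1.0
    return [s / total for s in scores]

doc = ["The model was released today.",
       "It halves parameters while matching accuracy.",
       "The authors thank their funding agencies."]
print(soft_labels(doc, "A new model matches accuracy with half the parameters."))
```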
arXiv Detail & Related papers (2022-09-26T14:10:08Z)
- RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction [65.4337085607711]
We introduce the task setting of Zero-Shot Relation Triplet Extraction (ZeroRTE).
Given an input sentence, each extracted triplet consists of the head entity, relation label, and tail entity where the relation label is not seen at the training stage.
We propose to synthesize relation examples by prompting language models to generate structured texts.
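A minimal sketch of the prompting idea, with GPT-2 standing in for the generator and a simplified structured prefix; RelationPrompt's actual prompt format and decoding differ:

```python
from transformers import pipeline

# Any causal LM can stand in for the generator; GPT-2 keeps the sketch small.
generator = pipeline("text-generation", model="gpt2")

def synthesize(relation, n=3):
    """Prompt the LM with a structured prefix so generations can later be
    parsed back into (context, head, tail, relation) training examples."""
    prompt = f"Relation: {relation}. Context:"
    outs = generator(prompt, max_new_tokens=40, num_return_sequences=n,
                     do_sample=True,
                     pad_token_id=generator.tokenizer.eos_token_id)
    return [o["generated_text"] for o in outs]

for text in synthesize("place of birth"):
    print(text)  # downstream: parse head/tail entity fields if present
```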
arXiv Detail & Related papers (2022-03-17T05:55:14Z)
- A Cascade Dual-Decoder Model for Joint Entity and Relation Extraction [18.66493402386152]
We propose an effective cascade dual-decoder method to extract overlapping relational triples.
Our approach is straightforward: it includes a text-specific relation decoder and a relation-corresponded entity decoder.
We conducted experiments on a real-world open-pit mine dataset and two public datasets to verify the method's generalizability.
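A hedged PyTorch sketch of the cascade (relation decoder first, then an entity decoder conditioned on each detected relation); the module names and the mean-pooled sentence vector are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class CascadeDualDecoder(nn.Module):
    """Detect relations first, then tag the entities of each detected
    relation; overlapping triples are handled naturally because each
    relation gets its own entity-tagging pass."""
    def __init__(self, hidden, n_relations, n_tags):
        super().__init__()
        self.rel_decoder = nn.Linear(hidden, n_relations)   # multi-label
        self.rel_embed = nn.Embedding(n_relations, hidden)
        self.ent_decoder = nn.Linear(2 * hidden, n_tags)    # BIO over tokens

    def forward(self, token_states):            # (seq, hidden), one sentence
        sent = token_states.mean(dim=0)         # crude sentence vector
        rel_logits = self.rel_decoder(sent)     # which relations appear?
        triples = {}
        for r in (rel_logits.sigmoid() > 0.5).nonzero().flatten().tolist():
            cond = self.rel_embed.weight[r].expand_as(token_states)
            tags = self.ent_decoder(torch.cat([token_states, cond], dim=-1))
            triples[r] = tags.argmax(-1)        # entity tags for relation r
        return rel_logits, triples

model = CascadeDualDecoder(hidden=64, n_relations=5, n_tags=5)
rel_logits, triples = model(torch.randn(12, 64))  # 12 encoded tokens
```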
arXiv Detail & Related papers (2021-06-27T07:42:05Z)
- R$^2$-Net: Relation of Relation Learning Network for Sentence Semantic Matching [58.72111690643359]
We propose a Relation of Relation Learning Network (R$^2$-Net) for sentence semantic matching.
We first employ BERT to encode the input sentences from a global perspective.
Then a CNN-based encoder is designed to capture keywords and phrase information from a local perspective.
To fully leverage labels for better relation information extraction, we introduce a self-supervised relation of relation classification task.
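A rough sketch of the global+local encoder described above (BERT's [CLS] for the global view, a 1-D CNN over token states for the local view); the self-supervised relation-of-relation task is omitted, and all hyperparameters are illustrative:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class GlobalLocalEncoder(nn.Module):
    """Global view from BERT's [CLS]; local keyword/phrase view from a
    1-D CNN over the token states."""
    def __init__(self, name="bert-base-uncased", n_filters=128, kernel=3):
        super().__init__()
        self.bert = AutoModel.from_pretrained(name)
        self.conv = nn.Conv1d(self.bert.config.hidden_size, n_filters,
                              kernel_size=kernel, padding=1)

    def forward(self, **batch):
        states = self.bert(**batch).last_hidden_state    # (B, T, H)
        global_vec = states[:, 0]                        # [CLS]
        local = self.conv(states.transpose(1, 2)).relu() # (B, F, T)
        local_vec = local.max(dim=-1).values             # max over time
        return torch.cat([global_vec, local_vec], dim=-1)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = GlobalLocalEncoder()
batch = tok(["the cat sat", "a dog ran"], return_tensors="pt", padding=True)
vecs = enc(**batch)  # (2, 768 + 128); feed a pair of these to a matcher
```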
arXiv Detail & Related papers (2020-12-16T13:11:30Z)
- Semantic Labeling Using a Deep Contextualized Language Model [9.719972529205101]
We propose a context-aware semantic labeling method using both the column values and context.
Our new method is based on a new setting for semantic labeling, where we sequentially predict labels for an input table with missing headers.
To our knowledge, we are the first to successfully apply BERT to solve the semantic labeling task.
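A minimal sketch of that sequential setting: each column's values are serialized together with the headers already predicted for earlier columns, then classified. The label vocabulary is illustrative, and the classification head here is untrained, so it would need fine-tuning before its outputs mean anything:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["city", "population", "country"]  # illustrative header vocabulary
name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
clf = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=len(LABELS))  # head is randomly initialized

def predict_header(column_values, context_headers):
    """Serialize a column's values plus the headers already predicted for
    neighboring columns, then classify; columns are labeled sequentially so
    each prediction becomes context for the next."""
    text = " ".join(str(v) for v in column_values)
    ctx = " ".join(context_headers)
    batch = tok(text, ctx, return_tensors="pt", truncation=True)
    with torch.no_grad():
        pred = clf(**batch).logits.argmax(-1).item()
    return LABELS[pred]

# first column gets no context; its prediction feeds the second column
h1 = predict_header(["Tokyo", "Paris", "Cairo"], [])
h2 = predict_header(["37400000", "2100000"], [h1])
```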
arXiv Detail & Related papers (2020-10-30T03:04:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.