EnriCo: Enriched Representation and Globally Constrained Inference for Entity and Relation Extraction
- URL: http://arxiv.org/abs/2404.12493v1
- Date: Thu, 18 Apr 2024 20:15:48 GMT
- Title: EnriCo: Enriched Representation and Globally Constrained Inference for Entity and Relation Extraction
- Authors: Urchade Zaratiana, Nadi Tomeh, Yann Dauxais, Pierre Holat, Thierry Charnois
- Abstract summary: Joint entity and relation extraction plays a pivotal role in various applications, notably in the construction of knowledge graphs.
Existing approaches often fall short in two key aspects: richness of representation and coherence in output structure.
In our work, we introduce EnriCo, which mitigates these shortcomings.
- Score: 3.579132482505273
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Joint entity and relation extraction plays a pivotal role in various applications, notably in the construction of knowledge graphs. Despite recent progress, existing approaches often fall short in two key aspects: richness of representation and coherence in output structure. These models often rely on handcrafted heuristics for computing entity and relation representations, potentially leading to loss of crucial information. Furthermore, they disregard task- and/or dataset-specific constraints, resulting in output structures that lack coherence. In our work, we introduce EnriCo, which mitigates these shortcomings. Firstly, to foster rich and expressive representation, our model leverages attention mechanisms that allow both entities and relations to dynamically determine the pertinent information required for accurate extraction. Secondly, we introduce a series of decoding algorithms designed to infer the highest scoring solutions while adhering to task- and dataset-specific constraints, thus promoting structured and coherent outputs. Our model demonstrates competitive performance compared to baselines when evaluated on Joint IE datasets.
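To make the second component of the abstract more concrete, the sketch below illustrates one possible instance of globally constrained decoding; it is not the paper's actual algorithm. It assumes two common constraints as examples: entity spans may not overlap (solved exactly with weighted interval scheduling), and a relation label must be schema-compatible with its argument entity types. All names (Span, decode_entities, decode_relations, COMPATIBLE) and the toy schema are hypothetical.

```python
# Minimal, illustrative sketch of constrained decoding for joint entity/relation
# extraction. Assumptions (not from the paper): (1) entity spans must not overlap;
# (2) a relation label must be compatible with its argument entity types.
from dataclasses import dataclass
from bisect import bisect_right


@dataclass(frozen=True)
class Span:
    start: int    # inclusive token index
    end: int      # inclusive token index
    label: str
    score: float  # model score for this (span, label) candidate


def decode_entities(candidates: list[Span]) -> list[Span]:
    """Highest-scoring set of non-overlapping spans
    (weighted interval scheduling, exact via dynamic programming)."""
    spans = sorted(candidates, key=lambda s: s.end)
    ends = [s.end for s in spans]
    n = len(spans)
    best = [0.0] * (n + 1)   # best[i]: best total score using the first i spans
    keep = [False] * (n + 1)
    for i, s in enumerate(spans, start=1):
        # number of earlier spans that end strictly before this one starts
        j = bisect_right(ends, s.start - 1, 0, i - 1)
        take = best[j] + s.score
        if take > best[i - 1]:
            best[i], keep[i] = take, True
        else:
            best[i] = best[i - 1]
    chosen, i = [], n        # backtrack the optimal choice
    while i > 0:
        if keep[i]:
            chosen.append(spans[i - 1])
            i = bisect_right(ends, spans[i - 1].start - 1, 0, i - 1)
        else:
            i -= 1
    return list(reversed(chosen))


# Hypothetical dataset-specific schema: admissible (head type, tail type)
# pairs for each relation label.
COMPATIBLE = {
    "Work_For": {("PER", "ORG")},
    "Located_In": {("ORG", "LOC"), ("PER", "LOC")},
}


def decode_relations(entities: list[Span],
                     rel_scores: dict[tuple[Span, Span, str], float],
                     threshold: float = 0.0):
    """Keep relation candidates whose arguments survived entity decoding
    and whose label is schema-compatible with the argument types."""
    kept = set(entities)
    out = []
    for (head, tail, label), score in rel_scores.items():
        if head in kept and tail in kept and score > threshold \
                and (head.label, tail.label) in COMPATIBLE.get(label, set()):
            out.append((head, tail, label, score))
    return out
```

Under these toy assumptions the entity step is exact for the non-overlap constraint; other task or dataset constraints (nested spans, symmetric relations, cardinality limits) would require different search procedures, which is the kind of variation a family of decoding algorithms is meant to cover.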
Related papers
- CARE: Co-Attention Network for Joint Entity and Relation Extraction [0.0]
We propose a Co-Attention network for joint entity and relation extraction.
Our approach includes adopting a parallel encoding strategy to learn separate representations for each subtask.
At the core of our approach is the co-attention module that captures two-way interaction between the two subtasks.
arXiv Detail & Related papers (2023-08-24T03:40:54Z) - Integrating a Heterogeneous Graph with Entity-aware Self-attention using Relative Position Labels for Reading Comprehension Model [14.721615285883429]
We introduce a novel attention pattern that integrates reasoning knowledge derived from a heterogeneous graph into the transformer architecture without relying on external knowledge.
The proposed attention pattern comprises three key elements: global-local attention for word tokens, graph attention for entity tokens that exhibit strong attention towards tokens connected in the graph, and the consideration of the type of relationship between each entity token and word token.
Our model outperforms both the cutting-edge LUKE-Graph and the baseline LUKE model across two distinct datasets.
arXiv Detail & Related papers (2023-07-19T20:17:37Z) - Sparse Relational Reasoning with Object-Centric Representations [78.83747601814669]
We investigate the composability of soft-rules learned by relational neural architectures when operating over object-centric representations.
We find that increasing sparsity, especially on features, improves the performance of some models and leads to simpler relations.
arXiv Detail & Related papers (2022-07-15T14:57:33Z) - SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z) - Element Intervention for Open Relation Extraction [27.408443348900057]
OpenRE aims to cluster relation instances referring to the same underlying relation.
Current OpenRE models are commonly trained on the datasets generated from distant supervision.
In this paper, we revisit the procedure of OpenRE from a causal view.
arXiv Detail & Related papers (2021-06-17T14:37:13Z) - End-to-End Hierarchical Relation Extraction for Generic Form Understanding [0.6299766708197884]
We present a novel deep neural network to jointly perform both entity detection and link prediction.
Our model extends the Multi-stage Attentional U-Net architecture with the Part-Intensity Fields and Part-Association Fields for link prediction.
We demonstrate the effectiveness of the model on the Form Understanding in Noisy Scanned Documents dataset.
arXiv Detail & Related papers (2021-06-02T06:51:35Z) - Cross-Supervised Joint-Event-Extraction with Heterogeneous Information Networks [61.950353376870154]
Joint-event-extraction is a sequence-to-sequence labeling task with a tag set composed of tags of triggers and entities.
We propose a Cross-Supervised Mechanism (CSM) to alternately supervise the extraction of triggers or entities.
Our approach outperforms the state-of-the-art methods in both entity and trigger extraction.
arXiv Detail & Related papers (2020-10-13T11:51:17Z) - Type-augmented Relation Prediction in Knowledge Graphs [65.88395564516115]
We propose a type-augmented relation prediction (TaRP) method, where we apply both the type information and instance-level information for relation prediction.
Our proposed TaRP method achieves significantly better performance than state-of-the-art methods on four benchmark datasets.
arXiv Detail & Related papers (2020-09-16T21:14:18Z) - SEEK: Segmented Embedding of Knowledge Graphs [77.5307592941209]
We propose a lightweight modeling framework that can achieve highly competitive relational expressiveness without increasing the model complexity.
Our framework focuses on the design of scoring functions and highlights two critical characteristics: 1) facilitating sufficient feature interactions; 2) preserving both symmetry and antisymmetry properties of relations.
arXiv Detail & Related papers (2020-05-02T15:15:50Z) - Cascaded Human-Object Interaction Recognition [175.60439054047043]
We introduce a cascade architecture for a multi-stage, coarse-to-fine HOI understanding.
At each stage, an instance localization network progressively refines HOI proposals and feeds them into an interaction recognition network.
With our carefully designed human-centric relation features, these two modules work collaboratively towards effective interaction understanding.
arXiv Detail & Related papers (2020-03-09T17:05:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.