Knowledge Graph Enhanced Event Extraction in Financial Documents
- URL: http://arxiv.org/abs/2109.02592v1
- Date: Mon, 6 Sep 2021 16:35:15 GMT
- Title: Knowledge Graph Enhanced Event Extraction in Financial Documents
- Authors: Kaihao Guo, Tianpei Jiang, Haipeng Zhang
- Abstract summary: We propose the first event extraction framework that embeds a knowledge graph through a Graph Neural Network.
For extracting events from Chinese financial announcements, our method outperforms the state-of-the-art method by 5.3% in F1-score.
- Score: 0.12891210250935145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event extraction is a classic task in natural language processing, widely used to handle the large and rapidly growing volume of financial, legal, medical, and government documents. These documents often contain multiple events whose elements are scattered and mixed across the document, which makes the problem considerably more difficult. Although the underlying relations between the event elements to be extracted provide helpful contextual information, they have largely been overlooked in prior studies. We showcase the enhancement to this task brought by a knowledge graph that captures entity relations and their attributes. We propose the first event extraction framework that embeds a knowledge graph through a Graph Neural Network and integrates the embedding with regular features, all at document level. For extracting events from Chinese financial announcements, our method outperforms the state-of-the-art method by 5.3% in F1-score.
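A minimal sketch of the idea described in the abstract, assuming a PyTorch implementation: knowledge-graph node features are passed through a simple graph neural network layer, pooled into a document-level graph embedding, and concatenated with regular document-level text features before classification. The layer sizes, mean-pooling, single GCN layer, and classifier head are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: average neighbour features, then project."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (num_nodes, num_nodes) adjacency matrix with self-loops added
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        neighbour_mean = (adj @ node_feats) / deg
        return torch.relu(self.linear(neighbour_mean))


class KGEnhancedEventExtractor(nn.Module):
    """Fuses document-level text features with a pooled knowledge-graph embedding."""

    def __init__(self, text_dim: int, kg_in_dim: int, kg_out_dim: int, num_labels: int):
        super().__init__()
        self.gcn = SimpleGCNLayer(kg_in_dim, kg_out_dim)
        self.classifier = nn.Linear(text_dim + kg_out_dim, num_labels)

    def forward(self, text_feats, kg_node_feats, kg_adj):
        node_emb = self.gcn(kg_node_feats, kg_adj)        # (num_nodes, kg_out_dim)
        kg_emb = node_emb.mean(dim=0, keepdim=True)       # document-level KG summary
        kg_emb = kg_emb.expand(text_feats.size(0), -1)    # repeat for each candidate span
        fused = torch.cat([text_feats, kg_emb], dim=-1)   # integrate KG and text features
        return self.classifier(fused)                     # event-role logits per candidate


if __name__ == "__main__":
    model = KGEnhancedEventExtractor(text_dim=768, kg_in_dim=64, kg_out_dim=128, num_labels=10)
    spans = torch.randn(5, 768)                  # e.g. 5 candidate argument spans
    nodes = torch.randn(20, 64)                  # 20 knowledge-graph entities
    adj = (torch.rand(20, 20) > 0.8).float() + torch.eye(20)  # random toy graph with self-loops
    print(model(spans, nodes, adj).shape)        # torch.Size([5, 10])
```

The key step is the fusion: the pooled graph embedding supplies entity-relation context that the document's text features alone do not carry.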
Related papers
- Comprehensive Event Representations using Event Knowledge Graphs and Natural Language Processing [0.0]
This work seeks to utilise and build on the growing body of work that uses findings from the field of natural language processing (NLP) to extract knowledge from text and build knowledge graphs.
Specifically, sub-event extraction is used as a way of creating sub-event-aware event representations.
These event representations are enriched through fine-grained location extraction and contextualised through the alignment of historically relevant quotes.
arXiv Detail & Related papers (2023-03-08T18:43:39Z)
- TRIE++: Towards End-to-End Information Extraction from Visually Rich Documents [51.744527199305445]
This paper proposes a unified end-to-end information extraction framework from visually rich documents.
Text reading and information extraction can reinforce each other via a well-designed multi-modal context block.
The framework can be trained end-to-end, achieving global optimization.
arXiv Detail & Related papers (2022-07-14T08:52:07Z)
- CLIP-Event: Connecting Text and Images with Event Structures [123.31452120399827]
We propose a contrastive learning framework to enforce vision-language pretraining models to comprehend events and their argument roles (a generic contrastive-alignment sketch is given after this list).
We take advantage of text information extraction technologies to obtain event structural knowledge.
Experiments show that our zero-shot CLIP-Event outperforms the state-of-the-art supervised model in argument extraction.
arXiv Detail & Related papers (2022-01-13T17:03:57Z)
- Timestamping Documents and Beliefs [1.4467794332678539]
Document dating is a challenging problem which requires inference over the temporal structure of the document.
In this paper we propose NeuralDater, a Graph Convolutional Network (GCN) based document dating approach.
We also propose AD3: Attentive Deep Document Dater, an attention-based document dating system.
arXiv Detail & Related papers (2021-06-09T02:12:18Z)
- Big Green at WNUT 2020 Shared Task-1: Relation Extraction as Contextualized Sequence Classification [2.1574781022415364]
We introduce a system which uses contextualized knowledge graph completion to classify relations and events between known entities in a noisy text environment.
We report results which show that our system is able to effectively extract relations and events from a dataset of wet lab protocols.
arXiv Detail & Related papers (2020-12-07T06:38:53Z)
- CoLAKE: Contextualized Language and Knowledge Embedding [81.90416952762803]
We propose the Contextualized Language and Knowledge Embedding (CoLAKE).
CoLAKE jointly learns contextualized representations for both language and knowledge with an extended objective.
We conduct experiments on knowledge-driven tasks, knowledge probing tasks, and language understanding tasks.
arXiv Detail & Related papers (2020-10-01T11:39:32Z)
- Extracting Summary Knowledge Graphs from Long Documents [48.92130466606231]
We introduce a new text-to-graph task of predicting summarized knowledge graphs from long documents.
We develop a dataset of 200k document/graph pairs using automatic and human annotations.
arXiv Detail & Related papers (2020-09-19T04:37:33Z)
- ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge can exceed what is needed, since the output description may only cover the most significant knowledge.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
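As referenced in the CLIP-Event entry above, contrastive learning is used to align textual event structures with images. Below is a minimal, generic sketch of a symmetric InfoNCE-style alignment loss over matched text/image embedding pairs; the temperature value, embedding dimensions, and batch construction are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F


def contrastive_alignment_loss(text_emb: torch.Tensor,
                               image_emb: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of matched (text, image) pairs."""
    text_emb = F.normalize(text_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    logits = text_emb @ image_emb.t() / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(text_emb.size(0))          # the i-th text matches the i-th image
    loss_t2i = F.cross_entropy(logits, targets)       # text -> image direction
    loss_i2t = F.cross_entropy(logits.t(), targets)   # image -> text direction
    return 0.5 * (loss_t2i + loss_i2t)


if __name__ == "__main__":
    text = torch.randn(8, 512)    # stand-ins for encoded event descriptions
    image = torch.randn(8, 512)   # stand-ins for encoded images
    print(contrastive_alignment_loss(text, image).item())
```

The symmetric form treats text-to-image and image-to-text matching as two classification problems over the batch, which is the standard CLIP-style formulation.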
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.