Syntactic Multi-view Learning for Open Information Extraction
- URL: http://arxiv.org/abs/2212.02068v1
- Date: Mon, 5 Dec 2022 07:15:41 GMT
- Title: Syntactic Multi-view Learning for Open Information Extraction
- Authors: Kuicai Dong, Aixin Sun, Jung-Jae Kim, Xiaoli Li
- Abstract summary: Open Information Extraction (OpenIE) aims to extract relational tuples from open-domain sentences.
In this paper, we model both constituency and dependency trees into word-level graphs.
- Score: 26.1066324477346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open Information Extraction (OpenIE) aims to extract relational tuples from
open-domain sentences. Traditional rule-based or statistical models have been
developed based on syntactic structures of sentences, identified by syntactic
parsers. However, previous neural OpenIE models under-explore the useful
syntactic information. In this paper, we model both constituency and dependency
trees into word-level graphs, and enable neural OpenIE to learn from the
syntactic structures. To better fuse heterogeneous information from both
graphs, we adopt multi-view learning to capture multiple relationships from
them. Finally, the finetuned constituency and dependency representations are
aggregated with sentential semantic representations for tuple generation.
Experiments show that both constituency and dependency information, and the
multi-view learning are effective.
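To make the pipeline described in the abstract more concrete, the snippet below is a minimal, hypothetical sketch: a toy sentence with hand-written dependency arcs and constituency spans is turned into two word-level adjacency matrices, each view is propagated once over toy embeddings, and the views are averaged and concatenated with the original representations. This is not the authors' model (which learns the fusion via multi-view objectives and feeds the result to a tuple generator); every edge, span, and embedding here is an illustrative assumption.

```python
# Minimal sketch (not the paper's code): word-level graphs from two syntactic
# views, plus a naive fusion step. All parses and embeddings are hypothetical.
import numpy as np

sentence = ["Marie", "Curie", "discovered", "polonium"]
n = len(sentence)

# Hypothetical dependency arcs as (head_index, dependent_index) pairs.
dependency_edges = [(2, 0), (2, 1), (2, 3)]      # "discovered" heads the other words

# Hypothetical constituency spans: words inside the same phrase get pairwise edges.
constituency_spans = [(0, 1), (2, 3)]            # NP "Marie Curie", VP "discovered polonium"

def dep_adjacency(edges, n):
    """Symmetric word-level adjacency built from dependency arcs."""
    A = np.eye(n)                                # self-loops keep each word's own signal
    for h, d in edges:
        A[h, d] = A[d, h] = 1.0
    return A

def const_adjacency(spans, n):
    """Word-level adjacency connecting all words that share a span."""
    A = np.eye(n)
    for start, end in spans:
        for i in range(start, end + 1):
            for j in range(start, end + 1):
                A[i, j] = 1.0
    return A

def propagate(A, X):
    """One step of mean-neighbourhood aggregation (GCN-style, no learned weights)."""
    deg = A.sum(axis=1, keepdims=True)
    return (A @ X) / deg

rng = np.random.default_rng(0)
X = rng.normal(size=(n, 8))                      # toy word embeddings

dep_view = propagate(dep_adjacency(dependency_edges, n), X)
const_view = propagate(const_adjacency(constituency_spans, n), X)

# Naive stand-in for multi-view fusion: average the two syntactic views, then
# concatenate with the original (sentential) representation for downstream tagging.
fused = np.concatenate([X, (dep_view + const_view) / 2.0], axis=-1)
print(fused.shape)                               # (4, 16)
```

In the paper, the two syntactic views are fused with learned multi-view objectives and then aggregated with sentential semantic representations before tuple generation; the averaging and concatenation above only stand in for that aggregation step.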
Related papers
- Learning Representations without Compositional Assumptions [79.12273403390311]
We propose a data-driven approach that learns feature set dependencies by representing feature sets as graph nodes and their relationships as learnable edges.
We also introduce LEGATO, a novel hierarchical graph autoencoder that learns a smaller, latent graph to aggregate information from multiple views dynamically.
arXiv Detail & Related papers (2023-05-31T10:36:10Z) - ImPaKT: A Dataset for Open-Schema Knowledge Base Construction [10.073210304061966]
ImPaKT is a dataset for open-schema information extraction consisting of around 2,500 text snippets from the C4 corpus in the shopping domain (product buying guides).
We evaluate the power of this approach by fine-tuning the open source UL2 language model on a subset of the dataset, extracting a set of implication relations from a corpus of product buying guides, and conducting human evaluations of the resulting predictions.
arXiv Detail & Related papers (2022-12-21T05:02:49Z) - Enriching Relation Extraction with OpenIE [70.52564277675056]
Relation extraction (RE) is a sub-discipline of information extraction (IE).
In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE.
Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models.
arXiv Detail & Related papers (2022-12-19T11:26:23Z) - Modeling Multi-Granularity Hierarchical Features for Relation Extraction [26.852869800344813]
We propose a novel method to extract multi-granularity features based solely on the original input sentences.
We show that effective structured features can be attained even without external knowledge.
arXiv Detail & Related papers (2022-04-09T09:44:05Z) - Multi-Scale Feature and Metric Learning for Relation Extraction [15.045539169021092]
We propose a multi-scale feature and metric learning framework for relation extraction.
Specifically, we first develop a multi-scale convolutional neural network to aggregate the non-successive mainstays in the lexical sequence.
We also design a multi-scale graph convolutional network which can increase the receptive field towards specific syntactic roles.
arXiv Detail & Related papers (2021-07-28T15:14:36Z) - Syntactic and Semantic-driven Learning for Open Information Extraction [42.65591370263333]
One of the biggest bottlenecks in building accurate, high coverage neural open IE systems is the need for large labelled corpora.
We propose a syntactic and semantic-driven learning approach, which can learn neural open IE models without any human-labelled data.
arXiv Detail & Related papers (2021-03-05T02:59:40Z) - Learning the Implicit Semantic Representation on Graph-Structured Data [57.670106959061634]
Existing representation learning methods in graph convolutional networks are mainly designed by describing the neighborhood of each node as a perceptual whole.
We propose a Semantic Graph Convolutional Networks (SGCN) that explores the implicit semantics by learning latent semantic-paths in graphs.
arXiv Detail & Related papers (2021-01-16T16:18:43Z) - Multilingual Irony Detection with Dependency Syntax and Neural Models [61.32653485523036]
This work focuses on the contribution of syntactic knowledge, exploiting linguistic resources in which syntax is annotated according to the Universal Dependencies scheme.
The results suggest that fine-grained dependency-based syntactic information is informative for the detection of irony.
arXiv Detail & Related papers (2020-11-11T11:22:05Z) - Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By embedding samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z) - Canonicalizing Open Knowledge Bases with Multi-Layered Meta-Graph Neural Network [43.48148444558244]
Noun phrases and relational phrases in Open Knowledge Bases are often not canonical, leading to redundant and ambiguous facts.
In this work, we integrate structural information (e.g., which fact and sentence a phrase comes from) and semantic information (semantic similarity) to perform the canonicalization.
We propose a graph neural network model to aggregate representations of noun phrases and relational phrases through the multi-layered metagraph structure.
arXiv Detail & Related papers (2020-06-17T02:32:36Z) - Heterogeneous Graph Neural Networks for Extractive Document Summarization [101.17980994606836]
Modeling cross-sentence relations is a crucial step in extractive document summarization.
We present a graph-based neural network for extractive summarization (HeterSumGraph).
We introduce different types of nodes into graph-based neural networks for extractive document summarization.
arXiv Detail & Related papers (2020-04-26T14:38:11Z)