Canonicalizing Open Knowledge Bases with Multi-Layered Meta-Graph Neural
Network
- URL: http://arxiv.org/abs/2006.09610v1
- Date: Wed, 17 Jun 2020 02:32:36 GMT
- Title: Canonicalizing Open Knowledge Bases with Multi-Layered Meta-Graph Neural
Network
- Authors: Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, Meng
Jiang
- Abstract summary: Noun phrases and relational phrases in Open Knowledge Bases are often not canonical, leading to redundant and ambiguous facts.
In this work, we integrate structural information (from which tuple, which sentence) and semantic information (semantic similarity) to do the canonicalization.
We propose a graph neural network model to aggregate representations of noun phrases and relational phrases through the multi-layered meta-graph structure.
- Score: 43.48148444558244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Noun phrases and relational phrases in Open Knowledge Bases are often not
canonical, leading to redundant and ambiguous facts. In this work, we integrate
structural information (from which tuple, which sentence) and semantic
information (semantic similarity) to do the canonicalization. We represent the
two types of information as a multi-layered graph: the structural information
forms the links across the sentence, relational phrase, and noun phrase layers;
the semantic information forms weighted intra-layer links for each layer. We
propose a graph neural network model to aggregate the representations of noun
phrases and relational phrases through the multi-layered meta-graph structure.
Experiments show that our model outperforms existing approaches on a public
dataset in the general domain.
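Below is a minimal, self-contained sketch (Python/NumPy, with random vectors standing in for real phrase embeddings) of the multi-layered graph idea: structural links connect noun phrases and relational phrases to their source sentences and tuple partners, semantic links weight same-layer pairs by similarity, and one round of neighborhood averaging plays the role of a GNN layer. It is an illustration of the structure described in the abstract, not the authors' implementation.

```python
# Illustrative sketch of a three-layer (sentence / relational phrase / noun phrase)
# meta-graph with structural cross-layer links and semantic intra-layer links,
# followed by one round of simple message passing. Embeddings are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Toy open KB: (subject NP, relational phrase, object NP, source sentence id)
tuples = [
    ("Barack Obama", "was born in", "Honolulu", 0),
    ("Obama",        "is born in",  "Honolulu, Hawaii", 1),
    ("Obama",        "served as",   "US president", 2),
]

nps = sorted({t[0] for t in tuples} | {t[2] for t in tuples})
rps = sorted({t[1] for t in tuples})
sents = sorted({t[3] for t in tuples})

# Node index space: [sentences | relational phrases | noun phrases]
nodes = [("S", s) for s in sents] + [("R", r) for r in rps] + [("N", p) for p in nps]
idx = {node: i for i, node in enumerate(nodes)}
n = len(nodes)

dim = 16
X = rng.normal(size=(n, dim))          # stand-in node embeddings

A = np.zeros((n, n))

# Structural (cross-layer) links: phrase -> source sentence, subject/object NP <-> RP.
for subj, rel, obj, sid in tuples:
    s, r = idx[("S", sid)], idx[("R", rel)]
    a, b = idx[("N", subj)], idx[("N", obj)]
    for u, v in [(r, s), (a, s), (b, s), (a, r), (b, r)]:
        A[u, v] = A[v, u] = 1.0

# Semantic (intra-layer) links: similarity between embeddings within the same layer.
def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

for layer in ("S", "R", "N"):
    members = [idx[k] for k in idx if k[0] == layer]
    for i in members:
        for j in members:
            if i < j:
                A[i, j] = A[j, i] = max(A[i, j], cos(X[i], X[j]))

# One round of normalized message passing (a single mean-style GNN layer).
deg = A.sum(axis=1, keepdims=True) + 1e-8
H = np.tanh((A / deg) @ X)

# Canonicalization sketch: group NPs whose aggregated representations are similar.
np_ids = [idx[k] for k in idx if k[0] == "N"]
for i in np_ids:
    for j in np_ids:
        if i < j and cos(H[i], H[j]) > 0.9:
            print("candidate merge:", nodes[i][1], "<->", nodes[j][1])
```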
Related papers
- Conversational Semantic Parsing using Dynamic Context Graphs [68.72121830563906]
We consider the task of conversational semantic parsing over general purpose knowledge graphs (KGs) with millions of entities, and thousands of relation-types.
We focus on models which are capable of interactively mapping user utterances into executable logical forms.
arXiv Detail & Related papers (2023-05-04T16:04:41Z) - Variational Cross-Graph Reasoning and Adaptive Structured Semantics
Learning for Compositional Temporal Grounding [143.5927158318524]
Temporal grounding is the task of locating a specific segment from an untrimmed video according to a query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We argue that the inherent structured semantics inside the videos and language is the crucial factor to achieve compositional generalization.
arXiv Detail & Related papers (2023-01-22T08:02:23Z) - Syntactic Multi-view Learning for Open Information Extraction [26.1066324477346]
Open Information Extraction (OpenIE) aims to extract relational tuples from open-domain sentences.
In this paper, we model both constituency and dependency trees into word-level graphs.
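As a loose illustration of the word-level graph idea, the toy sketch below merges hand-written dependency arcs and constituency spans into one adjacency matrix; the parses and indices are made up for the example and are not from the paper's pipeline.

```python
# Merge a dependency view and a constituency view of one sentence into a single
# word-level graph (adjacency matrix over words). Parses are hand-written toys.
words = ["Obama", "was", "born", "in", "Honolulu"]

# Dependency arcs as (head_index, dependent_index); indices refer to `words`.
dep_arcs = [(2, 0), (2, 1), (2, 3), (3, 4)]

# Constituency spans as (start, end) word index ranges; words in a span are linked.
const_spans = [(0, 0), (1, 4), (3, 4)]

n = len(words)
adj = [[0] * n for _ in range(n)]

for h, d in dep_arcs:                      # dependency view: head <-> dependent edges
    adj[h][d] = adj[d][h] = 1

for start, end in const_spans:             # constituency view: link words sharing a span
    for i in range(start, end + 1):
        for j in range(i + 1, end + 1):
            adj[i][j] = adj[j][i] = 1

for i in range(n):
    neighbors = [words[j] for j in range(n) if adj[i][j]]
    print(f"{words[i]:>9} -> {neighbors}")
```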
arXiv Detail & Related papers (2022-12-05T07:15:41Z) - Sparse Structure Learning via Graph Neural Networks for Inductive
Document Classification [2.064612766965483]
We propose a novel GNN-based sparse structure learning model for inductive document classification.
Our model collects a set of trainable edges connecting disjoint words between sentences and employs structure learning to sparsely select edges with dynamic contextual dependencies.
Experiments on several real-world datasets demonstrate that the proposed model outperforms most state-of-the-art results.
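One generic way to realize such sparse edge selection is to score candidate word pairs with a trainable function and keep only the top-k edges per node; the PyTorch sketch below shows that pattern with illustrative dimensions, not the paper's exact model.

```python
# Hedged sketch of sparse structure learning: trainable pairwise edge scores,
# sparsified by keeping the top-k candidate edges per node.
import torch
import torch.nn as nn

class SparseEdgeSelector(nn.Module):
    def __init__(self, dim, k=2):
        super().__init__()
        self.scorer = nn.Bilinear(dim, dim, 1)   # trainable score for a word pair
        self.k = k

    def forward(self, x):
        n = x.size(0)
        # Score every ordered pair of nodes (candidate edges).
        left = x.unsqueeze(1).expand(n, n, -1).reshape(n * n, -1)
        right = x.unsqueeze(0).expand(n, n, -1).reshape(n * n, -1)
        scores = self.scorer(left, right).view(n, n)
        # Sparsify: keep only the k highest-scoring edges per node.
        topk = scores.topk(self.k, dim=-1)
        mask = torch.zeros_like(scores).scatter_(1, topk.indices, 1.0)
        return torch.sigmoid(scores) * mask      # sparse weighted adjacency

words = torch.randn(6, 32)                       # stand-in word embeddings
adj = SparseEdgeSelector(dim=32, k=2)(words)
print(adj.shape, (adj > 0).sum().item(), "edges kept")
```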
arXiv Detail & Related papers (2021-12-13T02:36:04Z) - SHGNN: Structure-Aware Heterogeneous Graph Neural Network [77.78459918119536]
This paper proposes a novel Structure-Aware Heterogeneous Graph Neural Network (SHGNN) to better exploit structural information in heterogeneous graphs.
We first utilize a feature propagation module to capture the local structure information of intermediate nodes in the meta-path.
Next, we use a tree-attention aggregator to incorporate the graph structure information into the aggregation module on the meta-path.
Finally, we leverage a meta-path aggregator to fuse the information aggregated from different meta-paths.
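The sketch below illustrates only the last of these steps in a generic form: per-meta-path summaries (mean pooling stands in for the tree-attention aggregator) are fused with learned attention weights. It is a simplification, not SHGNN itself.

```python
# Generic meta-path fusion: aggregate within each meta-path, then attend over
# the per-meta-path summaries to produce one node representation.
import torch
import torch.nn as nn

class MetaPathFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.att = nn.Linear(dim, 1)             # scores each meta-path summary

    def forward(self, per_path_reprs):           # (num_meta_paths, dim) for one node
        weights = torch.softmax(self.att(per_path_reprs), dim=0)   # (P, 1)
        return (weights * per_path_reprs).sum(dim=0)               # fused (dim,)

dim = 16
# Toy input: neighbors reached via two meta-paths (e.g. NP-sentence-NP, NP-RP-NP).
path_a_neighbors = torch.randn(4, dim)
path_b_neighbors = torch.randn(3, dim)

# Step 1: aggregate within each meta-path (mean pooling stands in for tree attention).
per_path = torch.stack([path_a_neighbors.mean(0), path_b_neighbors.mean(0)])

# Step 2: fuse across meta-paths with learned attention weights.
fused = MetaPathFusion(dim)(per_path)
print(fused.shape)
```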
arXiv Detail & Related papers (2021-12-12T14:18:18Z) - Linguistic Inspired Graph Analysis [0.0]
Isomorphisms allow human cognition to transcribe a potentially unsolvable problem from one domain to a different domain.
Current approaches only focus on transcribing structural information from the source structure to the target structure.
It is found that further work needs to be done to understand how graphs can be enriched to allow for isomorphisms to capture semantic and pragmatic information.
arXiv Detail & Related papers (2021-05-13T12:16:30Z) - GraphFormers: GNN-nested Transformers for Representation Learning on
Textual Graph [53.70520466556453]
We propose GraphFormers, where layerwise GNN components are nested alongside the transformer blocks of language models.
With the proposed architecture, the text encoding and the graph aggregation are fused into an iterative workflow.
In addition, a progressive learning strategy is introduced, where the model is successively trained on manipulated data and original data to reinforce its capability of integrating information on the graph.
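A rough sketch of the nesting pattern, with illustrative shapes and a toy fully connected graph: a transformer layer encodes each node's tokens, a simple linear graph step mixes the node summaries, and the result is folded back into the token sequence before the next layer. The details (summary token choice, mixing function) are assumptions, not GraphFormers' exact design.

```python
# Toy GNN-nested transformer layer: text encoding per node, then graph aggregation
# over node summary tokens, fused back into each node's sequence.
import torch
import torch.nn as nn

class GraphNestedLayer(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.text_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.graph_mix = nn.Linear(dim, dim)

    def forward(self, tokens, adj):
        # tokens: (num_nodes, seq_len, dim); adj: (num_nodes, num_nodes), row-normalized
        h = self.text_layer(tokens)                      # text encoding per node
        summary = h[:, 0, :]                             # first token as node summary
        neighbor_info = torch.tanh(self.graph_mix(adj @ summary))   # graph aggregation
        # Fuse the aggregated neighbor summary back into each node's first token.
        h = torch.cat([(h[:, 0, :] + neighbor_info).unsqueeze(1), h[:, 1:, :]], dim=1)
        return h

nodes, seq_len, dim = 3, 8, 32
tokens = torch.randn(nodes, seq_len, dim)
adj = torch.full((nodes, nodes), 1.0 / nodes)            # toy fully connected graph
out = GraphNestedLayer(dim)(tokens, adj)
print(out.shape)                                         # (3, 8, 32)
```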
arXiv Detail & Related papers (2021-05-06T12:20:41Z) - Learning the Implicit Semantic Representation on Graph-Structured Data [57.670106959061634]
Existing representation learning methods in graph convolutional networks are mainly designed by describing the neighborhood of each node as a perceptual whole.
We propose Semantic Graph Convolutional Networks (SGCN) that explore the implicit semantics by learning latent semantic-paths in graphs.
arXiv Detail & Related papers (2021-01-16T16:18:43Z) - Joint Semantic Analysis with Document-Level Cross-Task Coherence Rewards [13.753240692520098]
We present a neural network architecture for joint coreference resolution and semantic role labeling for English.
We use reinforcement learning to encourage global coherence over the document and between semantic annotations.
This leads to improvements on both tasks in multiple datasets from different domains.
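In a generic REINFORCE-style formulation, sampled predictions from both tasks receive a shared document-level reward; the placeholder reward function and model heads below are illustrative only, not the paper's architecture.

```python
# Generic policy-gradient sketch: sample a coreference decision and an SRL decision,
# score their mutual coherence, and scale the log-probabilities by that reward.
import torch

def coherence_reward(coref_choice, srl_choice):
    # Placeholder reward: +1 when the two predictions pick the same candidate span.
    return torch.tensor(1.0 if coref_choice == srl_choice else 0.0)

coref_logits = torch.randn(5, requires_grad=True)   # scores over candidate antecedents
srl_logits = torch.randn(5, requires_grad=True)     # scores over candidate argument spans

coref_dist = torch.distributions.Categorical(logits=coref_logits)
srl_dist = torch.distributions.Categorical(logits=srl_logits)

coref_sample = coref_dist.sample()
srl_sample = srl_dist.sample()

reward = coherence_reward(coref_sample.item(), srl_sample.item())
# Policy-gradient loss: raise the log-probability of sampled decisions, scaled by reward.
loss = -(coref_dist.log_prob(coref_sample) + srl_dist.log_prob(srl_sample)) * reward
loss.backward()
print(float(loss))
```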
arXiv Detail & Related papers (2020-10-12T09:36:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.