Reasoning over Multi-view Knowledge Graphs
- URL: http://arxiv.org/abs/2209.13702v1
- Date: Tue, 27 Sep 2022 21:32:20 GMT
- Title: Reasoning over Multi-view Knowledge Graphs
- Authors: Zhaohan Xi, Ren Pang, Changjiang Li, Tianyu Du, Shouling Ji, Fenglong Ma, Ting Wang
- Abstract summary: ROMA is a novel framework for answering logical queries over multi-view KGs.
It scales up to KGs of large sizes (e.g., millions of facts) and fine-granular views.
It generalizes to query structures and KG views that are unobserved during training.
- Score: 59.99051368907095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, knowledge representation learning (KRL) has emerged as the
state-of-the-art approach to processing queries over knowledge graphs (KGs),
wherein KG entities and the query are embedded into a latent space such that
entities that answer the query are embedded close to the query. Yet, despite
the intensive research on KRL, most existing studies either focus on homogeneous
KGs or assume KG completion tasks (i.e., inference of missing facts), while
answering complex logical queries over KGs with multiple aspects (multi-view
KGs) remains an open challenge.
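As a rough illustration of this KRL setup, the following sketch (hypothetical names and dimensions; not the paper's implementation) embeds entities in a shared latent space and answers a query by retrieving its nearest-neighbor entities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: 1,000 entities in a 64-dim latent space. In
# practice these vectors would come from a trained KRL model.
entity_emb = rng.normal(size=(1000, 64))

def answer_query(query_emb: np.ndarray, k: int = 5) -> np.ndarray:
    """Return the k entities embedded closest to the query (cosine)."""
    q = query_emb / np.linalg.norm(query_emb)
    e = entity_emb / np.linalg.norm(entity_emb, axis=1, keepdims=True)
    scores = e @ q                  # cosine similarity to the query
    return np.argsort(-scores)[:k]  # top-k nearest entities

# A query embedding would normally be composed from the query's anchors,
# relations, and logical operators; here it is random for illustration.
print(answer_query(rng.normal(size=64)))
```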
To bridge this gap, in this paper, we present ROMA, a novel KRL framework for
answering logical queries over multi-view KGs. Compared with prior work,
ROMA departs in several major aspects: (i) it models a multi-view KG as a set of
overlaying sub-KGs, each corresponding to one view, which subsumes many types
of KGs studied in the literature (e.g., temporal KGs); (ii) it supports complex
logical queries with varying relation and view constraints (e.g., with complex
topology and/or from multiple views); (iii) it scales up to KGs of large sizes
(e.g., millions of facts) and fine-granular views (e.g., dozens of views); (iv)
it generalizes to query structures and KG views that are unobserved during
training. Extensive empirical evaluation on real-world KGs shows that ROMA
significantly outperforms alternative methods.
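To picture point (i), a multi-view KG can be thought of as per-view sets of facts over a shared entity vocabulary, with queries constrained to one or more views. The toy model below is a minimal sketch under that assumption (entities, relations, and view names are invented; this is not ROMA's actual data structure):

```python
# Toy multi-view KG: one sub-KG (set of triples) per view, all views
# sharing the same entity vocabulary. Views here are temporal, echoing
# the abstract's remark that temporal KGs are a special case.
multi_view_kg = {
    "2020": {("alice", "works_at", "acme"), ("acme", "based_in", "paris")},
    "2023": {("alice", "works_at", "globex"), ("globex", "based_in", "nyc")},
}

def answer(head, relation, views):
    """One-hop lookup with a view constraint: the union of matching
    tails across the selected sub-KGs."""
    return {
        t
        for v in views
        for (h, r, t) in multi_view_kg.get(v, ())
        if h == head and r == relation
    }

print(answer("alice", "works_at", views=["2023"]))       # {'globex'}
print(answer("alice", "works_at", views=multi_view_kg))  # {'acme', 'globex'}
```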
Related papers
- Graph-constrained Reasoning: Faithful Reasoning on Knowledge Graphs with Large Language Models [83.28737898989694]
Large language models (LLMs) struggle with faithful reasoning due to knowledge gaps and hallucinations.
We introduce graph-constrained reasoning (GCR), a novel framework that bridges structured knowledge in KGs with unstructured reasoning in LLMs.
GCR achieves state-of-the-art performance and exhibits strong zero-shot generalizability to unseen KGs without additional training.
arXiv Detail & Related papers (2024-10-16T22:55:17Z) - A Prompt-Based Knowledge Graph Foundation Model for Universal In-Context Reasoning [17.676185326247946]
We propose a prompt-based KG foundation model via in-context learning, namely KG-ICL, to achieve a universal reasoning ability.
To encode prompt graphs with the generalization ability to unseen entities and relations in queries, we first propose a unified tokenizer.
Then, we propose two message passing neural networks to perform prompt encoding and KG reasoning, respectively.
arXiv Detail & Related papers (2024-10-16T06:47:18Z) - Context Graph [8.02985792541121]
We present a context graph reasoning (CGR^3) paradigm that leverages large language models (LLMs) to retrieve candidate entities and related contexts.
Our experimental results demonstrate that CGR^3 significantly improves performance on KG completion (KGC) and KG question answering (KGQA) tasks.
arXiv Detail & Related papers (2024-06-17T02:59:19Z) - Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both agent and KG in incomplete KG question answering (IKGQA).
arXiv Detail & Related papers (2024-04-23T04:47:22Z) - Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey [61.8716670402084]
This survey focuses on KG-aware research in two principal aspects: KG-driven Multi-Modal (KG4MM) learning, and Multi-Modal Knowledge Graph (MM4KG).
Our review includes two primary task categories: KG-aware multi-modal learning tasks, and intrinsic MMKG tasks.
For most of these tasks, we provide definitions, evaluation benchmarks, and additionally outline essential insights for conducting relevant research.
arXiv Detail & Related papers (2024-02-08T04:04:36Z) - Knowledge Graphs Querying [4.548471481431569]
We aim to unite the different interdisciplinary topics and concepts that have been developed for KG querying.
Recent advances on KG and query embedding, multimodal KG, and KG-QA come from deep learning, IR, NLP, and computer vision domains.
arXiv Detail & Related papers (2023-05-23T19:32:42Z) - Logical Message Passing Networks with One-hop Inference on Atomic Formulas [57.47174363091452]
We propose a framework for complex query answering that decouples the Knowledge Graph embeddings from the neural set operators.
On top of the query graph, we propose the Logical Message Passing Neural Network (LMPNN) that connects the local one-hop inferences on atomic formulas to the global logical reasoning.
Our approach yields the new state-of-the-art neural CQA model.
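As a toy intuition for how one-hop inferences on atomic formulas can feed a global message-passing computation (all tensors and shapes below are invented; this is not the LMPNN architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 16

# Toy query graph: nodes 0 and 1 are anchor entities, node 2 is the
# answer variable; each edge (source, relation_id, target) is one
# atomic formula of the query.
node_emb = rng.normal(size=(3, dim))
edges = [(0, 0, 2), (1, 1, 2)]
rel_maps = rng.normal(size=(2, dim, dim)) / np.sqrt(dim)  # one map per relation

def message_passing_step(node_emb: np.ndarray) -> np.ndarray:
    """One round: every edge contributes a one-hop message (the source
    embedding pushed through its relation map); each node adds its
    incoming messages to its own state."""
    new = node_emb.copy()
    for src, rel, dst in edges:
        new[dst] += rel_maps[rel] @ node_emb[src]
    return new

node_emb = message_passing_step(node_emb)
# node_emb[2] now aggregates one-hop evidence from both atomic formulas;
# a real model would decode it against entity embeddings to rank answers.
```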
arXiv Detail & Related papers (2023-01-21T02:34:06Z) - A Survey On Few-shot Knowledge Graph Completion with Structural and Commonsense Knowledge [3.4012007729454807]
Few-shot KG completion (FKGC) requires the strengths of graph representation learning and few-shot learning.
This paper introduces FKGC challenges, commonly used KGs, and commonsense KGs (CKGs).
We then systematically categorize and summarize existing works in terms of the type of KGs and the methods.
arXiv Detail & Related papers (2023-01-03T16:00:09Z) - Multilingual Knowledge Graph Completion via Ensemble Knowledge Transfer [43.453915033312114]
Predicting missing facts in a knowledge graph (KG) is a crucial task in knowledge base construction and reasoning.
We propose KEnS, a novel framework for embedding learning and ensemble knowledge transfer across a number of language-specific KGs.
Experiments on five real-world language-specific KGs show that KEnS consistently improves state-of-the-art methods on KG completion.
arXiv Detail & Related papers (2020-10-07T04:54:03Z) - KACC: A Multi-task Benchmark for Knowledge Abstraction, Concretization and Completion [99.47414073164656]
A comprehensive knowledge graph (KG) contains an instance-level entity graph and an ontology-level concept graph.
The two-view KG provides a testbed for models to "simulate" humans' abilities on knowledge abstraction, concretization, and completion.
We propose a unified KG benchmark by improving existing benchmarks in terms of dataset scale, task coverage, and difficulty.
arXiv Detail & Related papers (2020-04-28T16:21:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.