Knowledge Graphs Querying
- URL: http://arxiv.org/abs/2305.14485v1
- Date: Tue, 23 May 2023 19:32:42 GMT
- Title: Knowledge Graphs Querying
- Authors: Arijit Khan
- Abstract summary: We aim to unite the different interdisciplinary topics and concepts that have been developed for KG querying.
Recent advances in KG and query embedding, multimodal KGs, and KG-QA come from the deep learning, IR, NLP, and computer vision domains.
- Score: 4.548471481431569
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge graphs (KGs) such as DBpedia, Freebase, YAGO, Wikidata, and NELL
were constructed to store large-scale, real-world facts as (subject, predicate,
object) triples -- that can also be modeled as a graph, where a node (a subject
or an object) represents an entity with attributes, and a directed edge (a
predicate) is a relationship between two entities. Querying KGs is critical in
web search, question answering (QA), semantic search, personal assistants, fact
checking, and recommendation. While significant progress has been made on KG
construction and curation, we have recently seen a surge of research on KG
querying and QA, thanks to deep learning. The objectives of our survey are
two-fold. First, research on KG querying has been conducted by several
communities, such as databases, data mining, semantic web, machine learning,
information retrieval, and natural language processing (NLP), with different
focus and terminologies, and on diverse topics ranging from graph
databases, query languages, join algorithms, and graph pattern matching to more
sophisticated KG embeddings and natural language questions (NLQs). We aim to
unite the different interdisciplinary topics and concepts that have been
developed for KG querying. Second, many recent advances in KG and query
embedding, multimodal KG, and KG-QA come from deep learning, IR, NLP, and
computer vision domains. We identify important challenges of KG querying that
have received less attention from graph databases and the DB community in general,
e.g., incomplete KGs, semantic matching, multimodal data, and NLQs. We conclude
by discussing interesting opportunities for the data management community, for
instance, KG as a unified data model and vector-based query processing.
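The abstract's triple-based data model can be made concrete with a small sketch: a KG stored as a list of (subject, predicate, object) triples, queried by matching a triple pattern in which `None` acts as a variable. The entity and relation names below are illustrative only, not drawn from DBpedia, Freebase, or any real KG.

```python
# A toy knowledge graph as (subject, predicate, object) triples.
# Each triple is a directed, labeled edge: subject --predicate--> object.
triples = [
    ("Berlin", "capitalOf", "Germany"),
    ("Germany", "locatedIn", "Europe"),
    ("Berlin", "locatedIn", "Germany"),
]

def match(pattern, kg):
    """Return all triples matching a single pattern; None marks a variable."""
    s, p, o = pattern
    return [
        (ts, tp, to)
        for ts, tp, to in kg
        if (s is None or s == ts)
        and (p is None or p == tp)
        and (o is None or o == to)
    ]

# Query: which entity is the capital of Germany?
print(match((None, "capitalOf", "Germany"), triples))
# [('Berlin', 'capitalOf', 'Germany')]
```

Real KG query engines (e.g., SPARQL endpoints) generalize this idea to conjunctions of such patterns, evaluated with the join algorithms the survey discusses.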
Related papers
- KG-TRICK: Unifying Textual and Relational Information Completion of Knowledge for Multilingual Knowledge Graphs [17.447946165398932]
We introduce KG-TRICK, a novel sequence-to-sequence framework that unifies the tasks of textual and relational information completion for multilingual knowledge graphs.
KG-TRICK demonstrates that: i) it is possible to unify the tasks of KGC and KGE into a single framework, and ii) combining textual information from multiple languages is beneficial to improve the completeness of a KG.
As part of our contributions, we also introduce WikiKGE10++, the largest manually-curated benchmark for textual information completion of KGs.
arXiv Detail & Related papers (2025-01-07T06:21:40Z)
- Ontology-grounded Automatic Knowledge Graph Construction by LLM under Wikidata schema [60.42231674887294]
We propose an ontology-grounded approach to Knowledge Graph (KG) construction using Large Language Models (LLMs) on a knowledge base.
We ground generation of KG with the authored ontology based on extracted relations to ensure consistency and interpretability.
Our work presents a promising direction for scalable KG construction pipeline with minimal human intervention, that yields high quality and human-interpretable KGs.
arXiv Detail & Related papers (2024-12-30T13:36:05Z)
- A Prompt-Based Knowledge Graph Foundation Model for Universal In-Context Reasoning [17.676185326247946]
We propose a prompt-based KG foundation model via in-context learning, namely KG-ICL, to achieve a universal reasoning ability.
To encode prompt graphs with the generalization ability to unseen entities and relations in queries, we first propose a unified tokenizer.
Then, we propose two message passing neural networks to perform prompt encoding and KG reasoning, respectively.
arXiv Detail & Related papers (2024-10-16T06:47:18Z)
- Retrieval-Augmented Language Model for Extreme Multi-Label Knowledge Graph Link Prediction [2.6749568255705656]
Extrapolation in large language models (LLMs) for open-ended inquiry encounters two pivotal issues.
Existing works attempt to tackle the problem by augmenting the input of a smaller language model with information from a knowledge graph.
We propose a new task, the extreme multi-label KG link prediction task, to enable a model to perform extrapolation with multiple responses.
arXiv Detail & Related papers (2024-05-21T10:10:56Z)
- Multi-hop Question Answering over Knowledge Graphs using Large Language Models [1.8130068086063336]
We evaluate the capability of large language models (LLMs) to answer questions over knowledge graphs that involve multiple hops.
We show that depending upon the size and nature of the KG we need different approaches to extract and feed the relevant information to an LLM.
arXiv Detail & Related papers (2024-04-30T03:31:03Z)
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both agent and KG in incomplete KGQA (IKGQA).
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph(KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z)
- Deep Bidirectional Language-Knowledge Graph Pretraining [159.9645181522436]
DRAGON is a self-supervised approach to pretraining a deeply joint language-knowledge foundation model from text and KG at scale.
Our model takes pairs of text segments and relevant KG subgraphs as input and bidirectionally fuses information from both modalities.
arXiv Detail & Related papers (2022-10-17T18:02:52Z)
- Reasoning over Multi-view Knowledge Graphs [59.99051368907095]
ROMA is a novel framework for answering logical queries over multi-view KGs.
It scales up to KGs of large sizes (e.g., millions of facts) and fine-granular views.
It generalizes to query structures and KG views that are unobserved during training.
arXiv Detail & Related papers (2022-09-27T21:32:20Z)
- QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering [122.84513233992422]
We propose a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs).
We show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning.
arXiv Detail & Related papers (2021-04-13T17:32:51Z)
- Knowledge Graphs and Knowledge Networks: The Story in Brief [0.1933681537640272]
Knowledge Graphs (KGs) represent noisy, raw real-world information in a structured form, capturing relationships between entities.
For dynamic real-world applications such as social networks, recommender systems, and computational biology, relational knowledge representation has emerged as a challenging research problem.
This article attempts to summarize the journey of KG for AI.
arXiv Detail & Related papers (2020-03-07T18:09:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.