Towards a GML-Enabled Knowledge Graph Platform
- URL: http://arxiv.org/abs/2303.02166v1
- Date: Fri, 3 Mar 2023 17:41:11 GMT
- Title: Towards a GML-Enabled Knowledge Graph Platform
- Authors: Hussein Abdallah, Essam Mansour
- Abstract summary: This vision paper proposes KGNet, an on-demand graph machine learning (GML) as a service on top of RDF engines.
KGNet automates the training of GML models on a KG by identifying a task-specific subgraph.
All trained models are accessible via a SPARQL-like query.
- Score: 0.5904265865319825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This vision paper proposes KGNet, an on-demand graph machine learning (GML)
as a service on top of RDF engines to support GML-enabled SPARQL queries. KGNet
automates the training of GML models on a KG by identifying a task-specific
subgraph. This helps reduce the task-irrelevant KG structure and properties for
better scalability and accuracy. While training a GML model on KG, KGNet
collects metadata of trained models in the form of an RDF graph called KGMeta,
which is interlinked with the relevant subgraphs in KG. Finally, all trained
models are accessible via a SPARQL-like query. We call it a GML-enabled query
and refer to it as SPARQLML. KGNet supports SPARQLML on top of existing RDF
engines as an interface for querying and inferencing over KGs using GML models.
The development of KGNet poses research opportunities in several areas,
including meta-sampling for identifying task-specific subgraphs, GML pipeline
automation with computational constraints, such as limited time and memory
budget, and SPARQLML query optimization. KGNet supports different GML tasks,
such as node classification, link prediction, and semantic entity matching. We
evaluated KGNet using two real KGs of different application domains. Compared
to training on the entire KG, KGNet significantly reduced training time and
memory usage while maintaining comparable or improved accuracy. The KGNet
source code is available for further study.
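The task-specific subgraph idea at the core of KGNet can be illustrated with a minimal sketch: starting from the task's labeled seed nodes, keep only the triples reachable within a few hops and drop the rest of the KG. This is an illustrative toy, not KGNet's actual meta-sampling algorithm; all entity and predicate names are hypothetical.

```python
from collections import deque

def task_subgraph(triples, seed_nodes, hops=2):
    """Extract the k-hop neighborhood of the task's seed nodes
    from a KG given as (subject, predicate, object) triples."""
    # Index triples by the nodes they touch for fast expansion.
    by_node = {}
    for s, p, o in triples:
        by_node.setdefault(s, []).append((s, p, o))
        by_node.setdefault(o, []).append((s, p, o))
    visited, kept = set(seed_nodes), set()
    frontier = deque((n, 0) for n in seed_nodes)
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue  # stop expanding at the hop budget
        for s, p, o in by_node.get(node, []):
            kept.add((s, p, o))
            for nxt in (s, o):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, depth + 1))
    return kept

kg = [
    ("paper1", "cites", "paper2"),
    ("paper2", "cites", "paper3"),
    ("paper3", "hasVenue", "VLDB"),
    ("author1", "wrote", "paper9"),  # irrelevant to the seed's 2-hop zone
]
sub = task_subgraph(kg, {"paper1"}, hops=2)
```

The task-irrelevant triple is pruned, which is the source of the training-time and memory savings the abstract describes.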
Related papers
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [90.30473970040362]
We propose a training-free method called Generate-on-Graph (GoG) that can generate new factual triples while exploring Knowledge Graphs (KGs).
Specifically, we propose a selecting-generating-answering framework that not only treats the LLM as an agent exploring the KG, but also as a KG itself, generating new facts based on the explored subgraph.
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- Task-Oriented GNNs Training on Large Knowledge Graphs for Accurate and Efficient Modeling [5.460112864687281]
This paper proposes KG-TOSA, an approach that automates task-oriented subgraph (TOSG) extraction for task-oriented HGNN training on a large Knowledge Graph (KG).
KG-TOSA helps state-of-the-art HGNN methods reduce training time and memory usage by up to 70% while improving model performance, e.g., accuracy and inference time.
arXiv Detail & Related papers (2024-03-09T01:17:26Z) - KG-Agent: An Efficient Autonomous Agent Framework for Complex Reasoning
over Knowledge Graph [134.8631016845467]
We propose an autonomous LLM-based agent framework, called KG-Agent.
In KG-Agent, we integrate an LLM, a multifunctional toolbox, a KG-based executor, and a knowledge memory.
To guarantee effectiveness, we leverage a programming language to formulate the multi-hop reasoning process over the KG.
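Formulating multi-hop reasoning as a short program over executor calls can be sketched as follows. This is a toy KG executor, not KG-Agent's actual toolbox; the KG and question are invented for illustration.

```python
def neighbors(kg, node, rel):
    """One KG-executor step: follow `rel` edges out of `node`."""
    return {t for h, r, t in kg if h == node and r == rel}

# Multi-hop question written as a short program over executor calls:
# "Which country is the director of Inception from?"
kg = [("Inception", "directedBy", "Nolan"),
      ("Nolan", "bornIn", "London"),
      ("London", "locatedIn", "UK")]

step1 = neighbors(kg, "Inception", "directedBy")                  # {'Nolan'}
step2 = {c for d in step1 for c in neighbors(kg, d, "bornIn")}    # {'London'}
answer = {c for p in step2 for c in neighbors(kg, p, "locatedIn")}
```

In KG-Agent the LLM would emit such a program; here the hops are hand-written to show the composition.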
arXiv Detail & Related papers (2024-02-17T02:07:49Z)
- ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism that imitates a GNN to perform structured reasoning.
We also adopt an adaptation tuning strategy to adapt the model parameters using 20,000 subgraphs with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
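The masking idea behind subgraph-aware self-attention can be sketched minimally (this is not ReasoningLM's actual implementation): raw attention scores are blocked wherever two positions are not connected in the subgraph, so softmax distributes weight only over structurally linked tokens.

```python
import numpy as np

def subgraph_attention(scores, adjacency):
    """Mask raw attention scores so each position only attends to
    positions connected in the subgraph, then apply row-wise softmax."""
    masked = np.where(adjacency, scores, -1e9)  # block disconnected pairs
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.zeros((3, 3))           # uniform raw scores for 3 nodes
adj = np.array([[1, 1, 0],
                [1, 1, 0],
                [0, 0, 1]], bool)   # node 2 is disconnected from 0 and 1
attn = subgraph_attention(scores, adj)
```

Each row of `attn` sums to 1, with zero weight on disconnected pairs, which is how the plain Transformer attention is made to behave like GNN message passing over the subgraph.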
arXiv Detail & Related papers (2023-12-30T07:18:54Z)
- KG-GPT: A General Framework for Reasoning on Knowledge Graphs Using Large Language Models [18.20425100517317]
We propose KG-GPT, a framework leveraging large language models for tasks employing knowledge graphs.
KG-GPT comprises three steps: Sentence Segmentation, Graph Retrieval, and Inference, aimed respectively at partitioning sentences, retrieving relevant graph components, and deriving logical conclusions.
We evaluate KG-GPT using KG-based fact verification and KGQA benchmarks, with the model showing competitive and robust performance, even outperforming several fully-supervised models.
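The three-step pipeline can be sketched with toy stand-ins. KG-GPT prompts an LLM at each step; the simple rules below are illustrative only, and the KG and claim are invented.

```python
def segment(claim):
    # Step 1: partition the input into clause-level segments
    # (KG-GPT uses an LLM here; a plain split serves as a stand-in).
    return [part.strip() for part in claim.split(" and ")]

def retrieve(text, kg):
    # Step 2: pull triples whose entities are mentioned in the text.
    return [(s, p, o) for s, p, o in kg if s in text or o in text]

def infer(segments, kg):
    # Step 3: derive a verdict; here, every segment must have support.
    return all(retrieve(seg, kg) for seg in segments)

kg = [("Ottawa", "capitalOf", "Canada"),
      ("Canada", "currency", "Canadian dollar")]
claim = "Ottawa is the capital of Canada and Canada uses the Canadian dollar"
verdict = infer(segment(claim), kg)
```

In the real framework the Inference step reasons over the retrieved evidence rather than merely checking that it exists, but the data flow between the three stages is the same.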
arXiv Detail & Related papers (2023-10-17T12:51:35Z)
- PyGraft: Configurable Generation of Synthetic Schemas and Knowledge Graphs at Your Fingertips [3.5923669681271257]
PyGraft is a Python-based tool that generates customized, domain-agnostic schemas and KGs.
We aim to empower the generation of a more diverse array of KGs for benchmarking novel approaches in areas such as graph-based machine learning (ML).
In ML, this should foster a more holistic evaluation of model performance and generalization capability, going beyond the limited collection of available benchmarks.
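A toy illustration of what configurable, schema-driven KG generation involves (this is not PyGraft's actual API; the function name and parameters are hypothetical):

```python
import random

def generate_kg(classes, relations, n_entities=50, n_triples=200, seed=0):
    """Toy generator: type each entity with a random class, then sample
    random (head, relation, tail) triples from the given vocabulary."""
    rng = random.Random(seed)  # seeded for reproducible benchmarks
    entities = [f"e{i}" for i in range(n_entities)]
    typing = {e: rng.choice(classes) for e in entities}
    triples = {(rng.choice(entities), rng.choice(relations), rng.choice(entities))
               for _ in range(n_triples)}
    return typing, triples

typing, triples = generate_kg(["Person", "Place"], ["knows", "locatedIn"])
```

A real generator like PyGraft additionally enforces schema constraints (domains, ranges, class hierarchies), which this sketch omits.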
arXiv Detail & Related papers (2023-09-07T13:00:09Z)
- A Universal Question-Answering Platform for Knowledge Graphs [7.2676028986202]
We propose KGQAn, a universal QA system that does not need to be tailored to each target KG.
KGQAn is easily deployed and outperforms the state of the art by a large margin in both answer quality and processing time.
arXiv Detail & Related papers (2023-03-01T15:35:32Z)
- KGxBoard: Explainable and Interactive Leaderboard for Evaluation of Knowledge Graph Completion Models [76.01814380927507]
KGxBoard is an interactive framework for performing fine-grained evaluation on meaningful subsets of the data.
In our experiments, we use KGxBoard to highlight findings that would have been impossible to detect with standard averaged single-score metrics.
arXiv Detail & Related papers (2022-08-23T15:11:45Z)
- MEKER: Memory Efficient Knowledge Embedding Representation for Link Prediction and Question Answering [65.62309538202771]
Knowledge Graphs (KGs) are symbolically structured stores of facts.
KG embeddings provide concise representations used in NLP tasks that require implicit knowledge about the real world.
We propose a memory-efficient KG embedding model, which yields SOTA-comparable performance on link prediction tasks and KG-based Question Answering.
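As a rough illustration of embedding-based link prediction in general (not MEKER's specific decomposition), a CP/DistMult-style trilinear score judges triple plausibility from learned vectors; embeddings here are random stand-ins for trained ones, and all entity names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
entities = {"Paris": 0, "France": 1, "Tokyo": 2}
relations = {"capitalOf": 0}
E = rng.normal(size=(len(entities), dim))   # entity embeddings
R = rng.normal(size=(len(relations), dim))  # relation embeddings

def score(h, r, t):
    """Trilinear score <h, r, t>: higher means the triple is
    judged more plausible by the embedding model."""
    return float(np.sum(E[entities[h]] * R[relations[r]] * E[entities[t]]))

s = score("Paris", "capitalOf", "France")
```

Memory efficiency in models of this family comes from keeping `dim` small and, as in MEKER, from how the factorized representation is computed during training.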
arXiv Detail & Related papers (2022-04-22T10:47:03Z)
- Sequence-to-Sequence Knowledge Graph Completion and Question Answering [8.207403859762044]
We show that an off-the-shelf encoder-decoder Transformer model can serve as a scalable and versatile KGE model.
We achieve this by posing KG link prediction as a sequence-to-sequence task, replacing the triple-scoring approach of prior KGE methods with autoregressive decoding.
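The reformulation amounts to plain text formatting: the query (head, relation, ?) becomes the encoder input, and the tail entity is decoded directly as text. The exact prompt format below is hypothetical, not the paper's.

```python
def verbalize_query(head, relation):
    """Turn a link-prediction query (head, relation, ?) into the
    text-to-text form consumed by a seq2seq model."""
    return f"predict tail: {head} | {relation}"

def parse_prediction(decoded):
    """The decoder emits the tail entity itself as a token sequence."""
    return decoded.strip()

src = verbalize_query("barack_obama", "born_in")
# An encoder-decoder model would decode the answer autoregressively;
# here we parse a hand-written stand-in for the decoder output.
tail = parse_prediction(" honolulu ")
```

Because the model generates the answer instead of scoring every candidate triple, inference cost no longer grows with the number of entities in the KG.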
arXiv Detail & Related papers (2022-03-19T13:01:49Z)
- Toward Subgraph-Guided Knowledge Graph Question Generation with Graph Neural Networks [53.58077686470096]
Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers.
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
arXiv Detail & Related papers (2020-04-13T15:43:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.