Knowledge Graph Curation: A Practical Framework
- URL: http://arxiv.org/abs/2208.08130v1
- Date: Wed, 17 Aug 2022 07:55:28 GMT
- Title: Knowledge Graph Curation: A Practical Framework
- Authors: Elwin Huaman and Dieter Fensel
- Abstract summary: We propose a practical knowledge graph curation framework for improving the quality of KGs.
First, we define a set of quality metrics for assessing the status of KGs.
Second, we describe the verification and validation of KGs as cleaning tasks.
Third, we present duplicate detection and knowledge fusion strategies for enriching KGs.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Knowledge Graphs (KGs) have been shown to be very important for
applications such as personal assistants, question-answering systems, and
search engines. Therefore, it is crucial to ensure their high quality. However,
KGs inevitably contain errors, duplicates, and missing values, which may hinder
their adoption and utility in business applications if they are not curated;
for example, low-quality KGs produce low-quality applications built on top of
them. In this vision paper, we propose a practical knowledge graph curation
framework for improving the quality of KGs. First, we define a set of quality
metrics for assessing the status of KGs. Second, we describe the verification
and validation of KGs as cleaning tasks. Third, we present duplicate detection
and knowledge fusion strategies for enriching KGs. Furthermore, we give
insights and directions toward a better architecture for curating KGs.
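The three stages of the framework (quality assessment, cleaning, and enrichment via duplicate detection and fusion) can be sketched in code. The following is a minimal, hypothetical illustration of those stages over dictionary-based entities; the metric, validators, matching rule, and example data are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of the three curation stages: (1) assessment via a
# simple completeness metric, (2) cleaning by validating property values,
# (3) enrichment via naive duplicate detection and knowledge fusion.

def completeness(entity, required_props):
    """Assessment: fraction of required properties with a non-empty value."""
    present = sum(1 for p in required_props if entity.get(p))
    return present / len(required_props)

def clean(entity, validators):
    """Cleaning: drop property values that fail their validator."""
    return {p: v for p, v in entity.items()
            if p not in validators or validators[p](v)}

def is_duplicate(a, b):
    """Duplicate detection: naive match on normalized labels."""
    return a.get("label", "").strip().lower() == b.get("label", "").strip().lower()

def fuse(a, b):
    """Knowledge fusion: merge duplicates, preferring existing non-empty values."""
    merged = dict(a)
    for p, v in b.items():
        if not merged.get(p):
            merged[p] = v
    return merged

if __name__ == "__main__":
    e1 = {"label": "Innsbruck", "country": "Austria", "population": ""}
    e2 = {"label": " innsbruck ", "population": "130000"}

    assert completeness(e1, ["label", "country", "population"]) == 2 / 3
    assert clean({"label": "X", "population": "unknown"},
                 {"population": str.isdigit}) == {"label": "X"}
    assert is_duplicate(e1, e2)
    fused = fuse(e1, e2)
    print(fused["label"], fused["country"], fused["population"])
```

A real pipeline would replace each stage with the paper's proposed machinery (e.g., richer quality metrics and schema-level validation), but the control flow -- assess, clean, then detect duplicates and fuse -- follows the order the abstract describes.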
Related papers
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
arXiv Detail & Related papers (2024-10-24T04:01:40Z)
- Distill-SynthKG: Distilling Knowledge Graph Synthesis Workflow for Improved Coverage and Efficiency [59.6772484292295]
Knowledge graphs (KGs) generated by large language models (LLMs) are increasingly valuable for Retrieval-Augmented Generation (RAG) applications.
Existing KG extraction methods rely on prompt-based approaches, which are inefficient for processing large-scale corpora.
We propose SynthKG, a multi-step, document-level synthesis KG workflow based on LLMs.
We also design a novel graph-based retrieval framework for RAG.
arXiv Detail & Related papers (2024-10-22T00:47:54Z) - Context Graph [8.02985792541121]
We present a context graph reasoning (CGR$^3$) paradigm that leverages large language models (LLMs) to retrieve candidate entities and related contexts.
Our experimental results demonstrate that CGR$^3$ significantly improves performance on KG completion (KGC) and KG question answering (KGQA) tasks.
arXiv Detail & Related papers (2024-06-17T02:59:19Z) - Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats LLM as both Agent and KG in IKGQA.
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- Construction of Knowledge Graphs: State and Challenges [2.245333517888782]
We discuss the main graph models for knowledge graphs (KGs) and introduce the major requirements for future KG construction pipelines.
Next, we provide an overview of the necessary steps to build high-quality KGs, including cross-cutting topics such as metadata management.
We evaluate the state of the art of KG construction w.r.t. the introduced requirements for specific popular KGs, as well as some recent tools and strategies for KG construction.
arXiv Detail & Related papers (2023-02-22T17:26:03Z)
- Knowledge Graph Quality Evaluation under Incomplete Information [9.48089663504665]
We propose a knowledge graph quality evaluation framework under incomplete information (QEII).
The quality evaluation task is transformed into an adversarial Q&A game between two KGs.
During the evaluation process, no raw data is exposed, which ensures information protection.
arXiv Detail & Related papers (2022-12-02T06:12:10Z)
- Collaborative Knowledge Graph Fusion by Exploiting the Open Corpus [59.20235923987045]
It is challenging to enrich a Knowledge Graph with newly harvested triples while maintaining the quality of the knowledge representation.
This paper proposes a system to refine a KG using information harvested from an additional corpus.
arXiv Detail & Related papers (2022-06-15T12:16:10Z)
- Identify, Align, and Integrate: Matching Knowledge Graphs to Commonsense Reasoning Tasks [81.03233931066009]
It is critical to select a knowledge graph (KG) that is well-aligned with the given task's objective.
We show an approach to assess how well a candidate KG can correctly identify and accurately fill in gaps of reasoning for a task.
We show this KG-to-task match in 3 phases: knowledge-task identification, knowledge-task alignment, and knowledge-task integration.
arXiv Detail & Related papers (2021-04-20T18:23:45Z)
- QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering [122.84513233992422]
We propose a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs).
We show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning.
arXiv Detail & Related papers (2021-04-13T17:32:51Z)
- Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation [42.407209719347286]
Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks.
We show that, through a reinforcement learning policy, one can produce deceptively perturbed KGs.
Our findings raise doubts about KG-augmented models' ability to reason about KG information and give sensible explanations.
arXiv Detail & Related papers (2020-10-24T11:04:45Z)
- IterefinE: Iterative KG Refinement Embeddings using Symbolic Knowledge [10.689559910656474]
Knowledge Graphs (KGs) extracted from text sources are often noisy and lead to poor performance in downstream application tasks such as KG-based question answering.
Most successful techniques for KG refinement make use of either inference rules over symbolic knowledge or KG embeddings.
In this paper, we present a KG refinement framework called IterefinE, which iteratively combines the two techniques.
arXiv Detail & Related papers (2020-06-03T14:05:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.