IterefinE: Iterative KG Refinement Embeddings using Symbolic Knowledge
- URL: http://arxiv.org/abs/2006.04509v1
- Date: Wed, 3 Jun 2020 14:05:54 GMT
- Title: IterefinE: Iterative KG Refinement Embeddings using Symbolic Knowledge
- Authors: Siddhant Arora, Srikanta Bedathur, Maya Ramanath, Deepak Sharma
- Abstract summary: Knowledge Graphs (KGs) extracted from text sources are often noisy and lead to poor performance in downstream application tasks such as KG-based question answering.
Most successful techniques for KG refinement make use of inference rules and reasoning over ontologies.
In this paper, we present a KG refinement framework called IterefinE which iteratively combines the two techniques.
- Score: 10.689559910656474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graphs (KGs) extracted from text sources are often noisy and lead
to poor performance in downstream application tasks such as KG-based question
answering. While much of the recent activity is focused on addressing the
sparsity of KGs by using embeddings to infer new facts, the issue of
cleaning up noise in KGs through the KG refinement task is not as actively
studied. Most successful techniques for KG refinement make use of inference
rules and reasoning over ontologies. Barring a few exceptions, embeddings do
not make use of ontological information, and their performance in the KG refinement
task is not well understood. In this paper, we present a KG refinement
framework called IterefinE which iteratively combines the two techniques: one
that uses ontological information and inference rules, PSL-KGI, and KG
embeddings such as ComplEx and ConvE, which do not. As a result, IterefinE is
able to exploit not only the ontological information to improve the quality of
predictions, but also the power of KG embeddings which (implicitly) perform
longer chains of reasoning. The IterefinE framework operates in a co-training
mode and produces an explicit type-supervised embedding of the refined KG from
PSL-KGI, which we call TypeE-X. Our experiments over a range of KG benchmarks
show that the embeddings that we produce are able to reject noisy facts from KG
and at the same time infer higher quality new facts resulting in up to 9%
improvement in the overall weighted F1 score.
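The iterative co-training loop described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration based only on the abstract: `psl_score` stands in for PSL-KGI's rule-and-ontology-based confidence and `embed_score` stands in for a TypeE-X-style embedding score; both are illustrative stand-ins, not the paper's actual components.

```python
# Sketch of IterefinE-style refinement: alternate between (1) a symbolic
# step that rejects low-confidence facts and (2) an embedding step that
# proposes new high-confidence facts, repeated for a few rounds.
def iterefine(triples, psl_score, embed_score, candidates, n_iters=3):
    refined = set(triples)
    for _ in range(n_iters):
        # Step 1: reject facts that the rule/ontology component scores low.
        refined = {t for t in refined if psl_score(t) >= 0.5}
        # Step 2: the embedding, trained on the cleaned KG, proposes new
        # facts via (implicitly) longer chains of reasoning.
        refined |= {t for t in candidates if embed_score(t, refined) >= 0.9}
    return refined

# Toy run with hand-written scorers: one noisy triple is rejected,
# one candidate fact is inferred.
kg = {("alice", "bornIn", "paris"), ("paris", "capitalOf", "mars")}
cands = {("alice", "citizenOf", "france")}
psl = lambda t: 0.1 if t[2] == "mars" else 0.9
emb = lambda t, refined: 0.95
print(sorted(iterefine(kg, psl, emb, cands)))
```

The toy scorers make the two failure modes visible: the symbolic step removes the implausible `("paris", "capitalOf", "mars")` triple, while the embedding step adds the plausible citizenship fact.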
Related papers
- KG-FIT: Knowledge Graph Fine-Tuning Upon Open-World Knowledge [63.19837262782962]
Knowledge Graph Embedding (KGE) techniques are crucial in learning compact representations of entities and relations within a knowledge graph.
This study introduces KG-FIT, which builds a semantically coherent hierarchical structure of entity clusters.
Experiments on the benchmark datasets FB15K-237, YAGO3-10, and PrimeKG demonstrate the superiority of KG-FIT over state-of-the-art pre-trained language model-based methods.
arXiv Detail & Related papers (2024-05-26T03:04:26Z)
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [90.30473970040362]
We propose a training-free method called Generate-on-Graph (GoG) that can generate new factual triples while exploring Knowledge Graphs (KGs).
Specifically, we propose a selecting-generating-answering framework, which not only treats the LLM as an agent that explores KGs, but also treats it as a KG that generates new facts based on the explored subgraph.
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- KG-GPT: A General Framework for Reasoning on Knowledge Graphs Using Large Language Models [18.20425100517317]
We propose KG-GPT, a framework leveraging large language models for tasks employing knowledge graphs.
KG-GPT comprises three steps: Sentence Segmentation, Graph Retrieval, and Inference, aimed respectively at partitioning sentences, retrieving relevant graph components, and deriving logical conclusions.
We evaluate KG-GPT using KG-based fact verification and KGQA benchmarks, with the model showing competitive and robust performance, even outperforming several fully-supervised models.
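The three-step pipeline above can be sketched with toy heuristics. Everything here (the splitting rule, the substring-based retrieval, the all-parts-supported inference) is an illustrative assumption, not the paper's implementation, which uses an LLM at each stage.

```python
# Toy sketch of a KG-GPT-style pipeline for fact verification:
# segment a claim, retrieve matching triples, then infer a verdict.
def segment(claim):
    # 1. Sentence segmentation: split a compound claim into atomic parts.
    return [p.strip() for p in claim.split(" and ")]

def retrieve(kg, part):
    # 2. Graph retrieval: keep triples whose entities appear in the part.
    return [t for t in kg if t[0] in part or t[2] in part]

def infer(parts, kg):
    # 3. Inference: the claim is supported iff every part matches a triple.
    return all(retrieve(kg, p) for p in parts)

kg = [("berlin", "capitalOf", "germany"), ("germany", "memberOf", "eu")]
claim = "berlin is in germany and germany is in eu"
print(infer(segment(claim), kg))  # → True
```

A claim mentioning entities absent from the graph (e.g. "rome is in italy") retrieves no triples and is rejected, mirroring how unsupported sub-claims fail verification.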
arXiv Detail & Related papers (2023-10-17T12:51:35Z)
- Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering [16.434098552925427]
We study the KG-augmented language model approach for solving the knowledge graph question answering (KGQA) task.
We propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements.
arXiv Detail & Related papers (2023-09-20T10:42:08Z)
- Knowledge Graph Curation: A Practical Framework [0.0]
We propose a practical knowledge graph curation framework for improving the quality of KGs.
First, we define a set of quality metrics for assessing the status of KGs.
Second, we describe the verification and validation of KGs as cleaning tasks.
Third, we present duplicate detection and knowledge fusion strategies for enriching KGs.
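The three curation steps listed above can be illustrated with simple stand-ins. The metric, the schema check, and the case-insensitive duplicate matching below are assumptions for demonstration only, not the framework proposed in the paper.

```python
# Illustrative curation pass: (1) a quality metric, (2) schema-based
# validation, (3) duplicate detection with fusion.
def completeness(kg, required_relations):
    # 1. Quality metric: fraction of required relations present in the KG.
    present = {r for _, r, _ in kg}
    return len(present & required_relations) / len(required_relations)

def validate(kg, schema):
    # 2. Validation: drop triples whose relation is outside the schema.
    return [t for t in kg if t[1] in schema]

def fuse_duplicates(kg):
    # 3. Duplicate detection/fusion: merge triples that differ only in
    # entity casing, keeping the first occurrence.
    seen, out = set(), []
    for s, r, o in kg:
        key = (s.lower(), r, o.lower())
        if key not in seen:
            seen.add(key)
            out.append((s, r, o))
    return out

kg = [("Berlin", "capitalOf", "Germany"),
      ("berlin", "capitalOf", "germany"),   # duplicate up to casing
      ("Berlin", "hasMood", "happy")]       # relation outside the schema
schema = {"capitalOf", "locatedIn"}
curated = fuse_duplicates(validate(kg, schema))
print(curated, completeness(curated, schema))
```

Running the pass on the toy KG drops the off-schema triple, fuses the casing duplicates into one fact, and reports that only half of the required relations are covered.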
arXiv Detail & Related papers (2022-08-17T07:55:28Z)
- Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It incorporates high-order reasoning into a graph convolutional network, namely HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
- Collaborative Knowledge Graph Fusion by Exploiting the Open Corpus [59.20235923987045]
It is challenging to enrich a Knowledge Graph with newly harvested triples while maintaining the quality of the knowledge representation.
This paper proposes a system to refine a KG using information harvested from an additional corpus.
arXiv Detail & Related papers (2022-06-15T12:16:10Z)
- MEKER: Memory Efficient Knowledge Embedding Representation for Link Prediction and Question Answering [65.62309538202771]
Knowledge Graphs (KGs) are symbolically structured storages of facts.
KG embeddings provide concise representations for NLP tasks that require implicit information about the real world.
We propose a memory-efficient KG embedding model, which yields SOTA-comparable performance on link prediction tasks and KG-based Question Answering.
arXiv Detail & Related papers (2022-04-22T10:47:03Z)
- Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation [42.407209719347286]
Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks.
We show that, through a reinforcement learning policy, one can produce deceptively perturbed KGs.
Our findings raise doubts about KG-augmented models' ability to reason about KG information and give sensible explanations.
arXiv Detail & Related papers (2020-10-24T11:04:45Z)
- Multilingual Knowledge Graph Completion via Ensemble Knowledge Transfer [43.453915033312114]
Predicting missing facts in a knowledge graph (KG) is a crucial task in knowledge base construction and reasoning.
We propose KEnS, a novel framework for embedding learning and ensemble knowledge transfer across a number of language-specific KGs.
Experiments on five real-world language-specific KGs show that KEnS consistently improves state-of-the-art methods on KG completion.
arXiv Detail & Related papers (2020-10-07T04:54:03Z)
- Toward Subgraph-Guided Knowledge Graph Question Generation with Graph Neural Networks [53.58077686470096]
Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers.
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
arXiv Detail & Related papers (2020-04-13T15:43:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.