IterefinE: Iterative KG Refinement Embeddings using Symbolic Knowledge
- URL: http://arxiv.org/abs/2006.04509v1
- Date: Wed, 3 Jun 2020 14:05:54 GMT
- Title: IterefinE: Iterative KG Refinement Embeddings using Symbolic Knowledge
- Authors: Siddhant Arora, Srikanta Bedathur, Maya Ramanath, Deepak Sharma
- Abstract summary: Knowledge Graphs (KGs) extracted from text sources are often noisy and lead to poor performance in downstream application tasks such as KG-based question answering.
Most successful techniques for KG refinement make use of inference rules and reasoning over ontologies.
In this paper, we present a KG refinement framework called IterefinE which iteratively combines the two techniques.
- Score: 10.689559910656474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graphs (KGs) extracted from text sources are often noisy and lead
to poor performance in downstream application tasks such as KG-based question
answering. While much of the recent activity is focused on addressing the
sparsity of KGs by using embeddings to infer new facts, the issue of
cleaning up noise in KGs through the KG refinement task is not as actively
studied. Most successful techniques for KG refinement make use of inference
rules and reasoning over ontologies. Barring a few exceptions, embeddings do
not make use of ontological information, and their performance on the KG refinement
task is not well understood. In this paper, we present a KG refinement
framework called IterefinE, which iteratively combines the two techniques: one
that uses ontological information and inference rules (PSL-KGI), and KG
embeddings such as ComplEx and ConvE, which do not. As a result, IterefinE is
able to exploit not only the ontological information to improve the quality of
predictions, but also the power of KG embeddings which (implicitly) perform
longer chains of reasoning. The IterefinE framework operates in a co-training
mode and results in an explicit type-supervised embedding of the refined KG from
PSL-KGI, which we call TypeE-X. Our experiments over a range of KG benchmarks
show that the embeddings we produce are able to reject noisy facts from the KG
and at the same time infer higher-quality new facts, resulting in up to a 9%
improvement in overall weighted F1 score.
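The co-training loop described above alternates between a rule-based refiner and an embedding model, each filtering the KG for the other. The following is a minimal, illustrative sketch of that alternation; the scoring functions, type dictionary, and threshold are stand-ins invented for this example, not the paper's actual PSL-KGI or ComplEx/ConvE implementations.

```python
# Toy sketch of IterefinE-style iterative co-training between a rule-based
# scorer (standing in for PSL-KGI) and an embedding scorer (standing in for
# ComplEx/ConvE). All names and thresholds here are illustrative assumptions.

def rule_score(triple, type_of):
    """Toy ontological check: e.g. 'bornIn' must link a Person to a City."""
    h, r, t = triple
    domains = {"bornIn": ("Person", "City"), "capitalOf": ("City", "Country")}
    if r not in domains:
        return 0.5  # no rule applies; stay neutral
    dom, rng = domains[r]
    return 1.0 if (type_of.get(h) == dom and type_of.get(t) == rng) else 0.0

def embedding_score(triple, support):
    """Toy stand-in for an embedding model: score by precomputed support."""
    return support.get(triple, 0.0)

def iterefine(candidates, type_of, support, rounds=2, keep=0.5):
    """Alternate the two scorers, keeping triples both find plausible."""
    kg = set(candidates)
    for _ in range(rounds):
        # Step 1: rule-based filtering over the current KG
        kg = {t for t in kg if rule_score(t, type_of) >= keep}
        # Step 2: embedding-based filtering over the rule-refined KG
        kg = {t for t in kg if embedding_score(t, support) >= keep}
    return kg

types = {"alice": "Person", "paris": "City", "france": "Country"}
support = {("alice", "bornIn", "paris"): 0.9,
           ("paris", "capitalOf", "france"): 0.8,
           ("alice", "bornIn", "france"): 0.7}  # type-violating: rules reject it
refined = iterefine(list(support), types, support)
```

In this sketch the type-violating triple survives the embedding scorer but is rejected by the ontological rules, illustrating how the two components complement each other; in the actual framework, each round also retrains the embeddings on the rule-refined KG.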
Related papers
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
arXiv Detail & Related papers (2024-10-24T04:01:40Z)
- Distill-SynthKG: Distilling Knowledge Graph Synthesis Workflow for Improved Coverage and Efficiency [59.6772484292295]
Knowledge graphs (KGs) generated by large language models (LLMs) are increasingly valuable for Retrieval-Augmented Generation (RAG) applications.
Existing KG extraction methods rely on prompt-based approaches, which are inefficient for processing large-scale corpora.
We propose SynthKG, a multi-step, document-level synthesis KG workflow based on LLMs.
We also design a novel graph-based retrieval framework for RAG.
arXiv Detail & Related papers (2024-10-22T00:47:54Z)
- Graph-constrained Reasoning: Faithful Reasoning on Knowledge Graphs with Large Language Models [83.28737898989694]
Large language models (LLMs) struggle with faithful reasoning due to knowledge gaps and hallucinations.
We introduce graph-constrained reasoning (GCR), a novel framework that bridges structured knowledge in KGs with unstructured reasoning in LLMs.
GCR achieves state-of-the-art performance and exhibits strong zero-shot generalizability to unseen KGs without additional training.
arXiv Detail & Related papers (2024-10-16T22:55:17Z)
- Learning Rules from KGs Guided by Language Models [48.858741745144044]
Rule learning methods can be applied to predict potentially missing facts.
Ranking of rules is especially challenging over highly incomplete or biased KGs.
With the recent rise of Language Models (LMs) several works have claimed that LMs can be used as alternative means for KG completion.
arXiv Detail & Related papers (2024-09-12T09:27:36Z)
- KG-GPT: A General Framework for Reasoning on Knowledge Graphs Using Large Language Models [18.20425100517317]
We propose KG-GPT, a framework leveraging large language models for tasks employing knowledge graphs.
KG-GPT comprises three steps: Sentence Segmentation, Graph Retrieval, and Inference, each aimed at partitioning sentences, retrieving relevant graph components, and deriving logical conclusions.
We evaluate KG-GPT using KG-based fact verification and KGQA benchmarks, with the model showing competitive and robust performance, even outperforming several fully-supervised models.
arXiv Detail & Related papers (2023-10-17T12:51:35Z)
- Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering [16.434098552925427]
We study the KG-augmented language model approach for solving the knowledge graph question answering (KGQA) task.
We propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements.
arXiv Detail & Related papers (2023-09-20T10:42:08Z)
- Knowledge Graph Curation: A Practical Framework [0.0]
We propose a practical knowledge graph curation framework for improving the quality of KGs.
First, we define a set of quality metrics for assessing the status of KGs.
Second, we describe the verification and validation of KGs as cleaning tasks.
Third, we present duplicate detection and knowledge fusion strategies for enriching KGs.
arXiv Detail & Related papers (2022-08-17T07:55:28Z)
- Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It combines high-order reasoning into a graph convolutional network, namely HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
- Collaborative Knowledge Graph Fusion by Exploiting the Open Corpus [59.20235923987045]
It is challenging to enrich a Knowledge Graph with newly harvested triples while maintaining the quality of the knowledge representation.
This paper proposes a system to refine a KG using information harvested from an additional corpus.
arXiv Detail & Related papers (2022-06-15T12:16:10Z)
- Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation [42.407209719347286]
Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks.
We show that, through a reinforcement learning policy, one can produce deceptively perturbed KGs.
Our findings raise doubts about KG-augmented models' ability to reason about KG information and give sensible explanations.
arXiv Detail & Related papers (2020-10-24T11:04:45Z)
- Multilingual Knowledge Graph Completion via Ensemble Knowledge Transfer [43.453915033312114]
Predicting missing facts in a knowledge graph (KG) is a crucial task in knowledge base construction and reasoning.
We propose KEnS, a novel framework for embedding learning and ensemble knowledge transfer across a number of language-specific KGs.
Experiments on five real-world language-specific KGs show that KEnS consistently improves state-of-the-art methods on KG completion.
arXiv Detail & Related papers (2020-10-07T04:54:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.