SalKG: Learning From Knowledge Graph Explanations for Commonsense
Reasoning
- URL: http://arxiv.org/abs/2104.08793v1
- Date: Sun, 18 Apr 2021 09:59:46 GMT
- Title: SalKG: Learning From Knowledge Graph Explanations for Commonsense
Reasoning
- Authors: Aaron Chan, Soumya Sanyal, Boyuan Long, Jiashu Xu, Tanishq Gupta,
Xiang Ren
- Abstract summary: Augmenting language models with knowledge graphs (KGs) has achieved success on various commonsense reasoning tasks.
We propose SalKG, a framework for learning from KG explanations of both coarse (Is the KG salient?) and fine (Which parts of the KG are salient?) granularity.
We find that SalKG's training process can consistently improve model performance.
- Score: 29.148731802458983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Augmenting pre-trained language models with knowledge graphs (KGs) has
achieved success on various commonsense reasoning tasks. Although some works
have attempted to explain the behavior of such KG-augmented models by
indicating which KG inputs are salient (i.e., important for the model's
prediction), it is not always clear how these explanations should be used to
make the model better. In this paper, we explore whether KG explanations can be
used as supervision for teaching these KG-augmented models how to filter out
unhelpful KG information. To this end, we propose SalKG, a simple framework for
learning from KG explanations of both coarse (Is the KG salient?) and fine
(Which parts of the KG are salient?) granularity. Given the explanations
generated from a task's training set, SalKG trains KG-augmented models to solve
the task by focusing on KG information highlighted by the explanations as
salient. Across two popular commonsense QA benchmarks and three KG-augmented
models, we find that SalKG's training process can consistently improve model
performance.
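As a concrete illustration of the coarse-granularity idea, here is a minimal PyTorch-style sketch, assuming the setup suggested by the abstract: a saliency gate blends the predictions of a KG-augmented model and a KG-free model, and is supervised by the coarse explanation label (is the KG salient for this example?). This is a hypothetical reconstruction, not the authors' code; all function and tensor names are illustrative.

```python
# Minimal sketch of SalKG-style coarse saliency training (hypothetical
# reconstruction from the abstract, not the authors' code).
import torch
import torch.nn.functional as F

def coarse_salkg_loss(logits_with_kg, logits_no_kg, saliency_logit,
                      coarse_label, answer_label):
    """Blend KG-augmented and KG-free predictions via a learned saliency gate.

    coarse_label is 1 when the explanation method marked the KG as salient
    for this training example, else 0.
    """
    p_salient = torch.sigmoid(saliency_logit).unsqueeze(-1)      # (B, 1)
    probs = (p_salient * F.softmax(logits_with_kg, dim=-1)
             + (1.0 - p_salient) * F.softmax(logits_no_kg, dim=-1))
    task_loss = F.nll_loss(torch.log(probs + 1e-12), answer_label)
    # Supervise the gate with the coarse explanation label.
    gate_loss = F.binary_cross_entropy_with_logits(saliency_logit,
                                                   coarse_label.float())
    return task_loss + gate_loss

# Example shapes: batch of 4 questions, 5 answer choices each.
loss = coarse_salkg_loss(torch.randn(4, 5), torch.randn(4, 5),
                         torch.randn(4), torch.randint(0, 2, (4,)),
                         torch.randint(0, 5, (4,)))
```

A fine-granularity variant would analogously push the model's attention over KG units (nodes or paths) toward the parts the explanations mark as salient, e.g., with a divergence term between the attention distribution and the fine saliency targets.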
Related papers
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
arXiv Detail & Related papers (2024-10-24T04:01:40Z)
- Graph-constrained Reasoning: Faithful Reasoning on Knowledge Graphs with Large Language Models [83.28737898989694]
Large language models (LLMs) struggle with faithful reasoning due to knowledge gaps and hallucinations.
We introduce graph-constrained reasoning (GCR), a novel framework that bridges structured knowledge in KGs with unstructured reasoning in LLMs.
GCR achieves state-of-the-art performance and exhibits strong zero-shot generalizability to unseen KGs without additional training.
arXiv Detail & Related papers (2024-10-16T22:55:17Z)
- Context Graph [8.02985792541121]
We present a context graph reasoning (CGR^3) paradigm that leverages large language models (LLMs) to retrieve candidate entities and related contexts.
Our experimental results demonstrate that CGR^3 significantly improves performance on KG completion (KGC) and KG question answering (KGQA) tasks.
arXiv Detail & Related papers (2024-06-17T02:59:19Z)
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both agent and KG in incomplete KGQA (IKGQA). A toy sketch of this loop is shown after this entry.
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
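As summarized in the Generate-on-Graph entry above, the method alternates thinking, searching, and generating. Below is a toy sketch of that loop, assuming hypothetical `llm_call(prompt) -> str` and `kg_search(entity) -> list` interfaces; it is not the authors' implementation.

```python
# Toy sketch of a GoG-style Thinking-Searching-Generating loop (hypothetical,
# not the authors' code). Assumed interfaces:
#   llm_call(prompt: str) -> str        # the LLM acting as the agent
#   kg_search(entity: str) -> list      # triples about entity, [] if missing

def generate_on_graph(question, llm_call, kg_search, max_steps=5):
    evidence = []  # triples gathered so far
    for _ in range(max_steps):
        # Thinking: decide whether the evidence already answers the question.
        step = llm_call(
            f"Question: {question}\nEvidence: {evidence}\n"
            "Reply 'finish: <answer>' or 'search: <entity>'."
        )
        if step.startswith("finish:"):
            return step.removeprefix("finish:").strip()
        entity = step.removeprefix("search:").strip()
        # Searching: query the (possibly incomplete) KG.
        triples = kg_search(entity)
        if not triples:
            # Generating: the LLM doubles as a KG and produces new
            # factual triples from its internal knowledge.
            triples = [llm_call(
                f"State known facts about {entity} as (head, relation, tail) triples."
            )]
        evidence.extend(triples)
    # Step budget exhausted: answer from whatever was gathered.
    return llm_call(f"Question: {question}\nEvidence: {evidence}\nAnswer:")
```

The key design point is the fallback: when the incomplete KG returns nothing, the LLM itself is treated as a knowledge source and asked to generate candidate triples.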
- Knowledge Graphs are not Created Equal: Exploring the Properties and Structure of Real KGs [2.28438857884398]
We study 29 real knowledge graph datasets from diverse domains to analyze their properties and structural patterns.
We believe that the rich structural information contained in KGs can benefit the development of better KG models across fields.
arXiv Detail & Related papers (2023-11-10T22:18:09Z)
- Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering [16.434098552925427]
We study the KG-augmented language model approach for solving the knowledge graph question answering (KGQA) task.
We propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements. A toy verbalizer in this spirit is sketched after this entry.
arXiv Detail & Related papers (2023-09-20T10:42:08Z)
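The Retrieve-Rewrite-Answer entry above hinges on turning retrieved KG triples into text an LLM can read. The paper trains an answer-sensitive rewriter; the rule-based toy below only illustrates the shape of the KG-to-Text step (all names are hypothetical).

```python
# Toy KG-to-Text verbalizer (hypothetical; the paper trains an
# answer-sensitive rewriter rather than using fixed rules).

def verbalize_triples(triples):
    """Turn (head, relation, tail) triples into plain-text statements."""
    sentences = []
    for head, relation, tail in triples:
        rel_text = relation.replace("_", " ").strip()  # e.g. capital_of -> capital of
        sentences.append(f"{head} {rel_text} {tail}.")
    return " ".join(sentences)

# Usage: verbalize a retrieved subgraph, then prepend it to the QA prompt.
context = verbalize_triples([
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
])
# context == "Paris capital of France. France located in Europe."
```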
- Identify, Align, and Integrate: Matching Knowledge Graphs to Commonsense Reasoning Tasks [81.03233931066009]
It is critical to select a knowledge graph (KG) that is well-aligned with the given task's objective.
We show an approach to assess how well a candidate KG can correctly identify and accurately fill in gaps of reasoning for a task.
We show this KG-to-task match in 3 phases: knowledge-task identification, knowledge-task alignment, and knowledge-task integration.
arXiv Detail & Related papers (2021-04-20T18:23:45Z)
- Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation [42.407209719347286]
Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks.
We show that, through a reinforcement learning policy, one can produce deceptively perturbed KGs.
Our findings raise doubts about KG-augmented models' ability to reason about KG information and give sensible explanations.
arXiv Detail & Related papers (2020-10-24T11:04:45Z)
- Language Models are Open Knowledge Graphs [75.48081086368606]
Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training.
In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs.
We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora.
arXiv Detail & Related papers (2020-10-22T18:01:56Z)
- IterefinE: Iterative KG Refinement Embeddings using Symbolic Knowledge [10.689559910656474]
Knowledge Graphs (KGs) extracted from text sources are often noisy and lead to poor performance in downstream application tasks such as KG-based question answering.
Most successful techniques for KG refinement make use of either symbolic inference rules and reasoning, or supervised KG embeddings.
In this paper, we present a KG refinement framework called IterefinE which iteratively combines the two techniques.
arXiv Detail & Related papers (2020-06-03T14:05:54Z)