SalKG: Learning From Knowledge Graph Explanations for Commonsense
Reasoning
- URL: http://arxiv.org/abs/2104.08793v1
- Date: Sun, 18 Apr 2021 09:59:46 GMT
- Title: SalKG: Learning From Knowledge Graph Explanations for Commonsense
Reasoning
- Authors: Aaron Chan, Soumya Sanyal, Boyuan Long, Jiashu Xu, Tanishq Gupta,
Xiang Ren
- Abstract summary: Augmenting language models with knowledge graphs (KGs) has achieved success on various commonsense reasoning tasks.
We propose SalKG, a framework for learning from KG explanations of both coarse (Is the KG salient?) and fine (Which parts of the KG are salient?) granularity.
We find that SalKG's training process can consistently improve model performance.
- Score: 29.148731802458983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Augmenting pre-trained language models with knowledge graphs (KGs) has
achieved success on various commonsense reasoning tasks. Although some works
have attempted to explain the behavior of such KG-augmented models by
indicating which KG inputs are salient (i.e., important for the model's
prediction), it is not always clear how these explanations should be used to
make the model better. In this paper, we explore whether KG explanations can be
used as supervision for teaching these KG-augmented models how to filter out
unhelpful KG information. To this end, we propose SalKG, a simple framework for
learning from KG explanations of both coarse (Is the KG salient?) and fine
(Which parts of the KG are salient?) granularity. Given the explanations
generated from a task's training set, SalKG trains KG-augmented models to solve
the task by focusing on KG information highlighted by the explanations as
salient. Across two popular commonsense QA benchmarks and three KG-augmented
models, we find that SalKG's training process can consistently improve model
performance.
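As a rough illustration of the fine-grained setting described above, the following sketch shows one plausible way such saliency supervision could be wired into a training objective: the usual task loss is augmented with a KL term that pushes the model's attention over KG units toward the saliency distribution derived from explanations. This is a minimal, hypothetical reconstruction, not the paper's actual implementation; the function names, the KL formulation, and the weighting term `lam` are all assumptions.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions given as lists of floats."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def salkg_fine_loss(task_loss, kg_attention, saliency_targets, lam=1.0):
    """Hypothetical fine-grained objective: task loss plus a KL penalty that
    encourages the model's attention over KG units to match the saliency
    distribution extracted from explanations."""
    return task_loss + lam * kl_divergence(saliency_targets, kg_attention)

# Toy example: explanations mark the first two of four KG nodes as salient,
# while the model currently attends to all four nodes uniformly.
saliency = [0.5, 0.5, 0.0, 0.0]       # target distribution from explanations
attention = [0.25, 0.25, 0.25, 0.25]  # model's current attention over KG nodes
loss = salkg_fine_loss(task_loss=0.7,
                       kg_attention=attention,
                       saliency_targets=saliency)
```

The coarse-grained case (Is the KG salient at all?) would reduce to a binary target on whether to use the KG-augmented or KG-free prediction, rather than a distribution over KG units.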
Related papers
- Context Graph [8.02985792541121]
We present a context graph reasoning (CGR$^3$) paradigm that leverages large language models (LLMs) to retrieve candidate entities and related contexts.
Our experimental results demonstrate that CGR$^3$ significantly improves performance on KG completion (KGC) and KG question answering (KGQA) tasks.
arXiv Detail & Related papers (2024-06-17T02:59:19Z) - Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [90.30473970040362]
We propose a training-free method called Generate-on-Graph (GoG) that can generate new factual triples while exploring on Knowledge Graphs (KGs)
Specifically, we propose a selecting-generating-answering framework, which not only treats the LLM as an agent to explore KGs, but also as a KG to generate new facts based on the explored subgraph.
arXiv Detail & Related papers (2024-04-23T04:47:22Z) - Knowledge Graphs are not Created Equal: Exploring the Properties and
Structure of Real KGs [2.28438857884398]
We study 29 real knowledge graph datasets from diverse domains to analyze their properties and structural patterns.
We believe that the rich structural information contained in KGs can benefit the development of better KG models across fields.
arXiv Detail & Related papers (2023-11-10T22:18:09Z) - Reasoning on Graphs: Faithful and Interpretable Large Language Model
Reasoning [104.92384929827776]
Large language models (LLMs) have demonstrated impressive reasoning abilities in complex tasks.
However, they lack up-to-date knowledge and can hallucinate during reasoning.
Knowledge graphs (KGs) offer a reliable source of knowledge for reasoning.
arXiv Detail & Related papers (2023-10-02T10:14:43Z) - On the Sweet Spot of Contrastive Views for Knowledge-enhanced
Recommendation [49.18304766331156]
We propose a new contrastive learning framework for KG-enhanced recommendation.
We construct two separate contrastive views for KG and IG, and maximize their mutual information.
Extensive experimental results on three real-world datasets demonstrate the effectiveness and efficiency of our method.
arXiv Detail & Related papers (2023-09-23T14:05:55Z) - Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for
Knowledge Graph Question Answering [16.434098552925427]
We study the KG-augmented language model approach for solving the knowledge graph question answering (KGQA) task.
We propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements.
arXiv Detail & Related papers (2023-09-20T10:42:08Z) - Identify, Align, and Integrate: Matching Knowledge Graphs to Commonsense
Reasoning Tasks [81.03233931066009]
It is critical to select a knowledge graph (KG) that is well-aligned with the given task's objective.
We show an approach to assess how well a candidate KG can correctly identify and accurately fill in gaps of reasoning for a task.
We show this KG-to-task match in 3 phases: knowledge-task identification, knowledge-task alignment, and knowledge-task integration.
arXiv Detail & Related papers (2021-04-20T18:23:45Z) - Learning to Deceive Knowledge Graph Augmented Models via Targeted
Perturbation [42.407209719347286]
Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks.
We show that, through a reinforcement learning policy, one can produce deceptively perturbed KGs.
Our findings raise doubts about KG-augmented models' ability to reason about KG information and give sensible explanations.
arXiv Detail & Related papers (2020-10-24T11:04:45Z) - Language Models are Open Knowledge Graphs [75.48081086368606]
Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training.
In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs.
We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora.
arXiv Detail & Related papers (2020-10-22T18:01:56Z) - IterefinE: Iterative KG Refinement Embeddings using Symbolic Knowledge [10.689559910656474]
Knowledge Graphs (KGs) extracted from text sources are often noisy and lead to poor performance in downstream application tasks such as KG-based question answering.
Most successful techniques for KG refinement make use of inference rules and supervised reasoning.
In this paper, we present a KG refinement framework called IterefinE which iteratively combines the two techniques.
arXiv Detail & Related papers (2020-06-03T14:05:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.