Learning to Deceive Knowledge Graph Augmented Models via Targeted
Perturbation
- URL: http://arxiv.org/abs/2010.12872v6
- Date: Mon, 3 May 2021 18:38:15 GMT
- Title: Learning to Deceive Knowledge Graph Augmented Models via Targeted
Perturbation
- Authors: Mrigank Raman, Aaron Chan, Siddhant Agarwal, Peifeng Wang, Hansen
Wang, Sungchul Kim, Ryan Rossi, Handong Zhao, Nedim Lipka, Xiang Ren
- Abstract summary: Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks.
We show that, through a reinforcement learning policy, one can produce deceptively perturbed KGs.
Our findings raise doubts about KG-augmented models' ability to reason about KG information and give sensible explanations.
- Score: 42.407209719347286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs (KGs) have helped neural models improve performance on
various knowledge-intensive tasks, like question answering and item
recommendation. By using attention over the KG, such KG-augmented models can
also "explain" which KG information was most relevant for making a given
prediction. In this paper, we question whether these models are really behaving
as we expect. We show that, through a reinforcement learning policy (or even
simple heuristics), one can produce deceptively perturbed KGs, which maintain
the downstream performance of the original KG while significantly deviating
from the original KG's semantics and structure. Our findings raise doubts about
KG-augmented models' ability to reason about KG information and give sensible
explanations.
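As a rough illustration of the "simple heuristics" mentioned in the abstract, the sketch below perturbs a KG by swapping relation labels on a fraction of edges while leaving the node and edge structure intact. All names here (perturb_relations, relation_vocab) are hypothetical illustrations, not from the paper's released code.

```python
import random

def perturb_relations(triples, relation_vocab, frac=0.5, seed=0):
    """Swap the relation label on roughly `frac` of the (head, relation, tail)
    triples, keeping heads and tails (and thus graph connectivity) unchanged.
    Hypothetical helper, not from the paper's code release."""
    rng = random.Random(seed)
    perturbed = []
    for head, rel, tail in triples:
        if rng.random() < frac:
            alternatives = [r for r in relation_vocab if r != rel]
            if alternatives:  # guard against a single-relation vocabulary
                rel = rng.choice(alternatives)
        perturbed.append((head, rel, tail))
    return perturbed
```

Evaluating a KG-augmented model on both the original and perturbed triples would show whether its downstream accuracy, and its attention-based "explanations", actually depend on the KG's semantics rather than on its structure alone.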
Related papers
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
arXiv Detail & Related papers (2024-10-24T04:01:40Z)
- Graph-constrained Reasoning: Faithful Reasoning on Knowledge Graphs with Large Language Models [83.28737898989694]
Large language models (LLMs) struggle with faithful reasoning due to knowledge gaps and hallucinations.
We introduce graph-constrained reasoning (GCR), a novel framework that bridges structured knowledge in KGs with unstructured reasoning in LLMs.
GCR achieves state-of-the-art performance and exhibits strong zero-shot generalizability to unseen KGs without additional training.
arXiv Detail & Related papers (2024-10-16T22:55:17Z)
- Context Graph [8.02985792541121]
We present a context graph reasoning paradigm, CGR$^3$, that leverages large language models (LLMs) to retrieve candidate entities and related contexts.
Our experimental results demonstrate that CGR$^3$ significantly improves performance on KG completion (KGC) and KG question answering (KGQA) tasks.
arXiv Detail & Related papers (2024-06-17T02:59:19Z)
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both agent and KG in incomplete KGQA (IKGQA).
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- Knowledge Graphs are not Created Equal: Exploring the Properties and Structure of Real KGs [2.28438857884398]
We study 29 real knowledge graph datasets from diverse domains to analyze their properties and structural patterns.
We believe that the rich structural information contained in KGs can benefit the development of better KG models across fields.
arXiv Detail & Related papers (2023-11-10T22:18:09Z)
- Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering [16.434098552925427]
We study the KG-augmented language model approach for solving the knowledge graph question answering (KGQA) task.
We propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements.
arXiv Detail & Related papers (2023-09-20T10:42:08Z)
- Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It incorporates high-order reasoning into a graph convolutional network, named HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
- SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning [29.148731802458983]
Augmenting language models with knowledge graphs (KGs) has achieved success on various commonsense reasoning tasks.
We propose SalKG, a framework for learning from KG explanations at both coarse (Is the KG salient?) and fine (Which parts of the KG are salient?) granularity.
We find that SalKG's training process can consistently improve model performance.
arXiv Detail & Related papers (2021-04-18T09:59:46Z)
- Language Models are Open Knowledge Graphs [75.48081086368606]
Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training.
In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs.
We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora.
arXiv Detail & Related papers (2020-10-22T18:01:56Z)
- IterefinE: Iterative KG Refinement Embeddings using Symbolic Knowledge [10.689559910656474]
Knowledge Graphs (KGs) extracted from text sources are often noisy and lead to poor performance in downstream application tasks such as KG-based question answering.
Most successful techniques for KG refinement make use of inference rules and reasoning over ontologies.
In this paper, we present a KG refinement framework called IterefinE which iteratively combines the two techniques.
arXiv Detail & Related papers (2020-06-03T14:05:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.