Learning to Deceive Knowledge Graph Augmented Models via Targeted
Perturbation
- URL: http://arxiv.org/abs/2010.12872v6
- Date: Mon, 3 May 2021 18:38:15 GMT
- Authors: Mrigank Raman, Aaron Chan, Siddhant Agarwal, Peifeng Wang, Hansen
Wang, Sungchul Kim, Ryan Rossi, Handong Zhao, Nedim Lipka, Xiang Ren
- Abstract summary: Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks.
We show that, through a reinforcement learning policy, one can produce deceptively perturbed KGs.
Our findings raise doubts about KG-augmented models' ability to reason about KG information and give sensible explanations.
- Score: 42.407209719347286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs (KGs) have helped neural models improve performance on
various knowledge-intensive tasks, like question answering and item
recommendation. By using attention over the KG, such KG-augmented models can
also "explain" which KG information was most relevant for making a given
prediction. In this paper, we question whether these models are really behaving
as we expect. We show that, through a reinforcement learning policy (or even
simple heuristics), one can produce deceptively perturbed KGs, which maintain
the downstream performance of the original KG while significantly deviating
from the original KG's semantics and structure. Our findings raise doubts about
KG-augmented models' ability to reason about KG information and give sensible
explanations.
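The "simple heuristics" mentioned in the abstract could look something like the following toy sketch (hypothetical, not the authors' code): a relation-swap perturbation that keeps the edge set's size and the entity connectivity intact while scrambling the KG's semantics.

```python
import random

def perturb_kg(triples, n_swaps, seed=0):
    """Heuristically perturb a KG by swapping relation labels between
    randomly chosen pairs of edges. The number of edges and each
    entity's degree are preserved, but individual facts change meaning."""
    rng = random.Random(seed)
    triples = [list(t) for t in triples]
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(triples)), 2)
        triples[i][1], triples[j][1] = triples[j][1], triples[i][1]
    return [tuple(t) for t in triples]

# Tiny example KG as (head, relation, tail) triples.
kg = [
    ("cat", "is_a", "animal"),
    ("paris", "capital_of", "france"),
    ("water", "made_of", "h2o"),
    ("oak", "is_a", "tree"),
]
perturbed = perturb_kg(kg, n_swaps=2)
```

A model that genuinely reasons over KG semantics should degrade on such a perturbed graph; the paper's finding is that KG-augmented models often do not.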
Related papers
- Context Graph [8.02985792541121]
We present a context graph reasoning (CGR$^3$) paradigm that leverages large language models (LLMs) to retrieve candidate entities and related contexts.
Our experimental results demonstrate that CGR$^3$ significantly improves performance on KG completion (KGC) and KG question answering (KGQA) tasks.
arXiv Detail & Related papers (2024-06-17T02:59:19Z)
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [90.30473970040362]
We propose a training-free method called Generate-on-Graph (GoG) that can generate new factual triples while exploring Knowledge Graphs (KGs).
Specifically, we propose a selecting-generating-answering framework, which not only treats the LLM as an agent that explores KGs, but also treats it as a KG that generates new facts based on the explored subgraph.
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- Knowledge Graphs are not Created Equal: Exploring the Properties and Structure of Real KGs [2.28438857884398]
We study 29 real knowledge graph datasets from diverse domains to analyze their properties and structural patterns.
We believe that the rich structural information contained in KGs can benefit the development of better KG models across fields.
arXiv Detail & Related papers (2023-11-10T22:18:09Z)
- Retrieve-Rewrite-Answer: A KG-to-Text Enhanced LLMs Framework for Knowledge Graph Question Answering [16.434098552925427]
We study the KG-augmented language model approach for solving the knowledge graph question answering (KGQA) task.
We propose an answer-sensitive KG-to-Text approach that can transform KG knowledge into well-textualized statements.
arXiv Detail & Related papers (2023-09-20T10:42:08Z)
- Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes HoGRN, a novel explainable model for sparse Knowledge Graphs (KGs) that incorporates high-order reasoning into a graph convolutional network.
HoGRN not only improves generalization to mitigate the information-insufficiency issue but also provides interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
- MEKER: Memory Efficient Knowledge Embedding Representation for Link Prediction and Question Answering [65.62309538202771]
Knowledge Graphs (KGs) are symbolically structured storages of facts.
A KG embedding is a concise representation of this data used in NLP tasks that require implicit knowledge about the real world.
We propose a memory-efficient KG embedding model, which yields SOTA-comparable performance on link prediction tasks and KG-based Question Answering.
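The KG-embedding idea above can be illustrated with a minimal TransE-style scoring sketch (purely illustrative; MEKER itself is a different, memory-efficient model, and the entities and dimensions below are made up):

```python
import numpy as np

# Toy embedding tables for a three-entity, one-relation KG.
rng = np.random.default_rng(0)
entities = {"cat": 0, "animal": 1, "tree": 2}
relations = {"is_a": 0}
dim = 8
E = rng.normal(size=(len(entities), dim))   # entity embeddings
R = rng.normal(size=(len(relations), dim))  # relation embeddings

def score(h, r, t):
    """TransE-style plausibility: h + r should be close to t,
    so a smaller distance (score nearer zero) means more plausible."""
    return -np.linalg.norm(E[entities[h]] + R[relations[r]] - E[entities[t]])

s = score("cat", "is_a", "animal")
```

In link prediction, such a scorer ranks candidate tails for a given (head, relation) query; training pushes true triples toward higher scores than corrupted ones.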
arXiv Detail & Related papers (2022-04-22T10:47:03Z)
- SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning [29.148731802458983]
Augmenting language models with knowledge graphs (KGs) has achieved success on various commonsense reasoning tasks.
We propose SalKG, a framework for learning from KG explanations of both coarse (Is the KG salient?) and fine (Which parts of the KG are salient?) granularity.
We find that SalKG's training process consistently improves model performance.
arXiv Detail & Related papers (2021-04-18T09:59:46Z)
- Language Models are Open Knowledge Graphs [75.48081086368606]
Recent deep language models automatically acquire knowledge from large-scale corpora via pre-training.
In this paper, we propose an unsupervised method to cast the knowledge contained within language models into KGs.
We show that KGs are constructed with a single forward pass of the pre-trained language models (without fine-tuning) over the corpora.
arXiv Detail & Related papers (2020-10-22T18:01:56Z)
- IterefinE: Iterative KG Refinement Embeddings using Symbolic Knowledge [10.689559910656474]
Knowledge Graphs (KGs) extracted from text sources are often noisy and lead to poor performance in downstream application tasks such as KG-based question answering.
Most successful techniques for KG refinement make use of either inference rules with ontological reasoning or supervised learning.
In this paper, we present a KG refinement framework called IterefinE which iteratively combines the two techniques.
arXiv Detail & Related papers (2020-06-03T14:05:54Z)
- Toward Subgraph-Guided Knowledge Graph Question Generation with Graph Neural Networks [53.58077686470096]
Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers.
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
arXiv Detail & Related papers (2020-04-13T15:43:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.