Adversarial Robustness of Representation Learning for Knowledge Graphs
- URL: http://arxiv.org/abs/2210.00122v1
- Date: Fri, 30 Sep 2022 22:41:22 GMT
- Title: Adversarial Robustness of Representation Learning for Knowledge Graphs
- Authors: Peru Bhardwaj
- Abstract summary: This thesis argues that state-of-the-art Knowledge Graph Embeddings (KGE) models are vulnerable to data poisoning attacks.
Two novel data poisoning attacks are proposed that craft input deletions or additions at training time to subvert the learned model's performance at inference time.
The evaluation shows that the simpler attacks are competitive with or outperform the computationally expensive ones.
- Score: 7.5765554531658665
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Knowledge graphs represent factual knowledge about the world as relationships
between concepts and are critical for intelligent decision making in enterprise
applications. New knowledge is inferred from the existing facts in the
knowledge graphs by encoding the concepts and relations into low-dimensional
feature vector representations. The most effective representations for this
task, called Knowledge Graph Embeddings (KGE), are learned through neural
network architectures. Due to their impressive predictive performance, they are
increasingly used in high-impact domains like healthcare, finance and
education. However, are these black-box KGE models adversarially robust enough
for use in high-stakes domains? This thesis argues that state-of-the-art KGE
models are vulnerable to data poisoning attacks, that is, their predictive
performance can be degraded by systematically crafted perturbations to the
training knowledge graph. To support this argument, two novel data poisoning
attacks are proposed that craft input deletions or additions at training time
to subvert the learned model's performance at inference time. These adversarial
attacks target the task of predicting the missing facts in knowledge graphs
using KGE models, and the evaluation shows that the simpler attacks are
competitive with or outperform the computationally expensive ones. The thesis
contributions not only highlight the security vulnerabilities of KGE models and
provide an opportunity to fix them, but also help to understand their black-box
predictive behaviour.
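To make the setting concrete, the sketch below pairs a DistMult-style KGE scoring function with a naive poisoning step that either deletes the training triple judged most influential for a target fact or adds a corrupted copy of it. The entity and relation names, the toy embeddings, and the similarity-based influence proxy are illustrative assumptions only, not the attacks developed in the thesis.

```python
# Minimal sketch (illustrative only, not the thesis's actual attack): a DistMult-style
# KGE scorer plus a naive poisoning step that deletes or corrupts the training triple
# judged most "influential" for a target fact. The names, toy embeddings, and the
# similarity-based influence proxy are hypothetical simplifications.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy embeddings; in a real KGE model these are learned from the training graph.
entities = {e: rng.normal(size=dim) for e in ["alice", "bob", "acme", "globex"]}
relations = {r: rng.normal(size=dim) for r in ["works_for", "colleague_of"]}

def score(h: str, r: str, t: str) -> float:
    """DistMult score: sum_i e_h[i] * w_r[i] * e_t[i]; higher means more plausible."""
    return float(np.sum(entities[h] * relations[r] * entities[t]))

def contribution(h: str, r: str, t: str) -> np.ndarray:
    """Per-dimension contribution of a triple to its DistMult score."""
    return entities[h] * relations[r] * entities[t]

# Fact whose prediction the attacker wants to degrade at inference time.
target = ("alice", "works_for", "acme")

# Training triples the attacker is allowed to perturb.
training = [("alice", "colleague_of", "bob"), ("bob", "works_for", "acme")]

def influence_proxy(candidate) -> float:
    """Hypothetical influence proxy: cosine similarity of per-dimension contributions
    (real attacks estimate the effect of retraining after the perturbation)."""
    c, t = contribution(*candidate), contribution(*target)
    return float(c @ t / (np.linalg.norm(c) * np.linalg.norm(t) + 1e-12))

most_influential = max(training, key=influence_proxy)

# Deletion attack: remove the most influential supporting triple.
poisoned_by_deletion = [t for t in training if t != most_influential]

# Addition attack: corrupt one entity of the influential triple with a random decoy.
h, r, t = most_influential
decoy = str(rng.choice([e for e in entities if e not in (h, t)]))
poisoned_by_addition = training + [(h, r, decoy)]

print("target score (clean embeddings):", round(score(*target), 3))
print("delete:", most_influential)
print("add:   ", (h, r, decoy))
```

Roughly speaking, the "simpler attacks" referred to above correspond to cheap heuristics of this kind, whereas the computationally expensive ones approximate how the learned embeddings would change if the model were retrained after each candidate perturbation.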
Related papers
- Resilience in Knowledge Graph Embeddings [1.90894751866253]
We give a unified definition of resilience, encompassing several factors such as generalisation, performance consistency, distribution adaptation, and robustness.
Our survey results show that most of the existing works focus on a specific aspect of resilience, namely robustness.
arXiv Detail & Related papers (2024-10-28T16:04:22Z) - KGV: Integrating Large Language Models with Knowledge Graphs for Cyber Threat Intelligence Credibility Assessment [38.312774244521]
We propose a knowledge graph-based verification framework for Cyber Threat Intelligence (CTI) quality assessment.
Our approach introduces Large Language Models (LLMs) to automatically extract OSCTI key claims to be verified.
To fill the gap in the research field, we created and made public the first dataset for threat intelligence assessment from heterogeneous sources.
arXiv Detail & Related papers (2024-08-15T11:32:46Z) - Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks [50.87615167799367]
We certify Graph Neural Networks (GNNs) against poisoning attacks, including backdoors, targeting the node features of a given graph.
Our framework provides fundamental insights into the role of graph structure and its connectivity in the worst-case behavior of convolution-based and PageRank-based GNNs.
arXiv Detail & Related papers (2024-07-15T16:12:51Z) - Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
However, GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIAs).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z) - Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z) - CausE: Towards Causal Knowledge Graph Embedding [13.016173217017597]
Knowledge graph embedding (KGE) focuses on representing the entities and relations of a knowledge graph (KG) in continuous vector spaces.
We develop a new paradigm of KGE in the context of causality and embedding disentanglement.
We propose a Causality-enhanced knowledge graph Embedding (CausE) framework.
arXiv Detail & Related papers (2023-07-21T14:25:39Z) - Mitigating Relational Bias on Knowledge Graphs [51.346018842327865]
We propose Fair-KGNN, a framework that simultaneously alleviates multi-hop bias and preserves entity-to-relation proximity information in knowledge graphs.
We develop two instances of Fair-KGNN that incorporate two state-of-the-art KGNN models, RGCN and CompGCN, to mitigate gender-occupation and nationality-salary bias.
arXiv Detail & Related papers (2022-11-26T05:55:34Z) - Poisoning Knowledge Graph Embeddings via Relation Inference Patterns [8.793721044482613]
We study the problem of generating data poisoning attacks against Knowledge Graph Embedding (KGE) models for the task of link prediction in knowledge graphs.
To poison KGE models, we propose to exploit their inductive abilities, which are captured through relation patterns such as symmetry, inversion, and composition in the knowledge graph (a toy illustration of this idea follows the related papers list).
arXiv Detail & Related papers (2021-11-11T17:57:37Z) - Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods [8.793721044482613]
We study data poisoning attacks against Knowledge Graph Embeddings (KGE) models for link prediction.
These attacks craft adversarial additions or deletions at training time to cause model failure at test time.
We propose a method to replace one of the two entities in each influential triple to generate adversarial additions.
arXiv Detail & Related papers (2021-11-04T19:38:48Z) - RelWalk: A Latent Variable Model Approach to Knowledge Graph Embedding [50.010601631982425]
This paper extends the random walk model (Arora et al., 2016a) of word embeddings to Knowledge Graph Embeddings (KGEs).
We derive a scoring function that evaluates the strength of a relation R between two entities h (head) and t (tail).
We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph.
arXiv Detail & Related papers (2021-01-25T13:31:29Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
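As flagged in the relation-inference-patterns entry above, the following toy sketch shows how a learned relation pattern can be abused to craft an adversarial addition. It assumes a symmetric relation under a DistMult-style scorer; the entity names and the decoy-selection rule are hypothetical and only illustrate the general idea, not the attack proposed in that paper.

```python
# Toy illustration (hypothetical names and selection rule): abusing a symmetry
# pattern to craft an adversarial addition. DistMult scores are inherently
# symmetric, so adding (decoy, r, s) encourages the model to also rank
# (s, r, decoy) highly, letting the decoy compete with the true target (s, r, o).
import numpy as np

rng = np.random.default_rng(1)
dim = 8
entities = {n: rng.normal(size=dim) for n in ["s", "o", "decoy_1", "decoy_2"]}
relation_r = rng.normal(size=dim)  # a relation the model treats as symmetric

def score(h: str, t: str) -> float:
    """DistMult score for (h, r, t); equals the score for (t, r, h) by construction."""
    return float(np.sum(entities[h] * relation_r * entities[t]))

target = ("s", "o")              # true fact the attacker wants to demote in ranking
candidates = ["decoy_1", "decoy_2"]

# Choose the decoy whose adversarial triple (decoy, r, s) the current model
# already scores highest, so the symmetry pattern propagates it most strongly.
best_decoy = max(candidates, key=lambda d: score(d, "s"))

print("adversarial addition:", (best_decoy, "r", "s"))
print("decoy vs. target score:", round(score("s", best_decoy), 3),
      "vs.", round(score(*target), 3))
```

Because DistMult scores (h, r, t) and (t, r, h) identically, any triple the model is induced to believe in one direction immediately boosts the reverse direction; pattern-based poisoning exploits exactly this kind of inductive behaviour.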