Performance Evaluation of Knowledge Graph Embedding Approaches under Non-adversarial Attacks
- URL: http://arxiv.org/abs/2407.06855v1
- Date: Tue, 9 Jul 2024 13:42:14 GMT
- Title: Performance Evaluation of Knowledge Graph Embedding Approaches under Non-adversarial Attacks
- Authors: Sourabh Kapoor, Arnab Sharma, Michael Röder, Caglar Demir, Axel-Cyrille Ngonga Ngomo
- Abstract summary: We evaluate the impact of non-adversarial attacks on the performance of 5 state-of-the-art Knowledge Graph Embedding (KGE) algorithms.
Label perturbation has a strong effect on KGE performance, followed by parameter perturbation with a moderate effect and graph perturbation with a low effect.
- Score: 1.6986898305640263
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge Graph Embedding (KGE) transforms a discrete Knowledge Graph (KG) into a continuous vector space, facilitating its use in various AI-driven applications like Semantic Search, Question Answering, or Recommenders. While KGE approaches are effective in these applications, most existing approaches assume that all information in the given KG is correct. This enables attackers to influence the output of these approaches, e.g., by perturbing the input. Consequently, the robustness of such KGE approaches has to be addressed. Recent work has focused on adversarial attacks. However, non-adversarial attacks on all attack surfaces of these approaches have not been thoroughly examined. We close this gap by evaluating the impact of non-adversarial attacks on the performance of 5 state-of-the-art KGE algorithms on 5 datasets with respect to attacks on 3 attack surfaces: graph, parameter, and label perturbation. Our evaluation results suggest that label perturbation has a strong effect on KGE performance, followed by parameter perturbation with a moderate effect and graph perturbation with a low effect.
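To make the three attack surfaces concrete, here is a minimal sketch; it is our own illustration, not the authors' implementation, and the function names, noise model, and perturbation ratios are assumptions. Graph perturbation corrupts one slot of randomly chosen triples, parameter perturbation adds Gaussian noise to trained embeddings, and label perturbation flips a fraction of binary triple labels.

```python
import random
import numpy as np

def perturb_graph(triples, entities, relations, ratio=0.1, seed=0):
    """Graph perturbation: corrupt one slot (head, relation, or tail)
    of a randomly chosen fraction of triples with a random replacement."""
    rng = random.Random(seed)
    triples = list(triples)
    for i in rng.sample(range(len(triples)), int(ratio * len(triples))):
        h, r, t = triples[i]
        slot = rng.choice(("head", "relation", "tail"))
        if slot == "head":
            triples[i] = (rng.choice(entities), r, t)
        elif slot == "relation":
            triples[i] = (h, rng.choice(relations), t)
        else:
            triples[i] = (h, r, rng.choice(entities))
    return triples

def perturb_parameters(embeddings, sigma=0.1, seed=0):
    """Parameter perturbation: add Gaussian noise to trained embeddings."""
    rng = np.random.default_rng(seed)
    return embeddings + rng.normal(0.0, sigma, size=embeddings.shape)

def perturb_labels(labels, ratio=0.1, seed=0):
    """Label perturbation: flip a random fraction of binary triple labels."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    idx = rng.choice(len(labels), size=int(ratio * len(labels)), replace=False)
    labels[idx] = 1 - labels[idx]
    return labels
```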
Related papers
- GAIM: Attacking Graph Neural Networks via Adversarial Influence Maximization [12.651079126612121]
We present an integrated adversarial attack method that operates on a node basis under a strict black-box setting.
In our approach, we unify the selection of the target node and the construction of feature perturbations into a single optimization problem.
We extend our method to accommodate label-oriented attacks, broadening its applicability.
arXiv Detail & Related papers (2024-08-20T15:41:20Z)
- Top K Enhanced Reinforcement Learning Attacks on Heterogeneous Graph Node Classification [1.4943280454145231]
Graph Neural Networks (GNNs) have attracted substantial interest due to their exceptional performance on graph-based data.
Their robustness, however, remains underexplored on heterogeneous graphs, particularly against adversarial attacks.
This paper proposes HeteroKRLAttack, a targeted evasion black-box attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-08-04T08:44:00Z)
- Untargeted Adversarial Attack on Knowledge Graph Embeddings [18.715565468700227]
Knowledge graph embedding (KGE) methods have achieved great success in handling various knowledge graph (KG) downstream tasks.
Some recent studies propose adversarial attacks to investigate the vulnerabilities of KGE methods, but these attacks are targeted, i.e., tailored to a specific KGE method.
In this work, we explore untargeted attacks with the aim of reducing the global performances of KGE methods over a set of unknown test triples.
arXiv Detail & Related papers (2024-05-08T18:08:11Z)
- Revisiting Edge Perturbation for Graph Neural Network in Graph Data Augmentation and Attack [58.440711902319855]
Edge perturbation is a method to modify graph structures.
Based on their effect on the performance of graph neural networks (GNNs), such methods fall into two veins: data augmentation and attack.
We propose a unified formulation and establish a clear boundary between the two categories of edge perturbation methods.
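As a toy illustration of the two edge operations such perturbations consist of (a minimal sketch under our own assumptions; the function name, parameters, and random strategy are not from the paper):

```python
import random

def perturb_edges(edges, num_nodes, n_remove=1, n_add=1, seed=0):
    """Remove n_remove existing edges and insert n_add new ones.
    Depending on how the modified structure shifts GNN performance,
    the same operation acts as augmentation or as an attack."""
    rng = random.Random(seed)
    edge_set = set(edges)
    # Deletion: drop randomly chosen existing edges.
    for e in rng.sample(sorted(edge_set), min(n_remove, len(edge_set))):
        edge_set.remove(e)
    # Addition: insert random edges that do not exist yet.
    while n_add > 0:
        u, v = rng.randrange(num_nodes), rng.randrange(num_nodes)
        if u != v and (u, v) not in edge_set:
            edge_set.add((u, v))
            n_add -= 1
    return sorted(edge_set)
```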
arXiv Detail & Related papers (2024-03-10T15:50:04Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z) - GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation [61.80017550099027]
Graph Neural Networks (GNNs) are increasingly prevalent in a variety of fields.
Growing concerns have emerged regarding the unauthorized utilization of personal data.
Recent studies have shown that imperceptible poisoning attacks are an effective method of protecting image data from such misuse.
This paper introduces GraphCloak to safeguard against the unauthorized usage of graph data.
arXiv Detail & Related papers (2023-10-11T00:50:55Z) - Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and an 11 times smaller GPU memory footprint.
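The core idea behind a differentiable attack can be sketched as follows. This is a generic gradient-ascent illustration under our own assumptions (sigmoid-relaxed edge flips, a model called as model(features, adj), and a top-k projection back to discrete flips), not DGA's actual algorithm:

```python
import torch

def differentiable_edge_attack(adj, features, labels, model,
                               steps=10, lr=0.1, budget=0.05):
    """Relax discrete edge flips into continuous weights and ascend
    the gradient of the training loss to find harmful perturbations."""
    # Start near zero flip probability (sigmoid(-5) is about 0.007).
    p = torch.full_like(adj, -5.0).requires_grad_()
    opt = torch.optim.Adam([p], lr=lr)
    for _ in range(steps):
        flip = torch.sigmoid(p)                 # soft flips in [0, 1]
        adj_pert = adj + (1 - 2 * adj) * flip   # 0 -> flip, 1 -> 1 - flip
        loss = torch.nn.functional.cross_entropy(
            model(features, adj_pert), labels)
        opt.zero_grad()
        (-loss).backward()                      # ascend: maximize the loss
        opt.step()
    # Project back to discrete flips, keeping only the strongest ones
    # within the edit budget.
    k = max(1, int(budget * adj.numel()))
    scores = torch.sigmoid(p.detach()).flatten()
    mask = torch.zeros_like(scores)
    mask[scores.topk(k).indices] = 1.0
    return adj + (1 - 2 * adj) * mask.view_as(adj)
```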
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - IDEA: Invariant Defense for Graph Adversarial Robustness [60.0126873387533]
We propose an Invariant causal DEfense method against adversarial Attacks (IDEA).
We derive node-based and structure-based invariance objectives from an information-theoretic perspective.
Experiments demonstrate that IDEA attains state-of-the-art defense performance under all five attacks on all five datasets.
arXiv Detail & Related papers (2023-05-25T07:16:00Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Adversarial Attacks on Knowledge Graph Embeddings via Instance
Attribution Methods [8.793721044482613]
We study data poisoning attacks against Knowledge Graph Embedding (KGE) models for link prediction.
These attacks craft adversarial additions or deletions at training time to cause model failure at test time.
We propose a method to replace one of the two entities in each influential triple to generate adversarial additions.
arXiv Detail & Related papers (2021-11-04T19:38:48Z)
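The entity-replacement step described in the entry above can be sketched as follows (hypothetical names; the random replacement here stands in for the paper's attribution-guided choice of entities, and the instance-attribution scores that select the influential triples are not reproduced):

```python
import random

def adversarial_additions(influential_triples, entities, seed=0):
    """For each influential training triple, replace either its head or
    its tail with another entity to craft an adversarial addition."""
    rng = random.Random(seed)
    additions = []
    for h, r, t in influential_triples:
        if rng.random() < 0.5:
            h = rng.choice([e for e in entities if e != h])
        else:
            t = rng.choice([e for e in entities if e != t])
        additions.append((h, r, t))
    return additions
```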