Untargeted Adversarial Attack on Knowledge Graph Embeddings
- URL: http://arxiv.org/abs/2405.10970v1
- Date: Wed, 8 May 2024 18:08:11 GMT
- Title: Untargeted Adversarial Attack on Knowledge Graph Embeddings
- Authors: Tianzhe Zhao, Jiaoyan Chen, Yanchi Ru, Qika Lin, Yuxia Geng, Jun Liu
- Abstract summary: Knowledge graph embedding (KGE) methods have achieved great success in handling various knowledge graph (KG) downstream tasks.
Some recent studies propose adversarial attacks to investigate the vulnerabilities of KGE methods, but their attackers are target-oriented: the KGE method and the target triples to predict are given in advance.
In this work, we explore untargeted attacks with the aim of reducing the global performance of KGE methods over a set of unknown test triples.
- Score: 18.715565468700227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph embedding (KGE) methods have achieved great success in handling various knowledge graph (KG) downstream tasks. However, KGE methods may learn biased representations on the low-quality KGs that are prevalent in the real world. Some recent studies propose adversarial attacks to investigate the vulnerabilities of KGE methods, but their attackers are target-oriented: the KGE method and the target triples to predict are given in advance, which lacks practicability. In this work, we explore untargeted attacks with the aim of reducing the global performance of KGE methods over a set of unknown test triples and conducting systematic analyses of KGE robustness. Since logic rules can effectively summarize the global structure of a KG, we develop rule-based attack strategies to improve attack efficiency. In particular, we consider adversarial deletion, which learns rules and applies them to score triple importance and delete important triples, and adversarial addition, which corrupts the learned rules and applies them to generate negative triples as perturbations. Extensive experiments on two datasets over three representative classes of KGE methods demonstrate the effectiveness of our untargeted attacks in degrading link prediction results. We also find that different KGE methods exhibit different robustness to untargeted attacks. For example, the robustness of methods engaged with graph neural networks and logic rules depends on the density of the graph, while rule-based methods like NCRL are easily misled by adversarial addition attacks into capturing negative rules.
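As a rough illustration of the two rule-based strategies described in the abstract, the following Python sketch scores triples by the confidence of the rules their relation participates in, deletes the top-scored ones, and grounds corrupted rules as negative additions. The rule format, the importance heuristic, and all function names are assumptions made for exposition, not the authors' implementation (which learns rules with a rule miner and uses its own scoring function).

```python
import random
from collections import defaultdict

# Triples are (head, relation, tail); rules are (body_relations, head_relation, confidence).

def score_triples_by_rules(triples, rules):
    # Heuristic: a triple matters more when its relation participates in
    # many high-confidence rules (illustrative, not the paper's exact scorer).
    weight = defaultdict(float)
    for body, head_rel, conf in rules:
        weight[head_rel] += conf
        for rel in body:
            weight[rel] += conf
    return {tri: weight[tri[1]] for tri in triples}

def adversarial_deletion(triples, rules, budget):
    # Delete the `budget` triples the mined rules deem most important.
    scores = score_triples_by_rules(triples, rules)
    doomed = set(sorted(triples, key=scores.get, reverse=True)[:budget])
    return [tri for tri in triples if tri not in doomed]

def adversarial_addition(triples, rules, entities, budget, seed=0):
    # Corrupt the head relation of high-confidence rules, then ground the
    # corrupted rules with random entity pairs to obtain negative triples.
    rng = random.Random(seed)
    relations = sorted({r for _, r, _ in triples})
    existing, negatives = set(triples), []
    for _, head_rel, _ in sorted(rules, key=lambda rule: -rule[2]):
        if len(negatives) >= budget:
            break
        bad_rel = rng.choice([r for r in relations if r != head_rel])
        h, t = rng.sample(sorted(entities), 2)
        if (h, bad_rel, t) not in existing:
            negatives.append((h, bad_rel, t))
    return triples + negatives
```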
Related papers
- Top K Enhanced Reinforcement Learning Attacks on Heterogeneous Graph Node Classification [1.4943280454145231]
Graph Neural Networks (GNNs) have attracted substantial interest due to their exceptional performance on graph-based data.
Their robustness, especially on heterogeneous graphs, remains underexplored, particularly against adversarial attacks.
This paper proposes HeteroKRLAttack, a targeted evasion black-box attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-08-04T08:44:00Z)
- Performance Evaluation of Knowledge Graph Embedding Approaches under Non-adversarial Attacks [1.6986898305640263]
We evaluate the impact of non-adversarial attacks on the performance of 5 state-of-the-art Knowledge Graph Embedding (KGE) algorithms.
Label perturbation has a strong effect on KGE performance, followed by parameter perturbation with a moderate effect and graph perturbation with a low effect; a sketch of the three perturbation types is given below.
arXiv Detail & Related papers (2024-07-09T13:42:14Z)
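For intuition, a minimal sketch of what the three perturbation types could look like; the paper's exact procedures and magnitudes may differ, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def label_perturbation(triples, relations, rate):
    # Relabel a random fraction of training triples with a wrong relation.
    out = list(triples)
    for i in rng.choice(len(out), size=int(rate * len(out)), replace=False):
        h, r, t = out[i]
        out[i] = (h, rng.choice([x for x in relations if x != r]), t)
    return out

def parameter_perturbation(embeddings, sigma):
    # Add Gaussian noise to the trained embedding matrix.
    return embeddings + rng.normal(0.0, sigma, size=embeddings.shape)

def graph_perturbation(triples, rate):
    # Drop a random fraction of edges from the training graph.
    keep = rng.random(len(triples)) >= rate
    return [tri for tri, k in zip(triples, keep) if k]
```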
- Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIA).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- On the Adversarial Robustness of Graph Contrastive Learning Methods [9.675856264585278]
We introduce a comprehensive robustness evaluation protocol tailored to assess the robustness of graph contrastive learning (GCL) models.
We subject these models to adaptive adversarial attacks targeting the graph structure, specifically in the evasion scenario.
With our work, we aim to offer insights into the robustness of GCL methods and hope to open avenues for potential future research directions.
arXiv Detail & Related papers (2023-11-29T17:59:18Z)
- Adversarial Robustness of Representation Learning for Knowledge Graphs [7.5765554531658665]
This thesis argues that state-of-the-art Knowledge Graph Embeddings (KGE) models are vulnerable to data poisoning attacks.
Two novel data poisoning attacks are proposed that craft input deletions or additions at training time to subvert the learned model's performance at inference time.
The evaluation shows that the simpler attacks are competitive with or outperform the computationally expensive ones.
arXiv Detail & Related papers (2022-09-30T22:41:22Z)
- On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z)
- Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods [8.793721044482613]
We study data poisoning attacks against Knowledge Graph Embeddings (KGE) models for link prediction.
These attacks craft adversarial additions or deletions at training time to cause model failure at test time.
We propose a method to generate adversarial additions by replacing one of the two entities in each influential triple, as sketched below.
arXiv Detail & Related papers (2021-11-04T19:38:48Z)
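A toy sketch of that replacement step, with a random replacement policy standing in for the paper's attribution-guided choice; the function name and the retry loop are assumptions for illustration.

```python
import random

def corrupt_influential_triple(triple, entities, training_set, rng=random):
    # Swap the head or the tail of an influential triple for another entity
    # to craft an adversarial addition. The random replacement policy is a
    # placeholder; the paper selects replacements via instance attribution.
    h, r, t = triple
    side = rng.choice(["head", "tail"])
    candidates = [e for e in sorted(entities) if e not in (h, t)]
    for e in rng.sample(candidates, len(candidates)):  # shuffled candidates
        corrupted = (e, r, t) if side == "head" else (h, r, e)
        if corrupted not in training_set:  # keep only unseen triples
            return corrupted
    return None  # every candidate already exists in the training set
```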
- RelWalk A Latent Variable Model Approach to Knowledge Graph Embedding [50.010601631982425]
This paper extends the random walk model of word embeddings (Arora et al., 2016a) to Knowledge Graph Embeddings (KGEs).
We derive a scoring function that evaluates the strength of a relation R between two entities h (head) and t (tail); a toy sketch of such a score is given below.
We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph.
arXiv Detail & Related papers (2021-01-25T13:31:29Z)
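To fix ideas, a toy score of the kind described, measuring the strength of relation R between head h and tail t via two relation-specific matrices; this is only a placeholder shape, not the scoring function the paper actually derives.

```python
import numpy as np

def relation_score(h_vec, t_vec, R1, R2):
    # Toy relational score: project head and tail through relation-specific
    # matrices and measure the combined norm. This mirrors the shape of a
    # latent-variable KGE score but is not RelWalk's derived function.
    return float(np.linalg.norm(R1 @ h_vec + R2 @ t_vec) ** 2)

# Toy usage with random 4-dimensional embeddings.
rng = np.random.default_rng(0)
h, t = rng.normal(size=4), rng.normal(size=4)
R1, R2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
print(relation_score(h, t, R1, R2))
```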
- Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs [87.5882042724041]
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications.
We study the vulnerability of LPDG methods and propose the first practical black-box evasion attack.
arXiv Detail & Related papers (2020-09-01T01:04:49Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.