On the Security Risks of Knowledge Graph Reasoning
- URL: http://arxiv.org/abs/2305.02383v2
- Date: Thu, 22 Jun 2023 06:17:30 GMT
- Title: On the Security Risks of Knowledge Graph Reasoning
- Authors: Zhaohan Xi and Tianyu Du and Changjiang Li and Ren Pang and Shouling
Ji and Xiapu Luo and Xusheng Xiao and Fenglong Ma and Ting Wang
- Abstract summary: We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
- Score: 71.64027889145261
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge graph reasoning (KGR) -- answering complex logical queries over
large knowledge graphs -- represents an important artificial intelligence task,
entailing a range of applications (e.g., cyber threat hunting). However,
despite its surging popularity, the potential security risks of KGR are largely
unexplored, which is concerning, given the increasing use of such capability in
security-critical domains.
This work represents a solid initial step towards bridging this striking gap.
We systematize the security threats to KGR according to the adversary's
objectives, knowledge, and attack vectors. Further, we present ROAR, a new
class of attacks that instantiate a variety of such threats. Through empirical
evaluation in representative use cases (e.g., medical decision support, cyber
threat hunting, and commonsense reasoning), we demonstrate that ROAR is highly
effective in misleading KGR into suggesting pre-defined answers for target queries, yet
with negligible impact on non-target ones. Finally, we explore potential
countermeasures against ROAR, including filtering of potentially poisoning
knowledge and training with adversarially augmented queries, which leads to
several promising research directions.
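To make the poisoning threat concrete, the following is a minimal toy sketch, not the paper's ROAR attack (which targets reasoning over complex multi-hop logical queries): a knowledge graph is stored as (head, relation, tail) triples, and injecting a single adversary-crafted triple changes the answer set of a target query while leaving non-target queries untouched. All entity, relation, and answer names below are hypothetical.

```python
# Toy illustration of knowledge poisoning against a triple-store knowledge graph.
# This is NOT the ROAR attack from the paper; names and triples are hypothetical.
from collections import defaultdict


class TripleStoreKG:
    """A knowledge graph stored as (head, relation, tail) triples."""

    def __init__(self, triples):
        self.index = defaultdict(set)
        for h, r, t in triples:
            self.index[(h, r)].add(t)

    def add(self, h, r, t):
        """Inject a new triple -- the poisoning vector in this sketch."""
        self.index[(h, r)].add(t)

    def answer(self, head, relation):
        """Answer a one-hop query (head, relation, ?tail)."""
        return self.index[(head, relation)]


# Hypothetical medical-decision-support fragment.
kg = TripleStoreKG([
    ("flu",      "treated_by", "oseltamivir"),
    ("migraine", "treated_by", "sumatriptan"),
])

target_query = ("flu", "treated_by")
non_target_query = ("migraine", "treated_by")

print(kg.answer(*target_query))       # {'oseltamivir'}

# The adversary injects one crafted triple so the target query also returns a
# pre-defined, adversary-chosen answer ...
kg.add("flu", "treated_by", "adversary_chosen_drug")
print(kg.answer(*target_query))       # {'oseltamivir', 'adversary_chosen_drug'}

# ... while non-target queries are unaffected, mirroring the "negligible impact
# on non-target ones" property the attack aims for.
print(kg.answer(*non_target_query))   # {'sumatriptan'}
```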
Related papers
- HijackRAG: Hijacking Attacks against Retrieval-Augmented Large Language Models [18.301965456681764]
We reveal a novel vulnerability, the retrieval prompt hijack attack (HijackRAG).
HijackRAG enables attackers to manipulate the retrieval mechanisms of RAG systems by injecting malicious texts into the knowledge database.
We propose both black-box and white-box attack strategies tailored to different levels of the attacker's knowledge.
arXiv Detail & Related papers (2024-10-30T09:15:51Z)
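To illustrate the injection mechanism summarized in the HijackRAG entry above, here is a minimal sketch under simplifying assumptions: retrieval is modeled as bag-of-words cosine similarity (real RAG systems use learned dense embeddings), all documents and queries are hypothetical, and the attacker appends one passage stuffed with the target query's terms so it is retrieved ahead of legitimate documents for that query only.

```python
# Toy sketch of retrieval hijacking by injecting a crafted passage into a
# knowledge base. Illustrative only: bag-of-words cosine similarity stands in
# for a real retriever, and all documents/queries are hypothetical.
import math
from collections import Counter


def cosine(a: str, b: str) -> float:
    """Cosine similarity between two texts under a bag-of-words model."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Return the top-k documents most similar to the query."""
    return sorted(docs, key=lambda d: cosine(query, d), reverse=True)[:k]


knowledge_base = [
    "Password resets require verification through the registered email address.",
    "VPN access is granted after multi-factor authentication is configured.",
]

target_query = "how do I reset my password"

# The attacker appends a passage packed with the target query's terms, so it
# wins the similarity ranking for that query and carries attacker-chosen content.
knowledge_base.append(
    "reset my password how do I reset my password -- attacker-chosen instructions would go here"
)

print(retrieve(target_query, knowledge_base))
# -> the injected passage is retrieved for the target query

print(retrieve("configure vpn access with multi-factor authentication", knowledge_base))
# -> a non-target query still retrieves the legitimate document
```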
- Using Retriever Augmented Large Language Models for Attack Graph Generation [0.7619404259039284]
This paper explores the approach of leveraging large language models (LLMs) to automate the generation of attack graphs.
It shows how to utilize Common Vulnerabilities and Exposures (CVEs) to create attack graphs from threat reports.
arXiv Detail & Related papers (2024-08-11T19:59:08Z)
- The Shadow of Fraud: The Emerging Danger of AI-powered Social Engineering and its Possible Cure [30.431292911543103]
Social engineering (SE) attacks remain a significant threat to both individuals and organizations.
The advancement of Artificial Intelligence (AI) has potentially intensified these threats by enabling more personalized and convincing attacks.
This survey paper categorizes SE attack mechanisms, analyzes their evolution, and explores methods for measuring these threats.
arXiv Detail & Related papers (2024-07-22T17:37:31Z)
- "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models [74.05368440735468]
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) by retrieving supporting content from external knowledge bases.
In this paper, we demonstrate a security threat where adversaries can exploit the openness of these knowledge bases.
arXiv Detail & Related papers (2024-06-26T05:36:23Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- Pre-trained Trojan Attacks for Visual Recognition [106.13792185398863]
Pre-trained vision models (PVMs) have become a dominant component due to their exceptional performance when fine-tuned for downstream tasks.
We propose the Pre-trained Trojan attack, which embeds backdoors into a PVM, enabling attacks across various downstream vision tasks.
We highlight the challenges posed by cross-task activation and shortcut connections in successful backdoor attacks.
arXiv Detail & Related papers (2023-12-23T05:51:40Z)
- Eroding Trust In Aerial Imagery: Comprehensive Analysis and Evaluation Of Adversarial Attacks In Geospatial Systems [24.953306643091484]
We show how adversarial attacks can degrade confidence in geospatial systems.
We empirically show their threat to remote sensing systems using high-quality SpaceNet datasets.
arXiv Detail & Related papers (2023-12-12T16:05:12Z)
- SAGE: Intrusion Alert-driven Attack Graph Extractor [4.530678016396476]
Attack graphs (AGs) are used to assess the pathways available to cyber adversaries for penetrating a network.
We propose to automatically learn AGs based on actions observed through intrusion alerts, without prior expert knowledge.
arXiv Detail & Related papers (2021-07-06T17:45:02Z)
- A System for Automated Open-Source Threat Intelligence Gathering and Management [53.65687495231605]
SecurityKG is a system for automated open-source cyber threat intelligence (OSCTI) gathering and management.
It uses a combination of AI and NLP techniques to extract high-fidelity knowledge about threat behaviors.
arXiv Detail & Related papers (2021-01-19T18:31:35Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.