Using Retriever Augmented Large Language Models for Attack Graph Generation
- URL: http://arxiv.org/abs/2408.05855v1
- Date: Sun, 11 Aug 2024 19:59:08 GMT
- Title: Using Retriever Augmented Large Language Models for Attack Graph Generation
- Authors: Renascence Tarafder Prapty, Ashish Kundu, Arun Iyengar
- Abstract summary: This paper explores the approach of leveraging large language models (LLMs) to automate the generation of attack graphs.
It shows how to intelligently chain Common Vulnerabilities and Exposures (CVEs) based on their preconditions and effects, and how to utilize LLMs to create attack graphs from threat reports.
- Score: 0.7619404259039284
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the complexity of modern systems increases, so does the importance of assessing their security posture through effective vulnerability management and threat modeling techniques. One powerful tool in the arsenal of cybersecurity professionals is the attack graph, a representation of all potential attack paths within a system that an adversary might exploit to achieve a certain objective. Traditional methods of generating attack graphs involve expert knowledge, manual curation, and computational algorithms that might not cover the entire threat landscape due to the ever-evolving nature of vulnerabilities and exploits. This paper explores the approach of leveraging large language models (LLMs), such as ChatGPT, to automate the generation of attack graphs by intelligently chaining Common Vulnerabilities and Exposures (CVEs) based on their preconditions and effects. It also shows how to utilize LLMs to create attack graphs from threat reports.
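The chaining step described in the abstract can be pictured with a short sketch. The Python below is a minimal illustration under stated assumptions, not the paper's implementation: query_llm is a hypothetical stand-in for a chat-completion API such as ChatGPT, and the prompt wording and the precondition/effect label vocabulary are assumptions.

```python
# Minimal sketch of LLM-assisted CVE chaining, NOT the authors' code.
# `query_llm` is a hypothetical placeholder for any chat-completion client.
from dataclasses import dataclass, field


@dataclass
class CVE:
    cve_id: str
    description: str
    preconditions: set[str] = field(default_factory=set)  # filled in by the LLM
    effects: set[str] = field(default_factory=set)        # filled in by the LLM


def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real chat-completion client here."""
    raise NotImplementedError


def annotate(cve: CVE) -> CVE:
    """Ask the LLM for a CVE's preconditions and effects as two comma-separated lines."""
    answer = query_llm(
        "List the attacker preconditions and the resulting effects for this CVE "
        f"as two comma-separated lines:\n{cve.cve_id}: {cve.description}"
    )
    pre_line, eff_line = answer.splitlines()[:2]
    cve.preconditions = {t.strip().lower() for t in pre_line.split(",") if t.strip()}
    cve.effects = {t.strip().lower() for t in eff_line.split(",") if t.strip()}
    return cve


def build_attack_graph(cves: list[CVE]) -> dict[str, list[str]]:
    """Add an edge u -> v whenever some effect of u satisfies a precondition of v."""
    graph: dict[str, list[str]] = {c.cve_id: [] for c in cves}
    for u in cves:
        for v in cves:
            if u is not v and u.effects & v.preconditions:
                graph[u.cve_id].append(v.cve_id)
    return graph
```

In a retrieval-augmented setup like the one named in the title, the CVE descriptions (or threat-report passages) would first be fetched by a retriever and supplied to the prompt as context; the same edge-building step could then be applied to vulnerabilities extracted from threat reports.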
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- Cyber Knowledge Completion Using Large Language Models [1.4883782513177093]
Integrating the Internet of Things (IoT) into Cyber-Physical Systems (CPSs) has expanded their cyber-attack surface.
Assessing the risks of CPSs is increasingly difficult due to incomplete and outdated cybersecurity knowledge.
Recent advancements in Large Language Models (LLMs) present a unique opportunity to enhance cyber-attack knowledge completion.
arXiv Detail & Related papers (2024-09-24T15:20:39Z)
- Safety in Graph Machine Learning: Threats and Safeguards [84.26643884225834]
Despite their societal benefits, recent research highlights significant safety concerns associated with the widespread use of Graph ML models.
Lacking safety-focused designs, these models can produce unreliable predictions, demonstrate poor generalizability, and compromise data confidentiality.
In high-stakes scenarios such as financial fraud detection, these vulnerabilities could jeopardize both individuals and society at large.
arXiv Detail & Related papers (2024-05-17T18:11:11Z)
- AI-based Attack Graph Generation [7.282532608209566]
Attack graphs are widely used to assess security threats within networks.
A drawback emerges as the network scales: generating attack graphs becomes time-consuming.
By utilizing AI models, attack graphs that approximate optimal outcomes can be created within a short period.
arXiv Detail & Related papers (2023-11-24T08:35:16Z)
- Streamlining Attack Tree Generation: A Fragment-Based Approach [39.157069600312774]
We present a novel fragment-based attack graph generation approach that utilizes information from publicly available information security databases.
We also propose a domain-specific language for attack modeling, which we employ in the proposed attack graph generation approach.
arXiv Detail & Related papers (2023-10-01T12:41:38Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to knowledge graph reasoning (KGR) according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)
- Looking Beyond IoCs: Automatically Extracting Attack Patterns from External CTI [3.871148938060281]
LADDER is a framework that can extract text-based attack patterns from cyberthreat intelligence reports at scale.
We present several use cases to demonstrate the application of LADDER in real-world scenarios.
arXiv Detail & Related papers (2022-11-01T12:16:30Z)
- GraphMAE: Self-Supervised Masked Graph Autoencoders [52.06140191214428]
We present a masked graph autoencoder GraphMAE that mitigates issues for generative self-supervised graph learning.
We conduct extensive experiments on 21 public datasets for three different graph learning tasks.
The results show that GraphMAE, a simple graph autoencoder with our careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines.
arXiv Detail & Related papers (2022-05-22T11:57:08Z)
- Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization [40.37373934201329]
We investigate and develop model extraction attacks against GNN models.
We first formalise the threat modelling in the context of GNN model extraction.
We then present detailed methods which utilise the accessible knowledge in each threat to implement the attacks.
arXiv Detail & Related papers (2020-10-24T03:09:37Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior work in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)