From Sands to Mansions: Enabling Automatic Full-Life-Cycle Cyberattack Construction with LLM
- URL: http://arxiv.org/abs/2407.16928v1
- Date: Wed, 24 Jul 2024 01:33:57 GMT
- Title: From Sands to Mansions: Enabling Automatic Full-Life-Cycle Cyberattack Construction with LLM
- Authors: Lingzhi Wang, Jiahui Wang, Kyle Jung, Kedar Thiagarajan, Emily Wei, Xiangmin Shen, Yan Chen, Zhenyuan Li
- Abstract summary: Existing cyberattack simulation frameworks face challenges such as limited technical coverage, inability to conduct full-life-cycle attacks, and the need for manual infrastructure building.
We proposed AURORA, an automatic end-to-end cyberattack construction and emulation framework.
AURORA can autonomously build multi-stage cyberattack plans based on Cyber Threat Intelligence (CTI) reports, construct the emulation infrastructures, and execute the attack procedures.
- Score: 6.534605400247412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The escalating battles between attackers and defenders in cybersecurity make it imperative to test and evaluate defense capabilities from the attackers' perspective. However, constructing full-life-cycle cyberattacks and performing red team emulations requires significant time and domain knowledge from security experts. Existing cyberattack simulation frameworks face challenges such as limited technical coverage, inability to conduct full-life-cycle attacks, and the need for manual infrastructure building. These limitations hinder the quality and diversity of the constructed attacks. In this paper, we leveraged the capabilities of Large Language Models (LLMs) in summarizing knowledge from existing attack intelligence and generating executable machine code based on human knowledge, and proposed AURORA, an automatic end-to-end cyberattack construction and emulation framework. AURORA can autonomously build multi-stage cyberattack plans based on Cyber Threat Intelligence (CTI) reports, construct the emulation infrastructures, and execute the attack procedures. We also developed an attack procedure knowledge graph to integrate knowledge about attack techniques throughout the full life cycle of advanced cyberattacks from various sources. We constructed and evaluated more than 20 full-life-cycle cyberattacks based on existing CTI reports. Compared to previous attack simulation frameworks, AURORA can construct multi-step attacks and their infrastructures in several minutes without human intervention. Furthermore, AURORA incorporates a wider range (40% more) of attack techniques into the constructed attacks more efficiently than professional red teams. To benefit further research, we open-sourced the dataset containing the execution files and infrastructures of 20 emulated cyberattacks.
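The abstract mentions an attack procedure knowledge graph but does not describe its schema. Below is a minimal sketch, assuming a networkx directed graph whose nodes are MITRE ATT&CK techniques and whose edges are stage-to-stage transitions extracted from CTI reports; the attribute layout and the path-based planning step are illustrative assumptions, not AURORA's actual implementation.
```python
# Minimal sketch of an attack-procedure knowledge graph (assumed schema, not AURORA's).
# Nodes: MITRE ATT&CK techniques with tactic metadata.
# Edges: "followed by" transitions attributed to the CTI report they were extracted from.
import networkx as nx

kg = nx.DiGraph()

# Technique IDs and names are from MITRE ATT&CK; the attribute layout is hypothetical.
kg.add_node("T1566", name="Phishing", tactic="Initial Access")
kg.add_node("T1059", name="Command and Scripting Interpreter", tactic="Execution")
kg.add_node("T1547", name="Boot or Logon Autostart Execution", tactic="Persistence")

# Transitions a CTI parser might extract from a report's narrative.
kg.add_edge("T1566", "T1059", source_report="example-cti-report")
kg.add_edge("T1059", "T1547", source_report="example-cti-report")

# A multi-stage plan can then be expressed as a path query over the graph.
plan = nx.shortest_path(kg, source="T1566", target="T1547")
print(" -> ".join(plan))  # T1566 -> T1059 -> T1547
```
Representing techniques and transitions this way keeps planning a pure graph query, which is one plausible reason a knowledge graph can cover a wider range of techniques than hand-written playbooks.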
Related papers
- Towards in-situ Psychological Profiling of Cybercriminals Using Dynamically Generated Deception Environments [0.0]
Cybercrime is estimated to cost the global economy almost $10 trillion annually.
The traditional perimeter security approach to cyber defence has so far proved inadequate to combat the growing threat of cybercrime.
Deceptive techniques aim to mislead attackers, diverting them from critical assets whilst simultaneously gathering cyber threat intelligence on the threat actor.
This article presents a proof-of-concept system that has been developed to capture the profile of an attacker in-situ, during a simulated cyber-attack in real time.
arXiv Detail & Related papers (2024-05-19T09:48:59Z)
- SEvenLLM: Benchmarking, Eliciting, and Enhancing Abilities of Large Language Models in Cyber Threat Intelligence [27.550484938124193]
This paper introduces a framework to benchmark, elicit, and improve cybersecurity incident analysis and response abilities.
We create a high-quality bilingual instruction corpus by crawling cybersecurity raw text from cybersecurity websites.
The instruction dataset SEvenLLM-Instruct is used to train cybersecurity LLMs with the multi-task learning objective.
arXiv Detail & Related papers (2024-05-06T13:17:43Z)
- Use of Graph Neural Networks in Aiding Defensive Cyber Operations [2.1874189959020427]
Graph Neural Networks have emerged as a promising approach for enhancing the effectiveness of defensive measures.
We look into the application of GNNs in helping to break each stage of one of the most renowned attack life cycles, the Lockheed Martin Cyber Kill Chain.
arXiv Detail & Related papers (2024-01-11T05:56:29Z)
- Attack Prompt Generation for Red Teaming and Defending Large Language Models [70.157691818224]
Large language models (LLMs) are susceptible to red teaming attacks, which can induce LLMs to generate harmful content.
We propose an integrated approach that combines manual and automatic methods to economically generate high-quality attack prompts.
arXiv Detail & Related papers (2023-10-19T06:15:05Z)
- Towards Automated Classification of Attackers' TTPs by combining NLP with ML Techniques [77.34726150561087]
We evaluate and compare different Natural Language Processing (NLP) and machine learning techniques used for security information extraction in research.
Based on our investigations, we propose a data processing pipeline that automatically classifies unstructured text according to attackers' tactics and techniques (a minimal illustrative sketch of such a classifier appears after this list).
arXiv Detail & Related papers (2022-07-18T09:59:21Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent can be used to generate realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Reinforcement Learning for Feedback-Enabled Cyber Resilience [24.92055101652206]
Cyber resilience provides a new security paradigm that complements inadequate protection with resilience mechanisms.
A Cyber-Resilient Mechanism (CRM) adapts to the known or zero-day threats and uncertainties in real-time.
We review the literature on RL for cyber resiliency and discuss the cyber-resilient defenses against three major types of vulnerabilities.
arXiv Detail & Related papers (2021-07-02T01:08:45Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
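As referenced in the TTP-classification entry above, the sketch below shows one way such a pipeline could look: a TF-IDF plus logistic-regression classifier (scikit-learn) mapping CTI sentences to ATT&CK tactic labels. The training sentences, labels, and model choice are illustrative assumptions rather than the pipeline proposed in that paper.
```python
# Illustrative sketch: classifying unstructured CTI sentences into ATT&CK tactics.
# The corpus below is toy data; a real pipeline would train on an annotated CTI dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "The actor sent spearphishing emails with malicious attachments.",
    "PowerShell was used to download and execute the second-stage payload.",
    "A registry run key was added to maintain persistence across reboots.",
    "Credentials were dumped from LSASS process memory.",
]
train_tactics = ["initial-access", "execution", "persistence", "credential-access"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # word and bigram features
    LogisticRegression(max_iter=1000),     # simple multiclass baseline
)
clf.fit(train_sentences, train_tactics)

print(clf.predict(["The malware created a scheduled task to run at user logon."]))
```
In practice such a classifier would be trained on a labeled CTI corpus and evaluated per tactic; the four-sentence corpus here only demonstrates the interface.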
This list is automatically generated from the titles and abstracts of the papers on this site.