SAGE: Intrusion Alert-driven Attack Graph Extractor
- URL: http://arxiv.org/abs/2107.02783v1
- Date: Tue, 6 Jul 2021 17:45:02 GMT
- Title: SAGE: Intrusion Alert-driven Attack Graph Extractor
- Authors: Azqa Nadeem, Sicco Verwer, Stephen Moskal, Shanchieh Jay Yang
- Abstract summary: Attack graphs (AGs) are used to assess pathways availed by cyber adversaries to penetrate a network.
We propose to automatically learn AGs based on actions observed through intrusion alerts, without prior expert knowledge.
- Score: 4.530678016396476
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Attack graphs (AG) are used to assess pathways availed by cyber adversaries
to penetrate a network. State-of-the-art approaches for AG generation focus
mostly on deriving dependencies between system vulnerabilities based on network
scans and expert knowledge. In real-world operations however, it is costly and
ineffective to rely on constant vulnerability scanning and expert-crafted AGs.
We propose to automatically learn AGs based on actions observed through
intrusion alerts, without prior expert knowledge. Specifically, we develop an
unsupervised sequence learning system, SAGE, that leverages the temporal and
probabilistic dependence between alerts in a suffix-based probabilistic
deterministic finite automaton (S-PDFA) -- a model that accentuates infrequent
severe alerts and summarizes paths leading to them. AGs are then derived from
the S-PDFA. Tested with intrusion alerts collected through the Collegiate
Penetration Testing Competition, SAGE produces AGs that reflect the strategies
used by participating teams. The resulting AGs are succinct, interpretable, and
enable analysts to derive actionable insights, e.g., attackers tend to follow
shorter paths after they have discovered a longer one.
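As a loose illustration of the suffix-based idea, the sketch below counts predecessor transitions over reversed alert sequences (so states near the final, typically most severe alert are anchored first) and enumerates candidate paths leading to a severe alert. This is not the paper's actual S-PDFA learning algorithm, which merges automaton states; the alert names and function names here are hypothetical:

```python
from collections import defaultdict

def build_suffix_model(alert_sequences):
    # Count predecessor transitions over reversed sequences, so that
    # states close to the final (often most severe) alert are modeled
    # first -- a loose nod to the suffix-based view of the S-PDFA.
    predecessors = defaultdict(lambda: defaultdict(int))
    for seq in alert_sequences:
        rev = list(reversed(seq))
        for later, earlier in zip(rev, rev[1:]):
            predecessors[later][earlier] += 1
    return predecessors

def paths_to(model, severe_alert):
    # Walk backwards from a severe alert through observed predecessors,
    # emitting each maximal chain in forward (chronological) order.
    paths = []
    def walk(state, path, seen):
        preds = model.get(state, {})
        if not preds:
            paths.append(list(reversed(path)))
            return
        for pred in preds:
            if pred in seen:  # avoid looping on repeated alert types
                paths.append(list(reversed(path)))
                continue
            walk(pred, path + [pred], seen | {pred})
    walk(severe_alert, [severe_alert], {severe_alert})
    return paths

if __name__ == "__main__":
    seqs = [["scan", "exploit", "exfil"],
            ["scan", "bruteforce", "exfil"]]
    for p in paths_to(build_suffix_model(seqs), "exfil"):
        print(" -> ".join(p))
```

A real implementation would additionally weight transitions by probability and prune infrequent paths, which is what makes the resulting attack graphs succinct.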
Related papers
- Just Ask: Curious Code Agents Reveal System Prompts in Frontier LLMs [65.6660735371212]
We present JustAsk, a framework that autonomously discovers effective extraction strategies through interaction alone. It formulates extraction as an online exploration problem, using Upper Confidence Bound-based strategy selection and a hierarchical skill space spanning atomic probes and high-level orchestration. Our results expose system prompts as a critical yet largely unprotected attack surface in modern agent systems.
arXiv Detail & Related papers (2026-01-29T03:53:25Z) - The Trojan Knowledge: Bypassing Commercial LLM Guardrails via Harmless Prompt Weaving and Adaptive Tree Search [58.8834056209347]
Large language models (LLMs) remain vulnerable to jailbreak attacks that bypass safety guardrails to elicit harmful outputs. We introduce the Correlated Knowledge Attack Agent (CKA-Agent), a dynamic framework that reframes jailbreaking as an adaptive, tree-structured exploration of the target model's knowledge base.
arXiv Detail & Related papers (2025-12-01T07:05:23Z) - Security Logs to ATT&CK Insights: Leveraging LLMs for High-Level Threat Understanding and Cognitive Trait Inference [1.8135692038751479]
Real-time defense requires the ability to infer attacker intent and cognitive strategy from intrusion detection system (IDS) logs. We propose a novel framework that leverages large language models (LLMs) to analyze Suricata IDS logs and infer attacker actions. This lays the groundwork for future work on behaviorally adaptive cyber defense and cognitive trait inference.
arXiv Detail & Related papers (2025-10-23T18:43:31Z) - BlindGuard: Safeguarding LLM-based Multi-Agent Systems under Unknown Attacks [58.959622170433725]
BlindGuard is an unsupervised defense method that learns without requiring any attack-specific labels or prior knowledge of malicious behaviors. We show that BlindGuard effectively detects diverse attack types (i.e., prompt injection, memory poisoning, and tool attacks) across multi-agent systems.
arXiv Detail & Related papers (2025-08-11T16:04:47Z) - Preliminary Investigation into Uncertainty-Aware Attack Stage Classification [81.28215542218724]
This work addresses the problem of attack stage inference under uncertainty. We propose a classification approach based on Evidential Deep Learning (EDL), which models predictive uncertainty by outputting parameters of a Dirichlet distribution over possible stages. Preliminary experiments in a simulated environment demonstrate that the proposed model can accurately infer the stage of an attack with confidence.
arXiv Detail & Related papers (2025-08-01T06:58:00Z) - CyberRAG: An agentic RAG cyber attack classification and reporting tool [1.0345929832241807]
CyberRAG is a modular, agent-based framework that delivers real-time classification, explanation, and structured reporting for cyber-attacks. Unlike traditional RAG systems, CyberRAG embraces an agentic design that enables dynamic control flow and adaptive reasoning. In evaluation, CyberRAG achieves over 94% accuracy per class, pushing final classification accuracy to 94.92%.
arXiv Detail & Related papers (2025-07-03T08:32:19Z) - The Silent Saboteur: Imperceptible Adversarial Attacks against Black-Box Retrieval-Augmented Generation Systems [101.68501850486179]
We explore adversarial attacks against retrieval-augmented generation (RAG) systems to identify their vulnerabilities. This task aims to find imperceptible perturbations that retrieve a target document, originally excluded from the initial top-$k$ candidate set. We propose ReGENT, a reinforcement learning-based framework that tracks interactions between the attacker and the target RAG.
arXiv Detail & Related papers (2025-05-24T08:19:25Z) - OMNISEC: LLM-Driven Provenance-based Intrusion Detection via Retrieval-Augmented Behavior Prompting [4.71781133841068]
Provenance-based Intrusion Detection Systems (PIDSes) have been widely used for endpoint threat analysis. Due to the evolution of attack techniques, rules cannot dynamically model all the characteristics of attackers. Anomaly-based detection systems face a massive false positive problem because they cannot distinguish between changes in normal behavior and real attack behavior.
arXiv Detail & Related papers (2025-03-05T02:08:12Z) - Forecasting Attacker Actions using Alert-driven Attack Graphs [1.3812010983144802]
This paper builds an action forecasting capability on top of the alert-driven AG framework for predicting the next likely attacker action.
We also modify the framework to build AGs in real time, as new alerts are triggered.
This way, we convert alert-driven AGs into an early warning system that enables analysts to circumvent ongoing attacks and break the cyber kill chain.
arXiv Detail & Related papers (2024-08-19T11:04:47Z) - HADES: Detecting Active Directory Attacks via Whole Network Provenance Analytics [7.203330561731627]
Active Directory (AD) is a top target of Advanced Persistent Threat (APT) actors.
We propose HADES, the first PIDS capable of performing accurate causality-based cross-machine tracing.
We introduce a novel lightweight authentication anomaly detection model rooted in our analysis of AD attacks.
arXiv Detail & Related papers (2024-07-26T16:46:29Z) - Pre-trained Trojan Attacks for Visual Recognition [106.13792185398863]
Pre-trained vision models (PVMs) have become a dominant component due to their exceptional performance when fine-tuned for downstream tasks.
We propose the Pre-trained Trojan attack, which embeds backdoors into a PVM, enabling attacks across various downstream vision tasks.
We highlight the challenges posed by cross-task activation and shortcut connections in successful backdoor attacks.
arXiv Detail & Related papers (2023-12-23T05:51:40Z) - On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z) - Prepare for Trouble and Make it Double. Supervised and Unsupervised Stacking for Anomaly-Based Intrusion Detection [4.56877715768796]
We propose the adoption of meta-learning, in the form of a two-layer Stacker, to create a mixed approach that detects both known and unknown threats.
It turns out to be more effective in detecting zero-day attacks than supervised algorithms, limiting their main weakness but still maintaining adequate capabilities in detecting known attacks.
arXiv Detail & Related papers (2022-02-28T08:41:32Z) - Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system to prevent network attacks before they could cause any more damage to the system under attack.
arXiv Detail & Related papers (2022-01-27T16:35:37Z) - A Rule Mining-Based Advanced Persistent Threats Detection System [2.75264806444313]
Advanced persistent threats (APT) are stealthy cyber-attacks aimed at stealing valuable information from target organizations.
Provenance-tracking and trace mining are considered promising as they can help find causal relationships between activities and flag suspicious event sequences as they occur.
We introduce an unsupervised method that exploits OS-independent features reflecting process activity to detect realistic APT-like attacks from provenance traces.
arXiv Detail & Related papers (2021-05-20T22:13:13Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z) - Adversarial vs behavioural-based defensive AI with joint, continual and
active learning: automated evaluation of robustness to deception, poisoning
and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.