SAGE: Intrusion Alert-driven Attack Graph Extractor
- URL: http://arxiv.org/abs/2107.02783v1
- Date: Tue, 6 Jul 2021 17:45:02 GMT
- Title: SAGE: Intrusion Alert-driven Attack Graph Extractor
- Authors: Azqa Nadeem, Sicco Verwer, Stephen Moskal, Shanchieh Jay Yang
- Abstract summary: Attack graphs (AGs) are used to assess the pathways that cyber adversaries use to penetrate a network.
We propose to automatically learn AGs based on actions observed through intrusion alerts, without prior expert knowledge.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Attack graphs (AGs) are used to assess the pathways that cyber adversaries
use to penetrate a network. State-of-the-art approaches for AG generation focus
mostly on deriving dependencies between system vulnerabilities based on network
scans and expert knowledge. In real-world operations, however, it is costly and
ineffective to rely on constant vulnerability scanning and expert-crafted AGs.
We propose to automatically learn AGs based on actions observed through
intrusion alerts, without prior expert knowledge. Specifically, we develop an
unsupervised sequence learning system, SAGE, that leverages the temporal and
probabilistic dependence between alerts in a suffix-based probabilistic
deterministic finite automaton (S-PDFA) -- a model that accentuates infrequent
severe alerts and summarizes paths leading to them. AGs are then derived from
the S-PDFA. Tested with intrusion alerts collected through the Collegiate
Penetration Testing Competition, SAGE produces AGs that reflect the strategies
used by participating teams. The resulting AGs are succinct, interpretable, and
enable analysts to derive actionable insights, e.g., attackers tend to follow
shorter paths after they have discovered a longer one.
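The suffix-based idea in the abstract can be illustrated with a minimal sketch: learning a probabilistic model over *reversed* alert sequences, so that rare, severe alerts (the endings of attack episodes) become the roots of the model and the paths leading to them are grouped rather than averaged away. This is not the authors' S-PDFA implementation; the alert names, episodes, and helper functions below are purely illustrative assumptions.

```python
# Illustrative sketch of the suffix-based intuition behind an S-PDFA:
# build transition counts of a prefix tree over REVERSED alert sequences,
# so the final (often most severe) alert of each episode becomes the first
# symbol and paths leading to it share structure. Toy data only; this is
# not the SAGE codebase.
from collections import defaultdict


def build_suffix_model(sequences):
    """Count transitions of a prefix tree built over reversed sequences."""
    counts = defaultdict(lambda: defaultdict(int))  # state -> symbol -> count
    for seq in sequences:
        state = ()  # root of the (suffix) tree
        for symbol in reversed(seq):
            counts[state][symbol] += 1
            state = state + (symbol,)  # descend into the child state
    return counts


def next_symbol_probs(counts, state):
    """Normalize a state's outgoing transition counts into probabilities."""
    total = sum(counts[state].values())
    return {s: c / total for s, c in counts[state].items()}


# Toy alert episodes: most end benignly, one ends in data exfiltration.
episodes = [
    ["scan", "bruteforce"],
    ["scan", "bruteforce"],
    ["scan", "exploit", "exfiltration"],
]
model = build_suffix_model(episodes)
# At the root, episode *endings* are distinguished: the rare severe ending
# ("exfiltration") keeps its own branch instead of being merged away.
root_probs = next_symbol_probs(model, ())
```

In a forward (prefix) model all three episodes would collapse into the common "scan" root; reversing keeps the infrequent severe outcome visible, which is the property the abstract attributes to the S-PDFA.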
Related papers
- Pre-trained Trojan Attacks for Visual Recognition [106.13792185398863]
Pre-trained vision models (PVMs) have become a dominant component due to their exceptional performance when fine-tuned for downstream tasks.
We propose the Pre-trained Trojan attack, which embeds backdoors into a PVM, enabling attacks across various downstream vision tasks.
We highlight the challenges posed by cross-task activation and shortcut connections in successful backdoor attacks.
arXiv Detail & Related papers (2023-12-23T05:51:40Z)
- On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z)
- Prepare for Trouble and Make it Double. Supervised and Unsupervised Stacking for Anomaly-Based Intrusion Detection [4.56877715768796]
We propose the adoption of meta-learning, in the form of a two-layer Stacker, to create a mixed approach that detects both known and unknown threats.
It turns out to be more effective at detecting zero-day attacks than supervised algorithms, limiting their main weakness while still maintaining adequate capabilities for detecting known attacks.
arXiv Detail & Related papers (2022-02-28T08:41:32Z)
- Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system that stops network attacks before they can cause further damage to the system under attack.
arXiv Detail & Related papers (2022-01-27T16:35:37Z)
- A Rule Mining-Based Advanced Persistent Threats Detection System [2.75264806444313]
Advanced persistent threats (APT) are stealthy cyber-attacks aimed at stealing valuable information from target organizations.
Provenance-tracking and trace mining are considered promising as they can help find causal relationships between activities and flag suspicious event sequences as they occur.
We introduce an unsupervised method that exploits OS-independent features reflecting process activity to detect realistic APT-like attacks from provenance traces.
arXiv Detail & Related papers (2021-05-20T22:13:13Z)
- Online Adversarial Purification based on Self-Supervision [6.821598757786515]
We present Self-supervised Online Adversarial Purification (SOAP), a novel defense strategy that uses a self-supervised loss to purify adversarial examples at test-time.
SOAP yields competitive robust accuracy against state-of-the-art adversarial training and purification methods.
To the best of our knowledge, our paper is the first that generalizes the idea of using self-supervised signals to perform online test-time purification.
arXiv Detail & Related papers (2021-01-23T00:19:52Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.