Critical Path Prioritization Dashboard for Alert-driven Attack Graphs
- URL: http://arxiv.org/abs/2310.13079v1
- Date: Thu, 19 Oct 2023 18:16:04 GMT
- Title: Critical Path Prioritization Dashboard for Alert-driven Attack Graphs
- Authors: Sònia Leal Díaz, Sergio Pastrana, Azqa Nadeem
- Abstract summary: This paper proposes a querying and prioritization-enabled visual analytics dashboard for SAGE.
We describe the utility of the proposed dashboard using intrusion alerts collected from a distributed multi-stage team-based attack scenario.
We find that the dashboard is useful in depicting attacker strategies and attack progression, but can be improved in terms of usability.
- Score: 3.4000567392487127
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Although intrusion alerts can provide threat intelligence regarding attacker strategies, extracting such intelligence via existing tools is expensive and time-consuming. Earlier work has proposed SAGE, which generates attack graphs from intrusion alerts using unsupervised sequential machine learning. This paper proposes a querying and prioritization-enabled visual analytics dashboard for SAGE. The dashboard has three main components: (i) a Graph Explorer that presents a global view of all attacker strategies, (ii) a Timeline Viewer that correlates attacker actions chronologically, and (iii) a Recommender Matrix that highlights prevalent critical alerts via a MITRE ATT&CK-inspired attack stage matrix. We describe the utility of the proposed dashboard using intrusion alerts collected from a distributed multi-stage team-based attack scenario. We evaluate the utility of the dashboard through a user study. Based on the responses of a small set of security practitioners, we find that the dashboard is useful in depicting attacker strategies and attack progression, but can be improved in terms of usability.
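The core idea behind SAGE — learning attack graphs from intrusion alert sequences rather than from expert-built network models — can be illustrated with a minimal sketch. This is not the actual SAGE implementation (which uses unsupervised suffix-based sequential learning); the alert fields, attack-stage names, and grouping logic below are simplified assumptions for illustration only: group alerts per attacker-victim pair, order them chronologically, and link consecutive attack stages into directed edges.

```python
from collections import defaultdict

# Hypothetical simplified alerts: (timestamp, attacker, victim, attack_stage).
# SAGE maps alert signatures to attack stages via a taxonomy; the stage
# labels here are illustrative only.
alerts = [
    (1, "10.0.0.5", "10.0.1.9", "scan"),
    (2, "10.0.0.5", "10.0.1.9", "vuln_discovery"),
    (5, "10.0.0.5", "10.0.1.9", "exploit"),
    (3, "10.0.0.7", "10.0.1.9", "scan"),
    (6, "10.0.0.7", "10.0.1.9", "brute_force"),
]

def build_attack_graph(alerts):
    """Return directed edges (stage -> next stage) per attacker-victim pair."""
    episodes = defaultdict(list)
    for ts, src, dst, stage in alerts:
        episodes[(src, dst)].append((ts, stage))
    edges = set()
    for seq in episodes.values():
        seq.sort()  # chronological order within each attacker-victim pair
        for (_, a), (_, b) in zip(seq, seq[1:]):
            if a != b:  # collapse repeated stages
                edges.add((a, b))
    return edges

edges = build_attack_graph(alerts)
```

The resulting edge set is the kind of global strategy view the dashboard's Graph Explorer renders, with one path per attacker-victim pair.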
Related papers
- Forecasting Attacker Actions using Alert-driven Attack Graphs [1.3812010983144802]
This paper builds an action forecasting capability on top of the alert-driven AG framework for predicting the next likely attacker action.
We also modify the framework to build AGs in real time, as new alerts are triggered.
This way, we convert alert-driven AGs into an early warning system that enables analysts to circumvent ongoing attacks and break the cyber kill chain.
arXiv Detail & Related papers (2024-08-19T11:04:47Z) - Using Retriever Augmented Large Language Models for Attack Graph Generation [0.7619404259039284]
This paper explores the approach of leveraging large language models (LLMs) to automate the generation of attack graphs.
It shows how to utilize Common Vulnerabilities and Exposures (CVEs) to create attack graphs from threat reports.
arXiv Detail & Related papers (2024-08-11T19:59:08Z) - Pre-trained Trojan Attacks for Visual Recognition [106.13792185398863]
Pre-trained vision models (PVMs) have become a dominant component due to their exceptional performance when fine-tuned for downstream tasks.
We propose the Pre-trained Trojan attack, which embeds backdoors into a PVM, enabling attacks across various downstream vision tasks.
We highlight the challenges posed by cross-task activation and shortcut connections in successful backdoor attacks.
arXiv Detail & Related papers (2023-12-23T05:51:40Z) - Streamlining Attack Tree Generation: A Fragment-Based Approach [39.157069600312774]
We present a novel fragment-based attack graph generation approach that utilizes information from publicly available information security databases.
We also propose a domain-specific language for attack modeling, which we employ in the proposed attack graph generation approach.
arXiv Detail & Related papers (2023-10-01T12:41:38Z) - PRAT: PRofiling Adversarial aTtacks [52.693011665938734]
We introduce the novel problem of PRofiling Adversarial aTtacks (PRAT).
Given an adversarial example, the objective of PRAT is to identify the attack used to generate it.
We use a new Adversarial Identification Dataset (AID) to devise a novel framework for the PRAT objective.
arXiv Detail & Related papers (2023-09-20T07:42:51Z) - ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger [11.622811907571132]
Textual backdoor attacks pose a practical threat to existing systems.
With cutting-edge generative models such as GPT-4 pushing rewriting to extraordinary levels, such attacks are becoming even harder to detect.
We conduct a comprehensive investigation of the role of black-box generative models as a backdoor attack tool, highlighting the importance of researching relative defense strategies.
arXiv Detail & Related papers (2023-04-27T19:26:25Z) - Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z) - Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z) - SAGE: Intrusion Alert-driven Attack Graph Extractor [4.530678016396476]
Attack graphs (AGs) are used to assess pathways availed by cyber adversaries to penetrate a network.
We propose to automatically learn AGs based on actions observed through intrusion alerts, without prior expert knowledge.
arXiv Detail & Related papers (2021-07-06T17:45:02Z)
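Several of the papers above reason over attacker action sequences, e.g. forecasting the next likely attacker action from an alert-driven AG. As a rough illustration only (the cited work learns suffix-based probabilistic models, not the first-order Markov counts used here; the stage names are hypothetical), next-action prediction over observed stage sequences can be sketched as:

```python
from collections import Counter, defaultdict

def train_transitions(sequences):
    """Count first-order transitions between consecutive attack stages."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, current):
    """Most frequently observed successor of the current stage, or None."""
    if current not in counts:
        return None
    return counts[current].most_common(1)[0][0]

# Illustrative training sequences of attack stages.
sequences = [
    ["scan", "vuln_discovery", "exploit", "exfiltration"],
    ["scan", "vuln_discovery", "brute_force"],
    ["scan", "exploit"],
]
model = train_transitions(sequences)
```

Run in real time as new alerts arrive, such a predictor is what turns an alert-driven AG into an early warning system.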
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.