That Escalated Quickly: An ML Framework for Alert Prioritization
- URL: http://arxiv.org/abs/2302.06648v2
- Date: Wed, 15 Feb 2023 18:31:48 GMT
- Title: That Escalated Quickly: An ML Framework for Alert Prioritization
- Authors: Ben Gelman, Salma Taoufiq, Tamás Vörös, Konstantin Berlin
- Abstract summary: We present That Escalated Quickly (TEQ), a machine learning framework that reduces alert fatigue with minimal changes to SOC workflows.
On real-world data, the system is able to reduce the time it takes to respond to actionable incidents by $22.9\%$, suppress $54\%$ of false positives with a $95.1\%$ detection rate, and reduce the number of alerts an analyst needs to investigate within singular incidents by $14\%$.
- Score: 2.5845893156827158
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In place of in-house solutions, organizations are increasingly moving towards
managed services for cyber defense. Security Operations Centers (SOCs) are
specialized cybersecurity units responsible for the defense of an organization,
but the large-scale centralization of threat detection is causing SOCs to endure
an overwhelming number of false positive alerts -- a phenomenon known as alert
fatigue. Large collections of imprecise sensors, an inability to adapt to known
false positives, evolution of the threat landscape, and inefficient use of
analyst time all contribute to the alert fatigue problem. To combat these
issues, we present That Escalated Quickly (TEQ), a machine learning framework
that reduces alert fatigue with minimal changes to SOC workflows by predicting
alert-level and incident-level actionability. On real-world data, the system is
able to reduce the time it takes to respond to actionable incidents by
$22.9\%$, suppress $54\%$ of false positives with a $95.1\%$ detection rate,
and reduce the number of alerts an analyst needs to investigate within singular
incidents by $14\%$.
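As a rough illustration of the approach the abstract describes, the sketch below trains a classifier to predict actionability from a few alert features and re-orders the incident queue by the predicted probability. The feature names, the model choice, and the data are assumptions made for illustration, not TEQ's actual pipeline.

```python
# Hedged sketch: rank SOC incidents by predicted actionability.
# Features, labels, and model are illustrative assumptions, not TEQ's design.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical alert features: [severity, rule_fp_rate, asset_criticality, hour_of_day]
X_train = np.array([[5, 0.1, 3, 2], [1, 0.8, 1, 14], [4, 0.2, 5, 3], [2, 0.9, 1, 11]])
y_train = np.array([1, 0, 1, 0])  # 1 = analyst acted on it, 0 = dismissed as a false positive

model = GradientBoostingClassifier().fit(X_train, y_train)

def prioritize(incidents, features):
    """Sort open incidents by predicted probability of being actionable."""
    scores = model.predict_proba(features)[:, 1]
    order = np.argsort(-scores)  # highest actionability first
    return [(incidents[i], float(scores[i])) for i in order]

queue = ["incident-17", "incident-18", "incident-19"]
feats = np.array([[3, 0.3, 4, 4], [1, 0.7, 1, 13], [5, 0.05, 5, 1]])
for name, score in prioritize(queue, feats):
    print(f"{name}: actionability={score:.2f}")
```

Re-ordering the existing queue, rather than replacing it, is what keeps the change to analyst workflows minimal: low-scoring alerts can also be suppressed below a threshold tuned to the desired detection rate.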
Related papers
- The potential of LLM-generated reports in DevSecOps [3.4888132404740797]
Alert fatigue is a common issue faced by software teams using the DevSecOps paradigm.
This paper explores the potential of LLMs in generating actionable security reports.
Integrating these reports into DevSecOps can mitigate attention saturation and alert fatigue.
arXiv Detail & Related papers (2024-10-02T18:01:12Z)
- Forecasting Attacker Actions using Alert-driven Attack Graphs [1.3812010983144802]
This paper builds an action forecasting capability on top of the alert-driven AG framework for predicting the next likely attacker action.
We also modify the framework to build AGs in real time, as new alerts are triggered.
This way, we convert alert-driven AGs into an early warning system that enables analysts to circumvent ongoing attacks and break the cyber kill chain.
arXiv Detail & Related papers (2024-08-19T11:04:47Z)
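A minimal sketch of the idea of building an alert-driven attack graph online and forecasting the next attacker action. The per-source tracking, the node semantics, and the frequency-based forecast are simplifying assumptions, not the paper's exact AG construction.

```python
# Hedged sketch: incrementally build an attack graph from streaming alerts and
# forecast the most frequent next action. Semantics are illustrative assumptions.
from collections import defaultdict

class AlertDrivenAG:
    def __init__(self):
        self.edges = defaultdict(lambda: defaultdict(int))  # action -> next action -> count
        self.last_action = {}                               # attacker source -> last action seen

    def on_alert(self, src_ip, action):
        """Update the graph as each new alert is triggered."""
        prev = self.last_action.get(src_ip)
        if prev is not None:
            self.edges[prev][action] += 1
        self.last_action[src_ip] = action

    def forecast(self, src_ip):
        """Predict the most likely next action for this attacker, if any edge exists."""
        prev = self.last_action.get(src_ip)
        nxt = self.edges.get(prev)
        return max(nxt, key=nxt.get) if nxt else None

ag = AlertDrivenAG()
for ip, act in [("10.0.0.5", "scan"), ("10.0.0.5", "brute_force"), ("10.0.0.5", "lateral_move")]:
    ag.on_alert(ip, act)
print(ag.forecast("10.0.0.5"))  # no outgoing edge from "lateral_move" yet -> None
```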
"Alert fatigue" is one of the biggest challenges faced by the Security Operations Center (SOC) today.
We present Carbon Filter, a statistical learning based system that dramatically reduces the number of alerts analysts need to manually review.
arXiv Detail & Related papers (2024-05-07T22:06:24Z)
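A minimal sketch of triage-by-clustering in the spirit of Carbon Filter: alerts that share a coarse signature are grouped so an analyst reviews one representative per cluster. The grouping key and alert fields are assumptions; the real system relies on large-scale clustering and fast search rather than exact-key matching.

```python
# Hedged sketch: group near-duplicate alerts so only one representative per
# cluster needs manual review. Fields and key are illustrative assumptions.
from collections import defaultdict

def cluster_alerts(alerts):
    """Group alerts sharing a coarse signature (rule id, process, command line)."""
    clusters = defaultdict(list)
    for alert in alerts:
        key = (alert["rule_id"], alert["process"], alert["cmdline"])
        clusters[key].append(alert)
    return clusters

alerts = [
    {"rule_id": "R42", "process": "powershell.exe", "cmdline": "-enc ...", "host": "h1"},
    {"rule_id": "R42", "process": "powershell.exe", "cmdline": "-enc ...", "host": "h2"},
    {"rule_id": "R7",  "process": "mimikatz.exe",   "cmdline": "sekurlsa", "host": "h3"},
]
for key, group in cluster_alerts(alerts).items():
    print(key, "->", len(group), "alerts, review one representative")
```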
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outclasses the state-of-the-art for resilient fault prediction benchmarking, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
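A minimal sketch of online adversarial training of the kind the FaultGuard summary mentions, here using an FGSM-style perturbation on each batch. The architecture, feature dimensions, and perturbation budget are assumptions, not the paper's method.

```python
# Hedged sketch: adversarial training of a small fault-type classifier.
# Model shape, epsilon, and data are illustrative assumptions.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps, loss_fn):
    """Craft an FGSM adversarial copy of a batch of grid measurements."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def train_step(model, optimizer, x, y, eps=0.05):
    loss_fn = nn.CrossEntropyLoss()
    x_adv = fgsm_perturb(model, x, y, eps, loss_fn)  # perturbed copy of the batch
    optimizer.zero_grad()
    # Train on clean and perturbed samples so accuracy holds up under attack.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical low-complexity classifier over 16 sensor readings, 4 fault types.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
x = torch.randn(8, 16)         # dummy batch of measurements
y = torch.randint(0, 4, (8,))  # dummy fault-type labels
print(train_step(model, optimizer, x, y))
```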
- You Cannot Escape Me: Detecting Evasions of SIEM Rules in Enterprise Networks [2.310746340159112]
We present AMIDES, an open-source proof-of-concept adaptive misuse detection system.
We show that AMIDES successfully detects a majority of these evasions without any false alerts.
arXiv Detail & Related papers (2023-11-16T21:05:12Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Sample-Efficient Safety Assurances using Conformal Prediction [57.92013073974406]
Early warning systems can provide alerts when an unsafe situation is imminent.
To reliably improve safety, these warning systems should have a provable false negative rate.
We present a framework that combines a statistical inference technique known as conformal prediction with a simulator of robot/environment dynamics.
arXiv Detail & Related papers (2021-09-28T23:00:30Z)
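A minimal sketch of how split conformal calibration can give a warning threshold with a provable false negative rate, as the summary describes. The scalar risk score and the calibration data are assumptions, and the paper's simulator-in-the-loop details are not reproduced; only the quantile rule is standard conformal prediction.

```python
# Hedged sketch: calibrate an alert threshold so that, on exchangeable future
# unsafe episodes, the warning fires with probability at least 1 - alpha.
import numpy as np

def calibrate_alert_threshold(cal_scores_unsafe, alpha=0.05):
    """Return tau such that P(risk score < tau) <= alpha for a new unsafe episode."""
    s = np.sort(np.asarray(cal_scores_unsafe, dtype=float))
    n = len(s)
    k = int(np.floor(alpha * (n + 1)))  # conformal rank: scores allowed below tau
    if k < 1:
        return -np.inf                  # too few calibration episodes: always warn
    return s[k - 1]

# Usage with dummy risk scores from simulated episodes that ended unsafely.
rng = np.random.default_rng(0)
cal_scores_unsafe = rng.uniform(0.4, 1.0, size=200)
tau = calibrate_alert_threshold(cal_scores_unsafe, alpha=0.05)

def warn(score):
    return score >= tau  # a miss (false negative) occurs only when score < tau

print(tau, warn(0.9))
```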
- SAGE: Intrusion Alert-driven Attack Graph Extractor [4.530678016396476]
Attack graphs (AGs) are used to assess the pathways available to cyber adversaries for penetrating a network.
We propose to automatically learn AGs based on actions observed through intrusion alerts, without prior expert knowledge.
arXiv Detail & Related papers (2021-07-06T17:45:02Z)
- A System for Efficiently Hunting for Cyber Threats in Computer Systems Using Threat Intelligence [78.23170229258162]
We build ThreatRaptor, a system that facilitates cyber threat hunting in computer systems using OSCTI.
ThreatRaptor provides (1) an unsupervised, light-weight, and accurate NLP pipeline that extracts structured threat behaviors from unstructured OSCTI text, (2) a concise and expressive domain-specific query language, TBQL, to hunt for malicious system activities, and (3) a query synthesis mechanism that automatically synthesizes a TBQL query from the extracted threat behaviors.
arXiv Detail & Related papers (2021-01-17T19:44:09Z)
- Enabling Efficient Cyber Threat Hunting With Cyber Threat Intelligence [94.94833077653998]
ThreatRaptor is a system that facilitates threat hunting in computer systems using open-source Cyber Threat Intelligence (OSCTI).
It extracts structured threat behaviors from unstructured OSCTI text and uses a concise and expressive domain-specific query language, TBQL, to hunt for malicious system activities.
Evaluations on a broad set of attack cases demonstrate the accuracy and efficiency of ThreatRaptor in practical threat hunting.
arXiv Detail & Related papers (2020-10-26T14:54:01Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.