Looking Beyond IoCs: Automatically Extracting Attack Patterns from
External CTI
- URL: http://arxiv.org/abs/2211.01753v2
- Date: Tue, 11 Jul 2023 22:46:12 GMT
- Title: Looking Beyond IoCs: Automatically Extracting Attack Patterns from
External CTI
- Authors: Md Tanvirul Alam, Dipkamal Bhusal, Youngja Park and Nidhi Rastogi
- Abstract summary: LADDER is a framework that can extract text-based attack patterns from cyberthreat intelligence reports at scale.
We present several use cases to demonstrate the application of LADDER in real-world scenarios.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Public and commercial organizations extensively share cyberthreat
intelligence (CTI) to prepare systems to defend against existing and emerging
cyberattacks. However, traditional CTI has primarily focused on tracking known
threat indicators such as IP addresses and domain names, which may not provide
long-term value in defending against evolving attacks. To address this
challenge, we propose to use more robust threat intelligence signals called
attack patterns. LADDER is a knowledge extraction framework that can extract
text-based attack patterns from CTI reports at scale. The framework
characterizes attack patterns by capturing the phases of an attack in Android
and enterprise networks and systematically maps them to the MITRE ATT&CK
pattern framework. LADDER can be used by security analysts to determine the
presence of attack vectors related to existing and emerging threats, enabling
them to prepare defenses proactively. We also present several use cases to
demonstrate the application of LADDER in real-world scenarios. Finally, we
provide a new, open-access benchmark malware dataset to train future
cyberthreat intelligence models.
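The abstract describes mapping extracted text-based attack patterns to the MITRE ATT&CK framework. As a purely illustrative sketch (not LADDER's actual pipeline, which uses learned extraction models), a toy keyword-overlap mapper conveys the idea; the technique catalog and scoring below are hypothetical:

```python
# Toy sketch: rank MITRE ATT&CK techniques by keyword overlap with an
# extracted attack-pattern sentence. Hypothetical mini-catalog; LADDER's
# real mapping is learned, not keyword-based.
from collections import Counter

ATTACK_TECHNIQUES = {
    "T1566 Phishing": {"phishing", "email", "attachment", "lure"},
    "T1078 Valid Accounts": {"credentials", "login", "account", "password"},
    "T1486 Data Encrypted for Impact": {"encrypt", "ransom", "ransomware"},
}

def map_to_attack(pattern_text: str) -> list[tuple[str, int]]:
    """Rank techniques by keyword overlap with the extracted pattern text."""
    tokens = set(pattern_text.lower().split())
    scores = Counter()
    for technique, keywords in ATTACK_TECHNIQUES.items():
        overlap = len(tokens & keywords)
        if overlap:
            scores[technique] = overlap
    return scores.most_common()

# Usage: a sentence extracted from a CTI report.
ranked = map_to_attack("the malware sends a phishing email with a malicious attachment")
```

A production mapper would use semantic similarity over the full ATT&CK knowledge base rather than literal token overlap, but the input/output contract (pattern text in, ranked technique IDs out) is the same.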
Related papers
- Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z)
- Cyber Knowledge Completion Using Large Language Models [1.4883782513177093]
Integrating the Internet of Things (IoT) into Cyber-Physical Systems (CPSs) has expanded their cyber-attack surface.
Assessing the risks of CPSs is increasingly difficult due to incomplete and outdated cybersecurity knowledge.
Recent advancements in Large Language Models (LLMs) present a unique opportunity to enhance cyber-attack knowledge completion.
arXiv Detail & Related papers (2024-09-24T15:20:39Z)
- Using Retriever Augmented Large Language Models for Attack Graph Generation [0.7619404259039284]
This paper explores the approach of leveraging large language models (LLMs) to automate the generation of attack graphs.
It shows how to utilize Common Vulnerabilities and Exposures (CVEs) to create attack graphs from threat reports.
arXiv Detail & Related papers (2024-08-11T19:59:08Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- A Dual-Tier Adaptive One-Class Classification IDS for Emerging Cyberthreats [3.560574387648533]
We propose a one-class classification-driven IDS system structured on two tiers.
The first tier distinguishes between normal activities and attacks/threats, while the second tier determines if the detected attack is known or unknown.
This model not only identifies unseen attacks but also clusters them and uses the resulting clusters for retraining.
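The two-tier decision flow described above can be sketched in miniature. This is an illustrative toy only: a distance-to-centroid rule stands in for whatever one-class classifier the paper actually uses, and all data and thresholds are made up:

```python
# Toy two-tier IDS sketch: tier 1 separates normal traffic from attacks
# with a one-class decision around the normal centroid; tier 2 decides
# whether a flagged attack matches a known type or is "unknown" (a
# candidate for clustering and retraining). Hypothetical data/thresholds.
import math

def centroid(points):
    return [sum(c) / len(points) for c in zip(*points)]

NORMAL = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]        # training: normal traffic
KNOWN_ATTACKS = {"scan": [(5.0, 5.0), (5.1, 4.9)]}   # training: labeled attacks

NORMAL_CENTROID = centroid(NORMAL)
ATTACK_CENTROIDS = {name: centroid(pts) for name, pts in KNOWN_ATTACKS.items()}

def classify(sample, normal_thresh=1.0, known_thresh=1.0):
    # Tier 1: normal vs. attack (one-class decision around normal traffic).
    if math.dist(sample, NORMAL_CENTROID) <= normal_thresh:
        return ("normal", None)
    # Tier 2: known attack type vs. unknown.
    name, d = min(((n, math.dist(sample, c)) for n, c in ATTACK_CENTROIDS.items()),
                  key=lambda t: t[1])
    return ("attack", name if d <= known_thresh else "unknown")
```

Samples labeled `("attack", "unknown")` are exactly the ones the paper's approach would cluster and feed back into training.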
arXiv Detail & Related papers (2024-03-17T12:26:30Z)
- Mining Temporal Attack Patterns from Cyberthreat Intelligence Reports [9.589390721223147]
Defending against cyberattacks requires practitioners to reason about high-level adversary behavior.
We propose ChronoCTI, an automated pipeline for mining temporal attack patterns from cyberthreat intelligence (CTI) reports.
arXiv Detail & Related papers (2024-01-03T18:53:22Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals the threats in this practical scenario that backdoor attacks can remain effective even after defenses.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Towards Automated Classification of Attackers' TTPs by combining NLP with ML Techniques [77.34726150561087]
We evaluate and compare different Natural Language Processing (NLP) and machine learning techniques used for security information extraction in research.
Based on our investigations we propose a data processing pipeline that automatically classifies unstructured text according to attackers' tactics and techniques.
arXiv Detail & Related papers (2022-07-18T09:59:21Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.