Chasing the Shadows: TTPs in Action to Attribute Advanced Persistent Threats
- URL: http://arxiv.org/abs/2409.16400v1
- Date: Tue, 24 Sep 2024 18:59:27 GMT
- Title: Chasing the Shadows: TTPs in Action to Attribute Advanced Persistent Threats
- Authors: Nanda Rani, Bikash Saha, Vikas Maurya, Sandeep Kumar Shukla
- Abstract summary: This research aims to assist the threat analyst in the attribution process by presenting an attribution method named CAPTAIN.
The proposed approach outperforms traditional similarity measures like Cosine, Euclidean, and Longest Common Subsequence.
Overall, CAPTAIN performs attribution with a precision of 61.36% (top-1) and 69.98% (top-2), surpassing existing state-of-the-art attribution methods.
- Score: 3.2183320563774833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The current state of Advanced Persistent Threat (APT) attribution relies primarily on time-consuming manual processes. These include mapping incident artifacts onto threat attribution frameworks and employing expert reasoning to uncover the most likely responsible APT groups. This research aims to assist the threat analyst in the attribution process by presenting an attribution method named CAPTAIN (Comprehensive Advanced Persistent Threat AttrIbutioN). This novel APT attribution approach leverages the Tactics, Techniques, and Procedures (TTPs) employed by various APT groups in past attacks. CAPTAIN follows two significant development steps: baseline establishment and similarity measurement for attack-pattern matching. The method starts by maintaining a TTP database of APTs seen in past attacks as the baseline behaviour of threat groups. The attribution process leverages the contextual information added by TTP sequences, which reflect the order of behaviours threat actors demonstrated across different kill-chain stages during an attack. It then compares the provided TTPs with the established baseline to identify the most closely matching threat group. CAPTAIN introduces a novel similarity measure for APT group attack-pattern matching that calculates the similarity between TTP sequences. The proposed approach outperforms traditional similarity measures like Cosine, Euclidean, and Longest Common Subsequence (LCS) in performing attribution. Overall, CAPTAIN performs attribution with a precision of 61.36% (top-1) and 69.98% (top-2), surpassing existing state-of-the-art attribution methods.
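CAPTAIN's own similarity measure is not reproduced in this summary, but the sequence-matching idea it builds on can be sketched with one of the baselines it is benchmarked against: normalized Longest Common Subsequence (LCS) similarity between an incident's ordered TTP list and each group's stored baseline. The group names and TTP sequences below are illustrative, not taken from the paper's dataset.

```python
def lcs_length(a, b):
    # classic dynamic-programming longest common subsequence length
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n]

def lcs_similarity(seq, baseline):
    # normalized LCS: 1.0 when the sequences are identical
    if not seq or not baseline:
        return 0.0
    return lcs_length(seq, baseline) / max(len(seq), len(baseline))

def attribute(incident_ttps, baselines, top_k=2):
    # rank known groups by similarity to the observed TTP sequence
    scored = [(group, lcs_similarity(incident_ttps, seq))
              for group, seq in baselines.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Hypothetical baseline database; TTP IDs follow MITRE ATT&CK style.
baselines = {
    "APT-A": ["T1566", "T1059", "T1055", "T1071"],
    "APT-B": ["T1190", "T1505", "T1003", "T1041"],
}
print(attribute(["T1566", "T1059", "T1071"], baselines))
# → [('APT-A', 0.75), ('APT-B', 0.0)]
```

Unlike Cosine or Euclidean over unordered TTP sets, LCS respects the order of behaviours across kill-chain stages, which is the contextual signal the paper argues matters for attribution.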
Related papers
- On Technique Identification and Threat-Actor Attribution using LLMs and Embedding Models [37.81839740673437]
This research evaluates large language models (LLMs) for cyber-attack attribution based on behavioral indicators extracted from forensic documentation.
The framework identifies TTPs from text using vector-embedding search and builds threat-actor profiles from which a machine learning model learns to attribute new attacks.
arXiv Detail & Related papers (2025-05-15T04:14:29Z)
- R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning [97.49610356913874]
We propose robust test-time prompt tuning (R-TPT) for vision-language models (VLMs).
R-TPT mitigates the impact of adversarial attacks during the inference stage.
We introduce a plug-and-play reliability-based weighted ensembling strategy to strengthen the defense.
arXiv Detail & Related papers (2025-04-15T13:49:31Z)
- Detecting APT Malware Command and Control over HTTP(S) Using Contextual Summaries [1.0787328610467801]
We present EarlyCrow, an approach to detect APT malware command and control over HTTP(S) using contextual summaries.
The design of EarlyCrow is informed by a novel threat model focused on TTPs present in traffic generated by tools recently used in APT campaigns.
EarlyCrow defines a novel multipurpose network flow format called PairFlow, which is leveraged to build the contextual summary of a PCAP capture.
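The PairFlow format itself is not specified in this summary; as a rough illustration of the idea of building contextual summaries from a capture, here is a minimal sketch that aggregates per-packet records into per-host-pair flow summaries. The field names and record layout are hypothetical; EarlyCrow's actual format captures far richer TTP-relevant context (timing, headers, URL structure, and so on).

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FlowSummary:
    # hypothetical summary fields, stand-ins for PairFlow's real schema
    packets: int = 0
    bytes: int = 0

def summarize(pcap_records):
    # pcap_records: iterable of (src, dst, proto, size) tuples,
    # e.g. parsed out of a PCAP capture by another tool
    flows = defaultdict(FlowSummary)
    for src, dst, proto, size in pcap_records:
        flow = flows[(src, dst, proto)]
        flow.packets += 1
        flow.bytes += size
    return dict(flows)

records = [
    ("10.0.0.1", "1.2.3.4", "https", 500),
    ("10.0.0.1", "1.2.3.4", "https", 300),
    ("10.0.0.1", "5.6.7.8", "dns", 60),
]
print(summarize(records)[("10.0.0.1", "1.2.3.4", "https")])
# → FlowSummary(packets=2, bytes=800)
```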
arXiv Detail & Related papers (2025-02-07T22:38:39Z)
- TAPT: Test-Time Adversarial Prompt Tuning for Robust Inference in Vision-Language Models [53.91006249339802]
We propose a novel defense method called Test-Time Adversarial Prompt Tuning (TAPT) to enhance the inference robustness of CLIP against visual adversarial attacks.
TAPT is a test-time defense method that learns defensive bimodal (textual and visual) prompts to robustify the inference process of CLIP.
We evaluate the effectiveness of TAPT on 11 benchmark datasets, including ImageNet and 10 other zero-shot datasets.
arXiv Detail & Related papers (2024-11-20T08:58:59Z)
- A Cascade Approach for APT Campaign Attribution in System Event Logs: Technique Hunting and Subgraph Matching [1.0928166738710612]
This study addresses the challenge of identifying APT campaign attacks through system event logs.
A cascading approach, named SFM, combines Technique hunting and APT campaign attribution.
arXiv Detail & Related papers (2024-10-29T23:49:28Z)
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation [49.480978190805125]
Transfer attacks generate significant interest for black-box applications.
Existing works essentially optimize a single-level objective directly w.r.t. the surrogate model.
We propose a bilevel optimization paradigm that explicitly reformulates the nested relationship between the Upper-Level (UL) pseudo-victim attacker and the Lower-Level (LL) surrogate attacker.
arXiv Detail & Related papers (2024-06-04T07:45:27Z)
- The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain high adversarial robustness to protect against potential adversarial attacks.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
arXiv Detail & Related papers (2024-05-14T18:05:19Z)
- TREC: APT Tactic / Technique Recognition via Few-Shot Provenance Subgraph Learning [31.959092032106472]
We propose TREC, the first attempt to recognize APT tactics from provenance graphs by exploiting deep learning techniques.
To address the "needle in a haystack" problem, TREC segments small and compact subgraphs from a large provenance graph.
We evaluate TREC based on a customized dataset collected and made public by our team.
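TREC's segmentation procedure is not detailed in this summary; as a generic sketch of the underlying operation, here is how a small k-hop neighborhood around a seed node (e.g., an alert-triggering process) can be extracted from a large provenance graph. The graph, node labels, and choice of following only outgoing edges are illustrative assumptions, not the paper's method.

```python
from collections import deque

def k_hop_subgraph(adj, seed, k):
    # breadth-first search out to k hops from a seed node,
    # following outgoing edges only
    # adj: {node: set of successor nodes}
    depth = {seed: 0}
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        if depth[u] == k:
            continue  # frontier reached; do not expand further
        for v in adj.get(u, ()):
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    nodes = set(depth)
    # keep only edges whose endpoints both fall inside the neighborhood
    edges = {(u, v) for u in nodes for v in adj.get(u, ()) if v in nodes}
    return nodes, edges

# toy provenance graph of process/file/network entities
adj = {
    "proc:init": {"proc:bash"},
    "proc:bash": {"file:/etc/passwd", "proc:curl"},
    "proc:curl": {"net:1.2.3.4"},
}
nodes, edges = k_hop_subgraph(adj, "proc:bash", 1)
print(sorted(nodes))
# → ['file:/etc/passwd', 'proc:bash', 'proc:curl']
```

Each such compact subgraph, rather than the full provenance graph, then becomes the unit fed to a classifier, which is the "needle in a haystack" mitigation the summary describes.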
arXiv Detail & Related papers (2024-02-23T07:05:32Z)
- PRAT: PRofiling Adversarial aTtacks [52.693011665938734]
We introduce a novel problem of PRofiling Adversarial aTtacks (PRAT).
Given an adversarial example, the objective of PRAT is to identify the attack used to generate it.
We use AID to devise a novel framework for the PRAT objective.
arXiv Detail & Related papers (2023-09-20T07:42:51Z)
- CVE-driven Attack Technique Prediction with Semantic Information Extraction and a Domain-specific Language Model [2.1756081703276]
The paper introduces the TTPpredictor tool, which uses innovative techniques to analyze CVE descriptions and infer plausible TTP attacks resulting from CVE exploitation.
TTPpredictor overcomes challenges posed by limited labeled data and semantic disparities between CVE and TTP descriptions.
The paper presents an empirical assessment, demonstrating TTPpredictor's effectiveness with accuracy rates of approximately 98% and F1-scores ranging from 95% to 98% in precise CVE classification to ATT&CK techniques.
arXiv Detail & Related papers (2023-09-06T06:53:45Z)
- From Threat Reports to Continuous Threat Intelligence: A Comparison of Attack Technique Extraction Methods from Textual Artifacts [11.396560798899412]
Threat reports contain detailed descriptions of attack Tactics, Techniques, and Procedures (TTP) written in an unstructured text format.
TTP extraction methods are proposed in the literature, but not all of these methods are compared to one another or to a baseline.
In this work, we identify ten existing TTP extraction studies from the literature and implement five methods from the ten studies.
We find two methods, based on Term Frequency-Inverse Document Frequency (TF-IDF) and Latent Semantic Indexing (LSI), outperform the other three methods with F1 scores of 84% and 83%, respectively.
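The compared extraction pipelines are not reproduced here, but the core ranking step of a TF-IDF approach can be sketched with a self-contained TF-IDF plus cosine-similarity comparison between a report sentence and technique descriptions. The tokenization, the smoothed-idf variant, and the toy descriptions below are illustrative assumptions, not the surveyed methods' exact implementations.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # docs: list of token lists; uses a smoothed idf variant so that
    # terms present in every document keep a nonzero weight
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    return [{t: (c / len(doc)) * (math.log((1 + n) / (1 + df[t])) + 1)
             for t, c in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    # cosine similarity between two sparse term-weight dicts
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_techniques(report_tokens, techniques):
    # techniques: {technique_id: description tokens}; best match first
    vecs = tfidf_vectors([report_tokens] + list(techniques.values()))
    scores = dict(zip(techniques, (cosine(vecs[0], v) for v in vecs[1:])))
    return sorted(scores, key=scores.get, reverse=True)

techniques = {  # toy descriptions, not real ATT&CK text
    "T1566.001": "spearphishing attachment malicious email file".split(),
    "T1059.001": "powershell command scripting interpreter execution".split(),
}
report = "malicious email attachment delivered via spearphishing".split()
print(rank_techniques(report, techniques))
# → ['T1566.001', 'T1059.001']
```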
arXiv Detail & Related papers (2022-10-05T23:21:41Z)
- CARBEN: Composite Adversarial Robustness Benchmark [70.05004034081377]
This paper demonstrates how composite adversarial attacks (CAA) affect the resulting image.
It provides real-time inferences of different models, which will facilitate users' configuration of the parameters of the attack level.
A leaderboard to benchmark adversarial robustness against CAA is also introduced.
arXiv Detail & Related papers (2022-07-16T01:08:44Z)
- Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests [73.32304304788838]
This paper systematically uncovers the failure mode of non-parametric TSTs through adversarial attacks.
To enable TST-agnostic attacks, we propose an ensemble attack framework that jointly minimizes the different types of test criteria.
To robustify TSTs, we propose a max-min optimization that iteratively generates adversarial pairs to train the deep kernels.
arXiv Detail & Related papers (2022-02-07T11:18:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.