Detecting APT Malware Command and Control over HTTP(S) Using Contextual Summaries
- URL: http://arxiv.org/abs/2502.05367v1
- Date: Fri, 07 Feb 2025 22:38:39 GMT
- Title: Detecting APT Malware Command and Control over HTTP(S) Using Contextual Summaries
- Authors: Almuthanna Alageel, Sergio Maffeis (Imperial College London)
- Abstract summary: We present EarlyCrow, an approach to detect APT malware command and control over HTTP(S) using contextual summaries. The design of EarlyCrow is informed by a novel threat model focused on TTPs present in traffic generated by tools recently used in APT campaigns. EarlyCrow defines a novel multipurpose network flow format called PairFlow, which is leveraged to build the contextual summary of a PCAP capture.
- Score: 1.0787328610467801
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advanced Persistent Threats (APTs) are among the most sophisticated threats facing critical organizations worldwide. APTs employ specific tactics, techniques, and procedures (TTPs) which make them difficult to detect in comparison to frequent and aggressive attacks. In fact, current network intrusion detection systems struggle to detect APT communications, allowing such threats to persist unnoticed on victims' machines for months or even years. In this paper, we present EarlyCrow, an approach to detect APT malware command and control over HTTP(S) using contextual summaries. The design of EarlyCrow is informed by a novel threat model focused on TTPs present in traffic generated by tools recently used as part of APT campaigns. The threat model highlights the importance of the context around the malicious connections, and suggests traffic attributes which help APT detection. EarlyCrow defines a novel multipurpose network flow format called PairFlow, which is leveraged to build the contextual summary of a PCAP capture, representing key behavioral, statistical and protocol information relevant to APT TTPs. We evaluate the effectiveness of EarlyCrow on unseen APTs, obtaining a headline macro average F1-score of 93.02% with an FPR of 0.74%.
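The PairFlow format itself is not specified in this abstract; as a hedged illustration only, a contextual summary of per-pair flow records might be aggregated along these lines (the `FlowRecord` fields and summary keys are hypothetical, not the paper's actual schema):

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

@dataclass
class FlowRecord:
    # Hypothetical flow fields; the paper's PairFlow schema is not given here.
    src: str
    dst: str
    dst_port: int
    bytes_out: int
    bytes_in: int
    duration_s: float

def contextual_summary(flows):
    """Group flows by (src, dst) pair and summarize behavioral and
    statistical attributes, loosely in the spirit of a per-pair summary."""
    pairs = defaultdict(list)
    for f in flows:
        pairs[(f.src, f.dst)].append(f)
    summary = {}
    for pair, fs in pairs.items():
        summary[pair] = {
            "connections": len(fs),
            "ports": sorted({f.dst_port for f in fs}),
            "avg_bytes_out": mean(f.bytes_out for f in fs),
            "avg_bytes_in": mean(f.bytes_in for f in fs),
            # Many small, regular outbound connections to one destination
            # can hint at beaconing-style C2 traffic.
            "upload_ratio": sum(f.bytes_out for f in fs)
                            / max(1, sum(f.bytes_in for f in fs)),
        }
    return summary

flows = [
    FlowRecord("10.0.0.5", "203.0.113.9", 443, 512, 256, 1.2),
    FlowRecord("10.0.0.5", "203.0.113.9", 443, 530, 240, 1.1),
]
print(contextual_summary(flows)[("10.0.0.5", "203.0.113.9")]["connections"])  # 2
```

A classifier would then consume such per-pair summaries rather than isolated flows, which is what gives the detector context around each connection.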
Related papers
- A Lightweight IDS for Early APT Detection Using a Novel Feature Selection Method [0.0]
An Advanced Persistent Threat (APT) is a multistage, highly sophisticated, and covert form of cyber threat. We propose a feature selection method for developing a lightweight intrusion detection system.
arXiv Detail & Related papers (2025-06-13T09:07:56Z) - R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning [97.49610356913874]
We propose a robust test-time prompt tuning (R-TPT) for vision-language models (VLMs)
R-TPT mitigates the impact of adversarial attacks during the inference stage.
We introduce a plug-and-play reliability-based weighted ensembling strategy to strengthen the defense.
arXiv Detail & Related papers (2025-04-15T13:49:31Z) - Investigation of Advanced Persistent Threats Network-based Tactics, Techniques and Procedures [1.0787328610467801]
This study examines the evolution of Advanced Persistent Threats (APTs) over 22 years.
We focus on their evasion techniques and how they stay undetected for months or years.
HTTP(S) is the most popular protocol for deploying evasion techniques, used in 81% of APT campaigns.
arXiv Detail & Related papers (2025-02-12T22:38:50Z) - TAPT: Test-Time Adversarial Prompt Tuning for Robust Inference in Vision-Language Models [53.91006249339802]
We propose a novel defense method called Test-Time Adversarial Prompt Tuning (TAPT) to enhance the inference robustness of CLIP against visual adversarial attacks.
TAPT is a test-time defense method that learns defensive bimodal (textual and visual) prompts to robustify the inference process of CLIP.
We evaluate the effectiveness of TAPT on 11 benchmark datasets, including ImageNet and 10 other zero-shot datasets.
arXiv Detail & Related papers (2024-11-20T08:58:59Z) - Chasing the Shadows: TTPs in Action to Attribute Advanced Persistent Threats [3.2183320563774833]
This research aims to assist the threat analyst in the attribution process by presenting an attribution method named CAPTAIN.
The proposed approach outperforms traditional similarity measures like Cosine, Euclidean, and Longest Common Subsequence.
Overall, CAPTAIN performs attribution with a precision of 61.36% (top-1) and 69.98% (top-2), surpassing existing state-of-the-art attribution methods.
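The baseline similarity measures named above can be made concrete; as a rough sketch (not CAPTAIN's actual algorithm), a Longest Common Subsequence similarity over ordered TTP sequences looks like this, with the campaign technique IDs below being hypothetical examples:

```python
def lcs_length(a, b):
    """Classic dynamic-programming Longest Common Subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def lcs_similarity(a, b):
    """Normalize LCS length by the longer sequence, giving a score in [0, 1]."""
    if not a or not b:
        return 0.0
    return lcs_length(a, b) / max(len(a), len(b))

# MITRE ATT&CK technique IDs observed in two hypothetical campaigns.
campaign_a = ["T1566", "T1059", "T1071", "T1041"]
campaign_b = ["T1566", "T1204", "T1071", "T1041"]
print(lcs_similarity(campaign_a, campaign_b))  # 0.75
```

Unlike cosine or Euclidean distance over bag-of-techniques vectors, LCS preserves the order in which techniques were observed, which is often part of what distinguishes one actor's playbook from another's.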
arXiv Detail & Related papers (2024-09-24T18:59:27Z) - AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z) - A Federated Learning Approach for Multi-stage Threat Analysis in Advanced Persistent Threat Campaigns [25.97800399318373]
Multi-stage threats like advanced persistent threats (APT) pose severe risks by stealing data and destroying infrastructure.
APTs use novel attack vectors and evade signature-based detection by obfuscating their network presence.
This paper proposes a novel 3-phase unsupervised federated learning (FL) framework to detect APTs.
arXiv Detail & Related papers (2024-06-19T03:34:41Z) - TREC: APT Tactic / Technique Recognition via Few-Shot Provenance Subgraph Learning [31.959092032106472]
We propose TREC, the first attempt to recognize APT tactics from provenance graphs by exploiting deep learning techniques.
To address the "needle in a haystack" problem, TREC segments small and compact subgraphs from a large provenance graph.
We evaluate TREC based on a customized dataset collected and made public by our team.
arXiv Detail & Related papers (2024-02-23T07:05:32Z) - Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information [67.78183175605761]
Large Language Models are susceptible to adversarial prompt attacks.
This vulnerability underscores a significant concern regarding the robustness and reliability of LLMs.
We introduce a novel approach to detecting adversarial prompts at a token level.
arXiv Detail & Related papers (2023-11-20T03:17:21Z) - CVE-driven Attack Technique Prediction with Semantic Information Extraction and a Domain-specific Language Model [2.1756081703276]
The paper introduces the TTPpredictor tool, which uses innovative techniques to analyze CVE descriptions and infer plausible TTP attacks resulting from CVE exploitation.
TTPpredictor overcomes challenges posed by limited labeled data and semantic disparities between CVE and TTP descriptions.
The paper presents an empirical assessment demonstrating TTPpredictor's effectiveness, with accuracy of approximately 98% and F1-scores ranging from 95% to 98% in classifying CVEs to ATT&CK techniques.
arXiv Detail & Related papers (2023-09-06T06:53:45Z) - Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models [107.05966685291067]
We propose test-time prompt tuning (TPT) to learn adaptive prompts on the fly with a single test sample.
TPT improves the zero-shot top-1 accuracy of CLIP by 3.6% on average.
In evaluating cross-dataset generalization with unseen categories, TPT performs on par with the state-of-the-art approaches that use additional training data.
arXiv Detail & Related papers (2022-09-15T17:55:11Z) - Towards Automated Classification of Attackers' TTPs by combining NLP with ML Techniques [77.34726150561087]
We evaluate and compare different Natural Language Processing (NLP) and machine learning techniques used for security information extraction in research.
Based on our investigations we propose a data processing pipeline that automatically classifies unstructured text according to attackers' tactics and techniques.
arXiv Detail & Related papers (2022-07-18T09:59:21Z) - Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
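The DOS-header slack these attacks exploit is easy to see; as an illustration (not the paper's attack code), the only DOS-header bytes a PE loader actually checks are the `MZ` magic and the `e_lfanew` offset, leaving the bytes in between free to be modified:

```python
import struct

def parse_dos_header(data: bytes):
    """Read the two DOS-header fields the Windows PE loader actually uses:
    the 'MZ' magic at offset 0 and the e_lfanew offset at 0x3C, which
    points to the PE signature. The bytes in between are largely ignored,
    which is the slack space a 'Full DOS'-style attack can overwrite."""
    if len(data) < 0x40 or data[:2] != b"MZ":
        raise ValueError("not a DOS/PE header")
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    return {"magic": data[:2], "e_lfanew": e_lfanew}

# A minimal synthetic header: 'MZ', 58 bytes of ignored slack, then e_lfanew.
header = b"MZ" + b"\x00" * 0x3A + struct.pack("<I", 0x80)
print(parse_dos_header(header))  # {'magic': b'MZ', 'e_lfanew': 128}
```

Because parsing skips straight from offset 0 to 0x3C, an attacker can fill the intervening slack with an adversarial payload without changing how the executable loads, which is what makes such perturbations cheap against byte-level malware classifiers.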
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.