TPPR: APT Tactic / Technique Pattern Guided Attack Path Reasoning for Attack Investigation
- URL: http://arxiv.org/abs/2510.22191v1
- Date: Sat, 25 Oct 2025 07:13:07 GMT
- Authors: Qi Sheng,
- Abstract summary: We propose TPPR, a novel framework that first extracts anomaly subgraphs through abnormal node detection, TTP annotation, and graph pruning. TPPR achieves 99.9% graph simplification (700,000 edges down to 20) while preserving 91% of critical attack nodes, outperforming state-of-the-art solutions (SPARSE, DepImpact) by 63.1% and 67.9% in reconstruction precision while maintaining attack scenario integrity.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Provenance analysis based on system audit data has emerged as a fundamental approach for investigating Advanced Persistent Threat (APT) attacks. Because APT attacks are highly concealed and persist over long periods, they appear only as a minimal portion of the critical paths in the provenance graph. Existing techniques employ behavioral pattern matching and data-flow feature matching to uncover latent associations in attack sequences through provenance-graph path reasoning, but their inability to establish effective attack-context associations often conflates benign system operations with real attack entities and fails to accurately characterize real APT behaviors. We observe that although the causality of entities in the provenance graph is highly complex, attackers often follow specific attack patterns, namely clear combinations of tactics and techniques used to achieve their goals. Based on these insights, we propose TPPR, a novel framework that first extracts anomaly subgraphs through abnormal node detection, TTP annotation, and graph pruning, then performs attack path reasoning using mined TTP sequential patterns, and finally reconstructs attack scenarios through confidence-based path scoring and merging. Extensive evaluation on real enterprise logs (more than 100 million events) and the DARPA TC dataset demonstrates TPPR's capability to achieve 99.9% graph simplification (700,000 edges down to 20) while preserving 91% of critical attack nodes, outperforming state-of-the-art solutions (SPARSE, DepImpact) by 63.1% and 67.9% in reconstruction precision while maintaining attack scenario integrity.
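The three-stage pipeline the abstract describes (prune the provenance graph to anomalous edges, reason over candidate paths guided by a mined TTP sequence, then score paths by confidence) can be sketched in miniature as follows. This is a hypothetical toy illustration, not the authors' implementation: the graph, anomaly scores, TTP labels, and the scoring formula are all invented for the example.

```python
# Toy TPPR-style sketch: prune a provenance graph by anomaly score, then
# rank candidate paths by how well their TTP annotations cover a mined
# tactic/technique sequence. All data and formulas here are illustrative.

# Provenance graph: edge (src, dst) -> (anomaly score, TTP label or None).
EDGES = {
    ("bash", "curl"):    (0.9, "T1105"),  # Ingress Tool Transfer
    ("curl", "payload"): (0.8, "T1204"),  # User Execution
    ("payload", "sshd"): (0.7, "T1021"),  # Remote Services
    ("bash", "ls"):      (0.1, None),     # benign system noise
    ("ls", "tmpfile"):   (0.05, None),
}

def prune(edges, threshold=0.5):
    """Keep only edges whose anomaly score meets the threshold."""
    return {e: meta for e, meta in edges.items() if meta[0] >= threshold}

def paths(edges, src, dst, path=None):
    """Enumerate simple paths from src to dst over the pruned edge set."""
    path = path or [src]
    if src == dst:
        yield path
        return
    for (u, v) in edges:
        if u == src and v not in path:
            yield from paths(edges, v, dst, path + [v])

def path_confidence(edges, path, pattern):
    """Score a path: fraction of the mined TTP pattern it covers,
    weighted by the mean anomaly score of its edges."""
    labels = [edges[(u, v)][1] for u, v in zip(path, path[1:])]
    coverage = sum(1 for t in pattern if t in labels) / len(pattern)
    mean_anomaly = sum(edges[(u, v)][0]
                       for u, v in zip(path, path[1:])) / (len(path) - 1)
    return coverage * mean_anomaly

pruned = prune(EDGES)
pattern = ["T1105", "T1204", "T1021"]  # mined TTP sequential pattern
best = max(paths(pruned, "bash", "sshd"),
           key=lambda p: path_confidence(pruned, p, pattern))
print(best)  # candidate attack path surviving pruning + pattern scoring
```

The real system operates on provenance graphs with hundreds of thousands of edges and merges multiple high-confidence paths into an attack scenario; the sketch only shows the prune/reason/score skeleton.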
Related papers
- The Eminence in Shadow: Exploiting Feature Boundary Ambiguity for Robust Backdoor Attacks
Deep neural networks (DNNs) underpin critical applications yet remain vulnerable to backdoor attacks. We provide a theoretical analysis targeting backdoor attacks, focusing on how sparse decision boundaries enable disproportionate model manipulation. We propose Eminence, an explainable and robust black-box backdoor framework with provable theoretical guarantees and inherent stealth properties.
arXiv Detail & Related papers (2025-12-11T08:09:07Z) - An Automated Attack Investigation Approach Leveraging Threat-Knowledge-Augmented Large Language Models [17.220143037047627]
Advanced Persistent Threats (APTs) compromise high-value systems to steal data or disrupt operations. Existing methods suffer from poor platform generality, limited generalization to evolving tactics, and an inability to produce analyst-ready reports. We present an LLM-empowered attack investigation framework augmented with a dynamically adaptable Kill-Chain-aligned threat knowledge base.
arXiv Detail & Related papers (2025-09-01T08:57:01Z) - R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning [69.72249695674665]
We propose robust test-time prompt tuning (R-TPT) for vision-language models (VLMs). R-TPT mitigates the impact of adversarial attacks during the inference stage. We introduce a plug-and-play reliability-based weighted ensembling strategy to strengthen the defense.
arXiv Detail & Related papers (2025-04-15T13:49:31Z) - TFLAG: Towards Practical APT Detection via Deviation-Aware Learning on Temporal Provenance Graph
Advanced Persistent Threats (APTs) have grown increasingly complex and concealed. Recent studies have incorporated graph learning techniques to extract detailed information from provenance graphs. We introduce TFLAG, an advanced anomaly detection framework.
arXiv Detail & Related papers (2025-01-13T01:08:06Z) - Slot: Provenance-Driven APT Detection through Graph Reinforcement Learning [24.84110719035862]
Advanced Persistent Threats (APTs) represent sophisticated cyberattacks characterized by their ability to remain undetected for extended periods. We propose Slot, an advanced APT detection approach based on provenance graphs and graph reinforcement learning. We show Slot's outstanding accuracy, efficiency, adaptability, and robustness in APT detection, with most metrics surpassing state-of-the-art methods.
arXiv Detail & Related papers (2024-10-23T14:28:32Z) - SPARSE: Semantic Tracking and Path Analysis for Attack Investigation in Real-time [7.477027371128296]
We propose SPARSE, an efficient system for constructing critical component graphs (i.e., consisting of critical events) from streaming logs.
Our evaluation on a real large-scale attack dataset shows that SPARSE can generate a critical component graph (~113 edges) in 1.6 seconds.
SPARSE is 25x more effective than other state-of-the-art techniques in filtering irrelevant edges.
arXiv Detail & Related papers (2024-05-04T10:19:28Z) - NODLINK: An Online System for Fine-Grained APT Attack Detection and Investigation [15.803901489811318]
NodLink is the first online detection system that maintains high detection accuracy without sacrificing detection granularity.
We propose a novel design of in-memory cache, an efficient attack screening method, and a new approximation algorithm that is more efficient than the conventional one in APT attack detection.
arXiv Detail & Related papers (2023-11-04T05:36:59Z) - Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z) - On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z) - Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests [73.32304304788838]
This paper systematically uncovers the failure mode of non-parametric TSTs through adversarial attacks.
To enable TST-agnostic attacks, we propose an ensemble attack framework that jointly minimizes the different types of test criteria.
To robustify TSTs, we propose a max-min optimization that iteratively generates adversarial pairs to train the deep kernels.
arXiv Detail & Related papers (2022-02-07T11:18:04Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
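Schematically, such a certificate can be stated by analogy with Gaussian randomized-smoothing bounds from classification; this is a hedged sketch of that flavor of result, not necessarily the paper's exact statement. Assume the policy's inputs are smoothed with noise $\mathcal{N}(0, \sigma^2 I)$ and the adversary's total $\ell_2$ perturbation budget is $B$. If $F$ is the CDF of the total reward of the smoothed policy on clean inputs and $\tilde{F}$ its CDF under any such adversary, then

```latex
\tilde{F}(r) \;\le\; \Phi\!\left(\Phi^{-1}\!\big(F(r)\big) + \frac{B}{\sigma}\right),
```

where $\Phi$ is the standard normal CDF. Upper-bounding the attacked CDF in this way lower-bounds every quantile of the reward under attack, which yields the threshold guarantee described above.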
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.