APT-MCL: An Adaptive APT Detection System Based on Multi-View Collaborative Provenance Graph Learning
- URL: http://arxiv.org/abs/2601.08328v1
- Date: Tue, 13 Jan 2026 08:30:43 GMT
- Title: APT-MCL: An Adaptive APT Detection System Based on Multi-View Collaborative Provenance Graph Learning
- Authors: Mingqi Lv, Shanshan Zhang, Haiwen Liu, Tieming Chen, Tiantian Zhu,
- Abstract summary: Advanced persistent threats (APTs) are stealthy and multi-stage, making single-point defenses ill-suited to capture long-range and cross-entity attack semantics. This paper proposes APT-MCL, an intelligent APT detection system based on Multi-view Collaborative graph Learning.
- Score: 14.65353464010361
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advanced persistent threats (APTs) are stealthy and multi-stage, making single-point defenses (e.g., malware- or traffic-based detectors) ill-suited to capture long-range and cross-entity attack semantics. Provenance-graph analysis has become a prominent approach for APT detection. However, its practical deployment is hampered by (i) the scarcity of APT samples, (ii) the cost and difficulty of fine-grained APT sample labeling, and (iii) the diversity of attack tactics and techniques. Aiming at these problems, this paper proposes APT-MCL, an intelligent APT detection system based on Multi-view Collaborative provenance graph Learning. It adopts an unsupervised learning strategy to discover APT attacks at the node level via anomaly detection. After that, it creates multiple anomaly detection sub-models based on multi-view features and integrates them within a collaborative learning framework to adapt to diverse attack scenarios. Extensive experiments on three real-world APT datasets validate the approach: (i) multi-view features improve cross-scenario generalization, and (ii) co-training substantially boosts node-level detection under label scarcity, enabling practical deployment on diverse attack scenarios.
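The abstract describes two key ideas: unsupervised node-level anomaly detection, and multiple per-view sub-models refined jointly in a collaborative (co-training) loop. A minimal sketch of that pattern is below. It is not the paper's actual method: the centroid-distance anomaly scorer, the two hand-made feature views, and all function names are illustrative assumptions standing in for the provenance-graph sub-models.

```python
# Toy co-training sketch for multi-view node-level anomaly detection.
# Each "view" scores nodes by distance to the centroid of the nodes it
# currently trusts as benign; each round, every view rebuilds its trusted
# set from the nodes the OTHER view scored as most confidently benign.
from statistics import mean

def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    dim = len(rows[0])
    return [mean(r[i] for r in rows) for i in range(dim)]

def score(x, c):
    """Squared Euclidean distance to the benign centroid (higher = more anomalous)."""
    return sum((a - b) ** 2 for a, b in zip(x, c))

def co_train(view_a, view_b, n_rounds=3, drop_k=2):
    """Return per-node anomaly scores averaged over both views.

    view_a / view_b: per-node feature vectors under the two views.
    drop_k: how many least-trusted nodes each view excludes per round.
    """
    n = len(view_a)
    trusted_a = trusted_b = list(range(n))  # start by trusting every node
    for _ in range(n_rounds):
        ca = centroid([view_a[i] for i in trusted_a])
        cb = centroid([view_b[i] for i in trusted_b])
        scores_a = [score(x, ca) for x in view_a]
        scores_b = [score(x, cb) for x in view_b]
        # Exchange pseudo-labels: each view keeps the nodes the other
        # view currently scores as most benign (lowest anomaly score).
        trusted_a = sorted(range(n), key=lambda i: scores_b[i])[: n - drop_k]
        trusted_b = sorted(range(n), key=lambda i: scores_a[i])[: n - drop_k]
    ca = centroid([view_a[i] for i in trusted_a])
    cb = centroid([view_b[i] for i in trusted_b])
    return [(score(view_a[i], ca) + score(view_b[i], cb)) / 2 for i in range(n)]

# Toy example: node 4 is anomalous under both views.
va = [[0.1, 0.2], [0.2, 0.1], [0.0, 0.1], [0.1, 0.0], [5.0, 5.0]]
vb = [[1.0, 0.9], [0.9, 1.1], [1.1, 1.0], [1.0, 1.0], [9.0, 0.0]]
scores = co_train(va, vb)
flagged = max(range(len(scores)), key=lambda i: scores[i])
print(flagged)  # node 4 receives the highest fused anomaly score
```

The design point is the pseudo-label exchange: because each view refits only on nodes the other view considers benign, an attack node that looks borderline in one view is still excluded if the other view flags it, which is the intuition behind fusing complementary views under label scarcity.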
Related papers
- Semantic-Aware Advanced Persistent Threat Detection Using Autoencoders on LLM-Encoded System Logs [0.7611870296994722]
Advanced Persistent Threats (APTs) are among the most challenging cyberattacks to detect. Traditional statistical methods and shallow machine learning techniques often fail to detect them. This paper proposes a novel anomaly detection approach that leverages semantic embeddings.
arXiv Detail & Related papers (2026-01-30T12:38:12Z) - A Study on the Importance of Features in Detecting Advanced Persistent Threats Using Machine Learning [6.144680854063938]
Advanced Persistent Threats (APTs) pose a significant security risk to organizations and industries. Mitigating these sophisticated attacks is highly challenging due to the stealthy and persistent nature of APTs. This paper aims to analyze measurements considered when recording network traffic and conclude which features contribute more to detecting APT samples.
arXiv Detail & Related papers (2025-02-11T03:06:03Z) - Learning in Multiple Spaces: Few-Shot Network Attack Detection with Metric-Fused Prototypical Networks [47.18575262588692]
We propose a novel Multi-Space Prototypical Learning framework tailored for few-shot attack detection. By leveraging Polyak-averaged prototype generation, the framework stabilizes the learning process and effectively adapts to rare and zero-day attacks. Experimental results on benchmark datasets demonstrate that MSPL outperforms traditional approaches in detecting low-profile and novel attack types.
arXiv Detail & Related papers (2024-12-28T00:09:46Z) - A Federated Learning Approach for Multi-stage Threat Analysis in Advanced Persistent Threat Campaigns [25.97800399318373]
Multi-stage threats like advanced persistent threats (APT) pose severe risks by stealing data and destroying infrastructure.
APTs use novel attack vectors and evade signature-based detection by obfuscating their network presence.
This paper proposes a novel 3-phase unsupervised federated learning (FL) framework to detect APTs.
arXiv Detail & Related papers (2024-06-19T03:34:41Z) - Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defense mainly focuses on the known attacks, but the adversarial robustness to the unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID)
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z) - TREC: APT Tactic / Technique Recognition via Few-Shot Provenance Subgraph Learning [31.959092032106472]
We propose TREC, the first attempt to recognize APT tactics from provenance graphs by exploiting deep learning techniques.
To address the "needle in a haystack" problem, TREC segments small and compact subgraphs from a large provenance graph.
We evaluate TREC based on a customized dataset collected and made public by our team.
arXiv Detail & Related papers (2024-02-23T07:05:32Z) - Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to the model stealing attack, a nefarious endeavor geared towards duplicating the target model via query permissions.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z) - Text generation for dataset augmentation in security classification tasks [55.70844429868403]
This study evaluates the application of natural language text generators to fill this data gap in multiple security-related text classification tasks.
We find substantial benefits for GPT-3 data augmentation strategies in situations with severe limitations on known positive-class samples.
arXiv Detail & Related papers (2023-10-22T22:25:14Z) - When Measures are Unreliable: Imperceptible Adversarial Perturbations toward Top-$k$ Multi-Label Learning [83.8758881342346]
A novel loss function is devised to generate adversarial perturbations that could achieve both visual and measure imperceptibility.
Experiments on large-scale benchmark datasets demonstrate the superiority of our proposed method in attacking the top-$k$ multi-label systems.
arXiv Detail & Related papers (2023-07-27T13:18:47Z) - A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.