Attack Effect Model based Malicious Behavior Detection
- URL: http://arxiv.org/abs/2506.05001v1
- Date: Thu, 05 Jun 2025 13:10:58 GMT
- Title: Attack Effect Model based Malicious Behavior Detection
- Authors: Limin Wang, Lei Bu, Muzimiao Zhang, Shihong Cang, Kai Ye
- Abstract summary: We present FEAD (Focus-Enhanced Attack Detection), a framework that addresses three key challenges of traditional security detection methods. FEAD uses an attack model-driven approach that extracts security-critical monitoring items from online attack reports for comprehensive coverage. It achieves an 8.23% higher F1-score than existing solutions with only 5.4% overhead.
- Score: 14.402324888945815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional security detection methods face three key challenges: inadequate data collection that misses critical security events, resource-intensive monitoring systems, and poor detection algorithms with high false positive rates. We present FEAD (Focus-Enhanced Attack Detection), a framework that addresses these issues through three innovations: (1) an attack model-driven approach that extracts security-critical monitoring items from online attack reports for comprehensive coverage; (2) efficient task decomposition that optimally distributes monitoring across existing collectors to minimize overhead; and (3) locality-aware anomaly analysis that leverages the clustering behavior of malicious activities in provenance graphs to improve detection accuracy. Evaluations demonstrate FEAD achieves 8.23% higher F1-score than existing solutions with only 5.4% overhead, confirming that focus-based designs significantly enhance detection performance.
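To make the locality-aware analysis in (3) concrete, the following Python sketch illustrates one way per-node anomaly scores on a provenance graph could be adjusted so that clustered suspicious activity is promoted. This is an illustrative assumption, not FEAD's published algorithm; the `boost` weight, the example node names, and the scoring rule are all invented for this sketch.

```python
# Hypothetical sketch of locality-aware anomaly scoring on a provenance graph.
# Not FEAD's implementation: it only illustrates the idea that malicious
# activities cluster, so a node's score is raised when its neighbors are
# also suspicious.

from collections import defaultdict

def locality_aware_scores(edges, base_scores, boost=0.5):
    """edges: iterable of (src, dst) provenance edges (e.g. process -> file).
    base_scores: dict node -> anomaly score in [0, 1] from a per-node detector.
    Returns adjusted scores mixing in the mean score of each node's neighbors."""
    neighbors = defaultdict(set)
    for src, dst in edges:
        neighbors[src].add(dst)
        neighbors[dst].add(src)

    adjusted = {}
    for node, score in base_scores.items():
        neigh = neighbors.get(node, set())
        neigh_mean = (sum(base_scores.get(n, 0.0) for n in neigh) / len(neigh)) if neigh else 0.0
        # Convex combination: a benign-looking node surrounded by suspicious
        # neighbors is promoted; an isolated mild outlier is dampened.
        adjusted[node] = (1 - boost) * score + boost * neigh_mean
    return adjusted

if __name__ == "__main__":
    edges = [("bash", "curl"), ("curl", "/tmp/payload"), ("/tmp/payload", "cron")]
    base = {"bash": 0.2, "curl": 0.7, "/tmp/payload": 0.9, "cron": 0.3}
    print(locality_aware_scores(edges, base))
```

The convex combination is only a stand-in for whatever clustering-based scoring the paper defines; the point is that neighborhood context in the provenance graph, not just per-event features, drives the final decision.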
Related papers
- Contrastive-KAN: A Semi-Supervised Intrusion Detection Framework for Cybersecurity with scarce Labeled Data [0.0]
We propose a real-time intrusion detection system based on a semi-supervised contrastive learning framework using the Kolmogorov-Arnold Network (KAN). Our method leverages abundant unlabeled data to effectively distinguish between normal and attack behaviors. We validate our approach on three benchmark datasets, UNSW-NB15, BoT-IoT, and Gas Pipeline, using only 2.20%, 1.28%, and 8% of labeled samples, respectively.
arXiv Detail & Related papers (2025-07-14T21:02:34Z)
- It Only Gets Worse: Revisiting DL-Based Vulnerability Detectors from a Practical Perspective [14.271145160443462]
VulTegra compares scratch-trained and pre-trained DL models for vulnerability detection. State-of-the-art (SOTA) detectors still suffer from low consistency, limited real-world capabilities, and scalability challenges.
arXiv Detail & Related papers (2025-07-13T08:02:56Z)
- Backdoor Cleaning without External Guidance in MLLM Fine-tuning [76.82121084745785]
Believe Your Eyes (BYE) is a data filtering framework that leverages attention entropy patterns as self-supervised signals to identify and filter backdoor samples. It achieves near-zero attack success rates while maintaining clean-task performance. A minimal sketch of entropy-based filtering appears after this list.
arXiv Detail & Related papers (2025-05-22T17:11:58Z) - Lie Detector: Unified Backdoor Detection via Cross-Examination Framework [68.45399098884364]
We propose a unified backdoor detection framework in the semi-honest setting.<n>Our method achieves superior detection performance, improving accuracy by 5.4%, 1.6%, and 11.9% over SoTA baselines.<n> Notably, it is the first to effectively detect backdoors in multimodal large language models.
arXiv Detail & Related papers (2025-03-21T06:12:06Z) - Robust Distribution Alignment for Industrial Anomaly Detection under Distribution Shift [51.24522135151649]
Anomaly detection plays a crucial role in quality control for industrial applications.<n>Existing methods attempt to address domain shifts by training generalizable models.<n>Our proposed method demonstrates superior results compared with state-of-the-art anomaly detection and domain adaptation methods.
arXiv Detail & Related papers (2025-03-19T05:25:52Z) - AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection [9.539021752700823]
AnywhereDoor is a multi-target backdoor attack for object detection.<n>It allows adversaries to make objects disappear, fabricate new ones or mislabel them, either across all object classes or specific ones.<n>It improves attack success rates by 26% compared to adaptations of existing methods for such flexible control.
arXiv Detail & Related papers (2025-03-09T09:24:24Z)
- Decentralized Entropy-Driven Ransomware Detection Using Autonomous Neural Graph Embeddings [0.0]
The framework operates on a distributed network of nodes, eliminating single points of failure and enhancing resilience against targeted attacks. The integration of graph-based modeling and machine learning techniques enables the framework to capture complex system interactions. Case studies validate its effectiveness in real-world scenarios, showcasing its ability to detect and mitigate ransomware attacks within minutes of their initiation.
arXiv Detail & Related papers (2025-02-11T11:59:10Z)
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates in identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections; see the sketch after this list.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- Free Lunch for Generating Effective Outlier Supervision [46.37464572099351]
We propose an ultra-effective method to generate near-realistic outlier supervision.
Our proposed BayesAug significantly reduces the false positive rate by over 12.50% compared with previous schemes.
arXiv Detail & Related papers (2023-01-17T01:46:45Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Towards Reducing Labeling Cost in Deep Object Detection [61.010693873330446]
We propose a unified framework for active learning that considers both the uncertainty and the robustness of the detector.
Our method is able to pseudo-label the very confident predictions, suppressing a potential distribution drift.
arXiv Detail & Related papers (2021-06-22T16:53:09Z)
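The BYE entry in the list above filters fine-tuning samples using attention entropy as a self-supervised signal. The sketch below is a hedged illustration under assumed conventions: each sample is summarized by a normalized attention weight vector, and samples whose attention entropy is an extreme outlier (for example, unusually concentrated attention) are dropped. The z-score rule and the Dirichlet toy data are inventions for this sketch, not the paper's actual criterion.

```python
# Hypothetical sketch of entropy-based sample filtering in the spirit of BYE.
# Assumes each fine-tuning sample comes with a normalized attention weight
# vector; samples whose attention entropy is an extreme outlier are dropped.
import numpy as np

def attention_entropy(weights, eps=1e-12):
    """weights: (n_samples, n_tokens) rows summing to 1."""
    w = np.clip(weights, eps, 1.0)
    return -(w * np.log(w)).sum(axis=1)

def filter_suspicious(weights, z_thresh=2.0):
    """Flag samples whose entropy deviates strongly from the batch mean.
    The z-score rule is an assumption for illustration only."""
    ent = attention_entropy(weights)
    z = (ent - ent.mean()) / (ent.std() + 1e-12)
    keep = np.abs(z) < z_thresh
    return keep, ent

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.dirichlet(np.ones(16), size=50)          # diffuse attention
    poisoned = rng.dirichlet(np.full(16, 0.05), size=5)  # concentrated attention
    weights = np.vstack([clean, poisoned])
    keep, ent = filter_suspicious(weights)
    print("kept", int(keep.sum()), "of", len(keep), "samples")
```

BYE's actual pipeline reportedly works on attention patterns inside the fine-tuned model itself; the per-sample entropy outlier test here is only a simplified stand-in for that idea.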
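The evidential-learning entry above derives per-prediction uncertainty from detector outputs. A common Dirichlet-based formulation (standard evidential deep learning, which may differ from that paper's BEV-specific loss) is sketched below: a head outputs non-negative evidence per class, and uncertainty falls as total evidence grows.

```python
# Minimal sketch of Dirichlet-based evidential uncertainty, assuming a head
# that outputs non-negative evidence per class (standard evidential deep
# learning; the cited paper's 3D-detection formulation may differ).
import numpy as np

def evidential_uncertainty(evidence):
    """evidence: (n, K) non-negative evidence per class for n predictions.
    Returns (probabilities, uncertainty), where uncertainty = K / sum(alpha)."""
    evidence = np.asarray(evidence, dtype=float)
    n, k = evidence.shape
    alpha = evidence + 1.0                 # Dirichlet concentration parameters
    strength = alpha.sum(axis=1, keepdims=True)
    probs = alpha / strength               # expected class probabilities
    uncertainty = k / strength.squeeze(1)  # high when total evidence is low
    return probs, uncertainty

if __name__ == "__main__":
    # One confident prediction (lots of evidence) and one uncertain one.
    probs, u = evidential_uncertainty([[40.0, 1.0, 1.0], [0.5, 0.4, 0.6]])
    print(np.round(probs, 2), np.round(u, 2))
```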
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.