Behavioral Analytics for Continuous Insider Threat Detection in Zero-Trust Architectures
- URL: http://arxiv.org/abs/2601.06708v1
- Date: Sat, 10 Jan 2026 22:30:19 GMT
- Title: Behavioral Analytics for Continuous Insider Threat Detection in Zero-Trust Architectures
- Authors: Gaurav Sarraf
- Abstract summary: This framework uses the CERT Insider Threat dataset for data cleaning, normalization, and class balancing. It also employs Principal Component Analysis (PCA) for dimensionality reduction. Compared to SVM (90.1%), ANN (94.7%), and Bayes Net (94.9%), AdaBoost achieved higher performance, with 98.0% ACC, 98.3% PRE, 98.0% REC, and F1-score (F1).
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Insider threats are a particularly challenging cybersecurity problem, especially in zero-trust architectures (ZTA), where implicit trust is removed. Although the guiding principle is "never trust, always verify," attackers can still use legitimate credentials and impersonate standard user activity. In response, behavioral analytics with machine learning (ML) can continuously monitor user activity and identify anomalies. This introductory framework uses the CERT Insider Threat Dataset, applying data cleaning, normalization, and class balancing with the Synthetic Minority Oversampling Technique (SMOTE). It also employs Principal Component Analysis (PCA) for dimensionality reduction. Several benchmark models, including Support Vector Machine (SVM), Artificial Neural Network (ANN), and Bayesian Network (Bayes Net), were used as baselines against which an AdaBoost classifier was developed and evaluated. Compared to SVM (90.1%), ANN (94.7%), and Bayes Net (94.9%), AdaBoost achieved higher performance, with 98.0% accuracy (ACC), 98.3% precision (PRE), 98.0% recall (REC), and F1-score (F1). A Receiver Operating Characteristic (ROC) analysis, which provided further confirmation of its strength, yielded an Area Under the Curve (AUC) of 0.98. These results demonstrate the effectiveness and dependability of AdaBoost-based behavioral analytics for reinforcing continuous insider threat detection in zero-trust settings.
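As a rough illustration of the pipeline the abstract describes (normalize, balance classes, reduce dimensionality with PCA, classify with AdaBoost), the sketch below uses scikit-learn with a hand-rolled SMOTE-style oversampler in place of the library SMOTE implementation. The synthetic data, feature counts, and hyperparameters are placeholder assumptions standing in for the CERT dataset and the paper's actual settings.

```python
# Sketch: SMOTE-style balancing -> PCA -> AdaBoost, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def smote_like(X, y, minority=1, k=5):
    """Naive SMOTE-style oversampling: synthesize minority samples by
    interpolating between a sample and one of its k nearest minority
    neighbours, until both classes have equal counts."""
    X_min = X[y == minority]
    n_new = int((y != minority).sum() - len(X_min))
    if n_new <= 0:
        return X, y
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        j = rng.choice(np.argsort(d)[1:k + 1])  # a nearby minority point
        synth.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return (np.vstack([X, synth]),
            np.concatenate([y, np.full(n_new, minority)]))

# Imbalanced stand-in for insider-threat logs (~5% "malicious").
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Normalize, balance the training split, then fit PCA and AdaBoost.
scaler = StandardScaler().fit(X_tr)
X_tr_bal, y_tr_bal = smote_like(scaler.transform(X_tr), y_tr)
pca = PCA(n_components=10).fit(X_tr_bal)
clf = AdaBoostClassifier(n_estimators=200, random_state=0)
clf.fit(pca.transform(X_tr_bal), y_tr_bal)

# Evaluate with the same transforms fitted on training data only.
X_te_p = pca.transform(scaler.transform(X_te))
acc = accuracy_score(y_te, clf.predict(X_te_p))
auc = roc_auc_score(y_te, clf.predict_proba(X_te_p)[:, 1])
print(f"ACC={acc:.3f} AUC={auc:.3f}")
```

Note that the scaler, oversampler, and PCA are fitted on the training split only, so the evaluation metrics are not inflated by leakage from the test set.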
Related papers
- ThreatFormer-IDS: Robust Transformer Intrusion Detection with Zero-Day Generalization and Explainable Attribution [0.0]
Intrusion detection in IoT and industrial networks requires models that can detect rare attacks at low false-positive rates while remaining reliable under evolving traffic and limited labels. We propose ThreatFormer-IDS, a Transformer-based sequence modeling framework that converts flow records into time-ordered windows and learns contextual representations for robust intrusion screening. On the ToN IoT benchmark with chronological evaluation, ThreatFormer-IDS achieves AUC-ROC 0.994, AUC-PR 0.956, and Recall@1%FPR 0.910, outperforming strong tree-based and sequence baselines.
arXiv Detail & Related papers (2026-02-26T23:20:42Z) - ReasAlign: Reasoning Enhanced Safety Alignment against Prompt Injection Attack [52.17935054046577]
We present ReasAlign, a model-level solution to improve safety alignment against indirect prompt injection attacks. ReasAlign incorporates structured reasoning steps to analyze user queries, detect conflicting instructions, and preserve the continuity of the user's intended tasks.
arXiv Detail & Related papers (2026-01-15T08:23:38Z) - Hyperparameter Tuning-Based Optimized Performance Analysis of Machine Learning Algorithms for Network Intrusion Detection [0.22940141855172033]
Network Intrusion Detection Systems (NIDS) are essential for securing networks by identifying and mitigating unauthorized activities. This study explores the application of machine learning (ML) methods to improve NIDS accuracy.
arXiv Detail & Related papers (2025-12-14T15:02:48Z) - Natural Geometry of Robust Data Attribution: From Convex Models to Deep Networks [9.553350856191743]
We present a unified framework for robust attribution that extends from convex models to deep networks. For convex settings, we derive Wasserstein-Robust Influence Functions (W-RIF) with provable coverage guarantees. For deep networks, we demonstrate that Euclidean certification is rendered vacuous by spectral amplification.
arXiv Detail & Related papers (2025-12-09T20:40:27Z) - VISION: Robust and Interpretable Code Vulnerability Detection Leveraging Counterfactual Augmentation [6.576811224645293]
Graph Neural Networks (GNNs) can learn structural and logical code relationships in a data-driven manner. However, GNNs often learn 'spurious' correlations from superficial code similarities. We propose VISION, a unified framework for robust and interpretable vulnerability detection.
arXiv Detail & Related papers (2025-08-26T11:20:39Z) - A Vision-Language Pre-training Model-Guided Approach for Mitigating Backdoor Attacks in Federated Learning [43.847168319564844]
We propose an FL backdoor defense framework, named CLIP-Fed, that utilizes the zero-shot learning capabilities of vision-language pre-training models. Our scheme overcomes the limitations that Non-IID data imposes on defense effectiveness by integrating pre-aggregation and post-aggregation defense strategies.
arXiv Detail & Related papers (2025-08-14T03:39:54Z) - Beyond Benchmarks: Dynamic, Automatic And Systematic Red-Teaming Agents For Trustworthy Medical Language Models [87.66870367661342]
Large language models (LLMs) are used in AI applications in healthcare. A red-teaming framework that continuously stress-tests LLMs can reveal significant weaknesses in four safety-critical domains. A suite of adversarial agents is applied to autonomously mutate test cases, identify and evolve unsafe-triggering strategies, and evaluate responses. Our framework delivers an evolvable, scalable, and reliable safeguard for the next generation of medical AI.
arXiv Detail & Related papers (2025-07-30T08:44:22Z) - Towards Trustworthy Keylogger detection: A Comprehensive Analysis of Ensemble Techniques and Feature Selections through Explainable AI [0.0]
Keylogger detection involves monitoring for unusual system behaviors such as delays between typing and character display. In this study, we provide a comprehensive analysis of keylogger detection with traditional machine learning models.
arXiv Detail & Related papers (2025-05-22T01:04:13Z) - Enhancing IoT Cyber Attack Detection in the Presence of Highly Imbalanced Data [0.0]
This study uses hybrid sampling techniques to improve detection accuracy on imbalanced data in IoT domains. We evaluate the performance of several machine learning models on the classification of cyber-attacks. Overall, this work demonstrates the value of hybrid sampling combined with robust model and feature selection for significantly improving IoT security.
arXiv Detail & Related papers (2025-05-15T14:02:48Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - Out-of-Distribution Detection with Hilbert-Schmidt Independence Optimization [114.43504951058796]
Outlier detection tasks have been playing a critical role in AI safety.
Deep neural network classifiers usually tend to incorrectly classify out-of-distribution (OOD) inputs into in-distribution classes with high confidence.
We propose an alternative probabilistic paradigm that is both practically useful and theoretically viable for the OOD detection tasks.
arXiv Detail & Related papers (2022-09-26T15:59:55Z) - Explicit Tradeoffs between Adversarial and Natural Distributional Robustness [48.44639585732391]
In practice, models need to enjoy both types of robustness to ensure reliability.
In this work, we show that in fact, explicit tradeoffs exist between adversarial and natural distributional robustness.
arXiv Detail & Related papers (2022-09-15T19:58:01Z) - Mitigating Neural Network Overconfidence with Logit Normalization [37.106755943446515]
Neural networks produce abnormally high confidence for both in- and out-of-distribution inputs. We show that this issue can be mitigated through Logit Normalization (LogitNorm). Our method is motivated by the analysis that the norm of the logit keeps increasing during training, leading to overconfident output.
arXiv Detail & Related papers (2022-05-19T03:45:18Z) - Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z) - RobustBench: a standardized adversarial robustness benchmark [84.50044645539305]
A key challenge in benchmarking robustness is that its evaluation is often error-prone, leading to overestimation of robustness.
We evaluate adversarial robustness with AutoAttack, an ensemble of white- and black-box attacks.
We analyze the impact of robustness on the performance on distribution shifts, calibration, out-of-distribution detection, fairness, privacy leakage, smoothness, and transferability.
arXiv Detail & Related papers (2020-10-19T17:06:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.