Noise Robust One-Class Intrusion Detection on Dynamic Graphs
- URL: http://arxiv.org/abs/2508.14192v1
- Date: Tue, 19 Aug 2025 18:36:11 GMT
- Title: Noise Robust One-Class Intrusion Detection on Dynamic Graphs
- Authors: Aleksei Liuliakov, Alexander Schulz, Luca Hermes, Barbara Hammer
- Abstract summary: This study introduces a probabilistic version of the Temporal Graph Network Support Vector Data Description (TGN-SVDD) model, designed to enhance detection accuracy in the presence of input noise. Our experiments on a modified CIC-IDS2017 data set with synthetic noise demonstrate significant improvements in detection performance compared to the baseline TGN-SVDD model, especially as noise levels increase.
- Score: 46.453758431767724
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the domain of network intrusion detection, robustness against contaminated and noisy data inputs remains a critical challenge. This study introduces a probabilistic version of the Temporal Graph Network Support Vector Data Description (TGN-SVDD) model, designed to enhance detection accuracy in the presence of input noise. By predicting the parameters of a Gaussian distribution for each network event, our model is able to naturally address noisy adversarial inputs and improve robustness compared to a baseline model. Our experiments on a modified CIC-IDS2017 data set with synthetic noise demonstrate significant improvements in detection performance compared to the baseline TGN-SVDD model, especially as noise levels increase.
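The core idea, predicting a mean and variance per event and scoring it with the Gaussian negative log-likelihood, can be sketched as follows (a hypothetical helper, not the authors' code):

```python
import math

def gaussian_nll(mu, log_var, target):
    """Negative log-likelihood of `target` under N(mu, exp(log_var)).

    Predicting the log-variance keeps the variance positive; on noisy
    events the model can widen its predicted variance, which lowers the
    penalty for a large residual instead of forcing a tight fit.
    """
    var = math.exp(log_var)
    return 0.5 * (math.log(2 * math.pi) + log_var + (target - mu) ** 2 / var)
```

For example, a residual of 3 is penalized far less when the predicted variance is widened (`gaussian_nll(0.0, 2.0, 3.0) < gaussian_nll(0.0, 0.0, 3.0)`), which is the mechanism that lets a probabilistic head absorb input noise.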
Related papers
- Transformer-Based Indirect Structural Health Monitoring of Rail Infrastructure with Attention-Driven Detection and Localization of Transient Defects [1.1782896991259]
We introduce an incremental synthetic data benchmark designed to evaluate model robustness against progressively complex challenges. We evaluate several established unsupervised models alongside our proposed Attention-Focused Transformer. Our proposed model achieves accuracy comparable to the state-of-the-art solution while demonstrating better inference speed.
arXiv Detail & Related papers (2025-10-08T23:01:53Z)
- On the Shape of Latent Variables in a Denoising VAE-MoG: A Posterior Sampling-Based Study [51.56484100374058]
We explore the latent space of a denoising variational autoencoder with a mixture-of-Gaussians prior (VAE-MoG). To evaluate how well the model captures the underlying structure, we use Hamiltonian Monte Carlo (HMC) to draw posterior samples conditioned on clean inputs and compare them to the encoder's outputs from noisy data. Although the model reconstructs signals accurately, statistical comparisons reveal a clear mismatch in the latent space.
arXiv Detail & Related papers (2025-09-29T18:33:09Z)
- Detecting and Rectifying Noisy Labels: A Similarity-based Approach [4.686586017523293]
Label noise in datasets can significantly damage the performance and robustness of deep neural networks (DNNs) trained on them. We propose post-hoc, model-agnostic noise detection and rectification methods utilizing the penultimate feature from a DNN. Our idea is based on the observation that the similarity between the penultimate feature of a mislabeled data point and its true-class data points is higher than that for data points from other classes.
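One way to operationalize this similarity observation is to compare each point's penultimate feature against per-class centroids with cosine similarity and flag points that sit closer to another class (a simplified sketch with hypothetical names, not the paper's exact method):

```python
import numpy as np

def flag_noisy_labels(features, labels):
    """Flag points whose penultimate feature is more similar (cosine)
    to another class's centroid than to their labeled class's centroid.

    Returns a boolean array: True marks a suspected mislabeled point.
    """
    classes = np.unique(labels)
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    centroids = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
    sims = feats @ centroids.T            # cosine similarity, (n, n_classes)
    best = classes[np.argmax(sims, axis=1)]
    return best != labels
```

A flagged point could then be rectified by reassigning it to its most similar class, in the spirit of the rectification step the summary describes.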
arXiv Detail & Related papers (2025-09-28T16:41:56Z)
- Robust and Noise-resilient Long-Term Prediction of Spatiotemporal Data Using Variational Mode Graph Neural Networks with 3D Attention [11.356542363919058]
This paper focuses on improving the robustness of long-term spatiotemporal prediction using a variational mode graph convolutional network (VMGCN). The deep learning network for this task relies on historical data inputs, yet real-time data can be corrupted by sensor noise. We model this noise as independent and identically distributed (i.i.d.) Gaussian noise and incorporate it into the LargeST traffic volume dataset.
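The corruption model described here is straightforward to reproduce; a minimal sketch of injecting i.i.d. zero-mean Gaussian noise into a clean signal (generic helper, not the paper's pipeline):

```python
import numpy as np

def add_gaussian_noise(x, noise_std, seed=0):
    """Corrupt a clean signal with i.i.d. zero-mean Gaussian noise,
    mimicking sensor noise on real-time inputs. Fixing the seed makes
    the synthetic corruption reproducible across experiments."""
    rng = np.random.default_rng(seed)
    return x + rng.normal(loc=0.0, scale=noise_std, size=x.shape)
```

Sweeping `noise_std` over increasing values is the usual way to produce the "noise level" axis of a robustness curve.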
arXiv Detail & Related papers (2025-04-09T07:49:45Z)
- Noise Augmented Fine Tuning for Mitigating Hallucinations in Large Language Models [1.0579965347526206]
Large language models (LLMs) often produce inaccurate or misleading content, known as hallucinations. Noise-Augmented Fine-Tuning (NoiseFiT) is a novel framework that leverages adaptive noise injection to enhance model robustness. NoiseFiT selectively perturbs layers identified as either high-SNR (more robust) or low-SNR (potentially under-regularized) using dynamically scaled Gaussian noise.
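The SNR-dependent scaling can be illustrated with a toy rule that shrinks the injected noise as a layer's activation SNR grows (an illustrative scaling rule, not NoiseFiT's actual schedule):

```python
import numpy as np

def snr(activations):
    """SNR proxy for a layer: |mean| / std of its activations."""
    return np.abs(activations.mean()) / (activations.std() + 1e-8)

def inject_scaled_noise(weights, activations, base_std, seed=0):
    """Perturb a layer's weights with Gaussian noise whose scale shrinks
    as the layer's activation SNR grows (hypothetical scaling rule)."""
    rng = np.random.default_rng(seed)
    scale = base_std / (1.0 + snr(activations))
    return weights + rng.normal(0.0, scale, size=weights.shape)
```

Under this rule a near-constant, high-SNR layer receives almost no perturbation, while a zero-mean, low-SNR layer is perturbed at close to the full `base_std`.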
arXiv Detail & Related papers (2025-04-04T09:27:19Z)
- Noisy Test-Time Adaptation in Vision-Language Models [73.14136220844156]
Test-time adaptation (TTA) aims to address distribution shifts between source and target data by relying solely on target data during testing. This paper introduces Zero-Shot Noisy TTA (ZS-NTTA), focusing on adapting the model to target data with noisy samples during test time in a zero-shot manner. We introduce the Adaptive Noise Detector (AdaND), which utilizes the frozen model's outputs as pseudo-labels to train a noise detector.
arXiv Detail & Related papers (2025-02-20T14:37:53Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and could serve as a simple yet strong baseline in this under-developed area.
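The energy score underlying such detectors is a negative log-sum-exp of the classifier logits; a minimal, numerically stable sketch (generic energy-based scoring, not GNNSafe's graph-propagated variant):

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """Energy E(x) = -T * logsumexp(logits / T), computed stably.

    Confident in-distribution inputs produce low (very negative) energy;
    flat, uncertain logits produce higher energy and can be flagged OOD
    by thresholding this score."""
    z = np.asarray(logits, dtype=float) / temperature
    m = z.max(axis=-1)
    lse = m + np.log(np.exp(z - m[..., None]).sum(axis=-1))
    return -temperature * lse
```

Subtracting the max before exponentiating avoids overflow for large logits, which matters when scoring confident predictions.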
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Improving the Robustness of Summarization Models by Detecting and Removing Input Noise [50.27105057899601]
We present a large empirical study quantifying the sometimes severe loss in performance from different types of input noise for a range of datasets and model sizes.
We propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any training, auxiliary models, or even prior knowledge of the type of noise.
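A training-free filter of this kind can be sketched as an outlier test on a per-span noisiness score (a hypothetical criterion; the paper's actual detector may differ):

```python
import statistics

def drop_noisy_spans(sentences, score_fn, z_thresh=2.0):
    """Keep only input spans whose noisiness score is not an outlier.

    `score_fn` is any per-span noisiness estimate, e.g. a language
    model's per-token negative log-likelihood. Spans scoring more than
    `z_thresh` standard deviations above the mean are dropped."""
    scores = [score_fn(s) for s in sentences]
    mu = statistics.fmean(scores)
    sd = statistics.pstdev(scores) or 1.0  # guard against zero spread
    return [s for s, sc in zip(sentences, scores) if (sc - mu) / sd <= z_thresh]
```

Because the filter only reads scores at inference, it needs no training, auxiliary models, or prior knowledge of the noise type, matching the constraint the summary describes.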
arXiv Detail & Related papers (2022-12-20T00:33:11Z)
- ADASYN-Random Forest Based Intrusion Detection Model [0.0]
Intrusion detection has been a key topic in the field of cyber security, and common network threats nowadays are varied and constantly evolving.
To address the severe class imbalance of intrusion detection datasets, the ADASYN oversampling method is used to balance them.
The resulting model shows better performance, generalization ability, and robustness than traditional machine learning models.
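ADASYN's key idea is to generate more synthetic minority samples where the minority class is hardest to learn, i.e. where majority points dominate the neighbourhood. A simplified sketch of that mechanism (not the full algorithm; production code would use a library implementation such as imbalanced-learn's `ADASYN`):

```python
import numpy as np

def adasyn_like_oversample(X_min, X_maj, k=5, seed=0):
    """Simplified ADASYN-style oversampling: minority points whose
    k nearest neighbours contain more majority points receive more
    synthetic interpolated samples."""
    rng = np.random.default_rng(seed)
    X_all = np.vstack([X_min, X_maj])
    is_maj = np.r_[np.zeros(len(X_min)), np.ones(len(X_maj))]
    need = len(X_maj) - len(X_min)            # samples needed to balance
    # hardness r_i: fraction of majority points among the k neighbours
    d = np.linalg.norm(X_min[:, None] - X_all[None], axis=2)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]    # skip self at distance 0
    r = is_maj[nn].mean(axis=1)
    g = np.rint(r / (r.sum() + 1e-12) * need).astype(int)
    synth = []
    for i, gi in enumerate(g):
        for _ in range(gi):
            # interpolate toward a random minority point (full ADASYN
            # picks among the minority k-nearest neighbours)
            j = rng.integers(len(X_min))
            lam = rng.random()
            synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synth) if synth else np.empty((0, X_min.shape[1]))
```

The balanced data would then be fed to the random forest classifier the title refers to.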
arXiv Detail & Related papers (2021-05-10T12:22:36Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid updating scheme, matching the accuracy of softmax models.
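The single-forward-pass rejection step can be illustrated with RBF scores against learned class centroids (a toy sketch of the idea, with hypothetical names and a hypothetical threshold):

```python
import numpy as np

def duq_scores(feature, centroids, length_scale=1.0):
    """RBF score per class centroid: exp(-||c - f||^2 / (2 * l^2)).

    A low maximum score means the feature is far from every centroid."""
    d2 = ((centroids - feature) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * length_scale ** 2))

def predict_or_reject(feature, centroids, threshold=0.5):
    """Return the closest class index, or None to reject as OOD."""
    s = duq_scores(feature, centroids)
    return int(np.argmax(s)) if s.max() >= threshold else None
```

Because both the class prediction and the rejection decision come from the same distance computation, no sampling or ensembling is needed at test time.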
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.