Novelty Detection in Network Traffic: Using Survival Analysis for
Feature Identification
- URL: http://arxiv.org/abs/2301.06229v1
- Date: Mon, 16 Jan 2023 01:40:29 GMT
- Title: Novelty Detection in Network Traffic: Using Survival Analysis for
Feature Identification
- Authors: Taylor Bradley, Elie Alhajjar, Nathaniel Bastian
- Abstract summary: Intrusion Detection Systems are an important component of many organizations' cyber defense and resiliency strategies.
One downside of these systems is their reliance on known attack signatures for detection of malicious network events.
We introduce an unconventional approach to identifying network traffic features that influence novelty detection based on survival analysis techniques.
- Score: 1.933681537640272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intrusion Detection Systems are an important component of many organizations'
cyber defense and resiliency strategies. However, one downside of these systems
is their reliance on known attack signatures for detection of malicious network
events. When it comes to unknown attack types and zero-day exploits, modern
Intrusion Detection Systems often fall short. In this paper, we introduce an
unconventional approach to identifying network traffic features that influence
novelty detection based on survival analysis techniques. Specifically, we
combine several Cox proportional hazards models and implement Kaplan-Meier
estimates to predict the probability that a classifier identifies novelty after
the injection of an unknown network attack at any given time. The proposed
model is successful at pinpointing PSH Flag Count, ACK Flag Count, URG Flag
Count, and Down/Up Ratio as the main features to impact novelty detection via
Random Forest, Bayesian Ridge, and Linear Support Vector Regression
classifiers.
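The Kaplan-Meier estimate the abstract refers to can be illustrated in a few lines. The following is a minimal pure-Python sketch on hypothetical time-to-detection data (the durations and variable names are illustrative, not taken from the paper): each duration is the time until a classifier flags the injected attack as novel, with an event flag of 0 when detection never occurs (a censored observation).

```python
def kaplan_meier(durations, events):
    """Kaplan-Meier estimate of S(t): the probability that novelty
    is still undetected at time t.

    durations -- time until the classifier flagged novelty
                 (or until observation ended)
    events    -- 1 if novelty was detected, 0 if censored
    """
    pairs = sorted(zip(durations, events))
    event_times = sorted({t for t, e in pairs if e == 1})
    survival, s = [], 1.0
    for t in event_times:
        # runs still undetected just before time t
        at_risk = sum(1 for d, _ in pairs if d >= t)
        # detections occurring exactly at time t
        detected = sum(1 for d, e in pairs if d == t and e == 1)
        s *= 1.0 - detected / at_risk
        survival.append(s)
    return event_times, survival

# Hypothetical detection times after attack injection;
# the third run is censored (the classifier never flagged the attack).
times, surv = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])
# survival drops stepwise: 1.0 -> 0.75 -> 0.5 -> 0.0
```

The complementary curve, 1 - S(t), gives the probability that a classifier has identified novelty by time t, which is the quantity the paper predicts.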
Related papers
- Feature Selection for Network Intrusion Detection [3.7414804164475983]
We present a novel information-theoretic method that facilitates the exclusion of non-informative features when detecting network intrusions.
The proposed method is based on function approximation using a neural network, which enables a version of our approach that incorporates a recurrent layer.
arXiv Detail & Related papers (2024-11-18T14:25:55Z)
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with Machine Learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system to stop network attacks before they can cause further damage to the system under attack.
arXiv Detail & Related papers (2022-01-27T16:35:37Z)
- Adversarially Robust One-class Novelty Detection [83.1570537254877]
We show that existing novelty detectors are susceptible to adversarial examples.
We propose a defense strategy that manipulates the latent space of novelty detectors to improve the robustness against adversarial examples.
arXiv Detail & Related papers (2021-08-25T10:41:29Z)
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Learning to Separate Clusters of Adversarial Representations for Robust Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by a recently introduced non-robust feature.
In this paper, we treat non-robust features as a common property of adversarial examples and deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster and to leverage that distribution for a likelihood-based adversarial detector.
arXiv Detail & Related papers (2020-12-07T07:21:18Z)
- Experimental Review of Neural-based approaches for Network Intrusion Management [8.727349339883094]
We provide an experiment-based review of neural-based methods applied to intrusion detection issues.
We offer a complete view of the most prominent neural-based techniques relevant to intrusion detection, including deep learning approaches and weightless neural networks.
Our evaluation quantifies the value of neural networks, particularly when state-of-the-art datasets are used to train the models.
arXiv Detail & Related papers (2020-09-18T18:32:24Z)
- Enhancing Robustness Against Adversarial Examples in Network Intrusion Detection Systems [1.7386735294534732]
RePO is a new mechanism for building an NIDS with the help of denoising autoencoders, capable of detecting different types of network attacks with a low false-alarm rate.
Our evaluation shows denoising autoencoders can improve detection of malicious traffic by up to 29% in a normal setting and by up to 45% in an adversarial setting.
arXiv Detail & Related papers (2020-08-09T07:04:06Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
- Detecting Adversarial Examples for Speech Recognition via Uncertainty Quantification [21.582072216282725]
Machine learning systems and, specifically, automatic speech recognition (ASR) systems are vulnerable to adversarial attacks.
In this paper, we focus on hybrid ASR systems and compare four acoustic models regarding their ability to indicate uncertainty under attack.
We are able to detect adversarial examples with an area under the receiver operating characteristic (ROC) curve of more than 0.99.
arXiv Detail & Related papers (2020-05-24T19:31:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.