Robustness of ML-Enhanced IDS to Stealthy Adversaries
- URL: http://arxiv.org/abs/2104.10742v1
- Date: Wed, 21 Apr 2021 20:00:31 GMT
- Title: Robustness of ML-Enhanced IDS to Stealthy Adversaries
- Authors: Vance Wong and John Emanuello
- Abstract summary: Intrusion Detection Systems (IDS) enhanced with Machine Learning (ML) have demonstrated the capacity to efficiently build a prototype of "normal" cyber behaviors.
Because these models are largely black boxes, their acceptance requires proof of robustness to stealthy adversaries.
We train an autoencoder-based anomaly detection system on network activity with various proportions of malicious activity mixed in and demonstrate that it is robust to this sort of poisoning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intrusion Detection Systems (IDS) enhanced with Machine Learning (ML) have
demonstrated the capacity to efficiently build a prototype of "normal" cyber
behaviors in order to detect cyber threats' activity with greater accuracy than
traditional rule-based IDS. Because these are largely black boxes, their
acceptance requires proof of robustness to stealthy adversaries. Since it is
impossible to build a baseline from activity completely clean of that of
malicious cyber actors (outside of controlled experiments), the training data
for deployed models will be poisoned with examples of activity that analysts
would want to be alerted about. We train an autoencoder-based anomaly detection
system on network activity with various proportions of malicious activity mixed
in and demonstrate that it is robust to this sort of poisoning.
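
The abstract gives no implementation details, so the following is only a minimal illustrative sketch of the kind of experiment it describes: an autoencoder is trained on feature vectors in which a small fraction of records are malicious ("poisoned"), and records are flagged when their reconstruction error exceeds a threshold. The synthetic Gaussian features, 5% poisoning rate, layer sizes, and 95th-percentile alert threshold are all assumptions for illustration, not the authors' setup.

```python
# Illustrative sketch only: an autoencoder anomaly detector trained on a "baseline"
# that is deliberately poisoned with a small fraction of malicious records.
# Synthetic Gaussian features stand in for network-flow features; the 5% poisoning
# rate, layer sizes, and 95th-percentile alert threshold are assumptions.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
n_benign, n_malicious, n_features = 9500, 500, 20         # ~5% of training data is malicious
benign = rng.normal(0.0, 1.0, size=(n_benign, n_features)).astype(np.float32)
malicious = rng.normal(3.0, 1.5, size=(n_malicious, n_features)).astype(np.float32)
x_train = torch.from_numpy(np.vstack([benign, malicious]))  # poisoned training set

autoencoder = nn.Sequential(
    nn.Linear(n_features, 8), nn.ReLU(),
    nn.Linear(8, 3), nn.ReLU(),                            # low-dimensional bottleneck
    nn.Linear(3, 8), nn.ReLU(),
    nn.Linear(8, n_features),
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

for epoch in range(300):                                    # minimize reconstruction error
    opt.zero_grad()
    loss = ((autoencoder(x_train) - x_train) ** 2).mean()
    loss.backward()
    opt.step()

with torch.no_grad():
    train_err = ((autoencoder(x_train) - x_train) ** 2).mean(dim=1)
threshold = torch.quantile(train_err, 0.95)                 # alert threshold (assumed)

# Score held-out records: high reconstruction error => flagged as anomalous.
test_benign = torch.from_numpy(rng.normal(0.0, 1.0, size=(1000, n_features)).astype(np.float32))
test_malicious = torch.from_numpy(rng.normal(3.0, 1.5, size=(1000, n_features)).astype(np.float32))
with torch.no_grad():
    err_b = ((autoencoder(test_benign) - test_benign) ** 2).mean(dim=1)
    err_m = ((autoencoder(test_malicious) - test_malicious) ** 2).mean(dim=1)
print(f"false-positive rate: {(err_b > threshold).float().mean():.2%}")
print(f"detection rate:      {(err_m > threshold).float().mean():.2%}")
```

Re-running the sketch with different values of n_malicious imitates the paper's sweep over poisoning proportions and shows whether the detection and false-positive rates degrade as the training mix gets dirtier.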
Related papers
- Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z)
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into a Graph Neural Network (GNN)-based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
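
As a rough illustration of the masking idea described above (not MASKDROID's actual architecture or training objective), the sketch below masks random node features on a small synthetic graph, propagates them with a single GCN-style layer, and trains a decoder to reconstruct the masked features. The graph size, feature width, and mask rate are assumptions.

```python
# Rough illustration of masked-graph reconstruction (not MASKDROID's actual model):
# randomly mask node features on a small synthetic graph, propagate with one
# GCN-style layer, and train a decoder to recover the masked features.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_nodes, n_feats, mask_rate = 50, 16, 0.3
x = torch.randn(n_nodes, n_feats)                          # node features
adj = (torch.rand(n_nodes, n_nodes) < 0.1).float()
adj = ((adj + adj.T) > 0).float() + torch.eye(n_nodes)     # symmetric, with self-loops
d_inv_sqrt = adj.sum(1).pow(-0.5)
a_hat = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]    # normalized adjacency

encoder = nn.Linear(n_feats, 32)
decoder = nn.Linear(32, n_feats)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)

for step in range(200):
    mask = torch.rand(n_nodes) < mask_rate                 # nodes whose features are hidden
    x_masked = x.clone()
    x_masked[mask] = 0.0
    h = torch.relu(a_hat @ encoder(x_masked))              # one round of neighbor aggregation
    x_rec = decoder(h)
    loss = ((x_rec[mask] - x[mask]) ** 2).mean()           # recover only the masked features
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final reconstruction loss on masked nodes: {loss.item():.4f}")
```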
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- usfAD Based Effective Unknown Attack Detection Focused IDS Framework [3.560574387648533]
The growth of the Internet of Things (IoT) and the Industrial Internet of Things (IIoT) has led to an increasing range of cyber threats.
For more than a decade, researchers have delved into supervised machine learning techniques to develop Intrusion Detection Systems (IDS).
However, an IDS trained and tested on known datasets fails to detect zero-day or unknown attacks.
We propose two strategies for semi-supervised learning-based IDS in which training samples of attacks are not required.
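
usfAD itself is not implemented here; as a generic stand-in for the semi-supervised setup the summary describes (training only on benign traffic, with no attack samples), the sketch below fits scikit-learn's IsolationForest on synthetic benign records and flags deviations at test time.

```python
# Generic stand-in for a semi-supervised IDS (not the paper's usfAD algorithm):
# fit an anomaly detector on benign traffic only, then flag deviations at test time.
# All data below is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
benign_train = rng.normal(0.0, 1.0, size=(5000, 10))       # benign-only training records

detector = IsolationForest(random_state=1).fit(benign_train)

test = np.vstack([
    rng.normal(0.0, 1.0, size=(500, 10)),                  # unseen benign traffic
    rng.normal(4.0, 1.0, size=(500, 10)),                  # "unknown" attack traffic
])
pred = detector.predict(test)                               # +1 = normal, -1 = anomaly
print("records flagged as attacks:", int((pred == -1).sum()), "of", len(test))
```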
arXiv Detail & Related papers (2024-03-17T11:49:57Z)
- Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely Memorization Discrepancy, to explore defenses via model-level information.
By implicitly transferring changes in the data manipulation into changes in the model outputs, Memorization Discrepancy can discover imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
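
The paper's Memorization Discrepancy measure and DSC correction are not reproduced here; the sketch below only illustrates the general intuition of reading data manipulation off the model's outputs, by measuring how much a single update step on a batch shifts the model's logits on that batch (an exaggerated "poison" batch produces a larger shift). The model, data, and poison construction are assumptions.

```python
# Very rough illustration only: measure how much one update step on a batch shifts
# the model's outputs on that batch, on the intuition that manipulated data leaves a
# larger footprint in the outputs. This is NOT the paper's Memorization Discrepancy
# measure or its DSC defense.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def update_and_measure_shift(batch_x, batch_y):
    """Take one SGD step on the batch and return the mean squared change in logits."""
    snapshot = copy.deepcopy(model)                        # model before the update
    opt.zero_grad()
    loss_fn(model(batch_x), batch_y).backward()
    opt.step()
    with torch.no_grad():
        return ((model(batch_x) - snapshot(batch_x)) ** 2).mean().item()

clean_x, clean_y = torch.randn(64, 10), torch.randint(0, 2, (64,))
poison_x, poison_y = torch.randn(64, 10) * 5.0, torch.randint(0, 2, (64,))   # exaggerated poison

print("output shift from clean batch: ", update_and_measure_shift(clean_x, clean_y))
print("output shift from poison batch:", update_and_measure_shift(poison_x, poison_y))
```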
arXiv Detail & Related papers (2023-06-06T14:45:24Z)
- Adv-Bot: Realistic Adversarial Botnet Attacks against Network Intrusion Detection Systems [0.7829352305480285]
A growing number of researchers have recently investigated the feasibility of adversarial attacks against machine learning-based security systems.
This study investigates the practical feasibility of adversarial attacks, specifically evasion attacks, against network-based intrusion detection systems.
Our goal is to create adversarial botnet traffic that can avoid detection while still performing all of its intended malicious functionality.
arXiv Detail & Related papers (2023-03-12T14:01:00Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into failing to detect any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
We empirically find epsilon-illusory to be significantly harder to detect with automated methods than existing attacks.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Intrusion Detection using Network Traffic Profiling and Machine Learning for IoT [2.309914459672557]
A single compromised device can affect the whole network and lead to major security and physical damage.
This paper explores the potential of using network profiling and machine learning to secure IoT against cyber-attacks.
arXiv Detail & Related papers (2021-09-06T15:30:10Z)
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of high-level representations learned through self-supervision by using them in defenses against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)
- Adversarial Attacks on Machine Learning Cybersecurity Defences in Industrial Control Systems [2.86989372262348]
This paper explores how adversarial learning can be used to target supervised models by generating adversarial samples.
It also explores how such samples can improve the robustness of supervised models through adversarial training.
Overall, the classification performance of two widely used classifiers, Random Forest and J48, decreased by 16 and 20 percentage points, respectively, when adversarial samples were present.
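
As a hypothetical sketch of the evaluation pattern described above, the code below trains a Random Forest on synthetic data, measures its accuracy drop on crudely perturbed test samples, and then folds perturbed copies into the training set (adversarial training) and re-measures. The noise-based perturbation and synthetic data are stand-ins; the paper's J48 experiments and industrial control system datasets are not reproduced.

```python
# Hypothetical sketch: accuracy drop under perturbed samples, then adversarial training.
# The noise-based perturbation is a crude stand-in for real adversarial sample generation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("clean test accuracy:        ", clf.score(X_te, y_te))

rng = np.random.default_rng(0)
X_adv = X_te + rng.normal(0.0, 1.0, size=X_te.shape)        # crude evasion-style perturbation
print("accuracy under perturbation:", clf.score(X_adv, y_te))

# Adversarial training: augment the training set with perturbed copies and refit.
X_tr_adv = np.vstack([X_tr, X_tr + rng.normal(0.0, 1.0, size=X_tr.shape)])
y_tr_adv = np.concatenate([y_tr, y_tr])
clf_robust = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr_adv, y_tr_adv)
print("robust model under perturbation:", clf_robust.score(X_adv, y_te))
```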
arXiv Detail & Related papers (2020-04-10T12:05:33Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behavioural analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate attacks on such systems by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.