Explaining Machine Learning DGA Detectors from DNS Traffic Data
- URL: http://arxiv.org/abs/2208.05285v1
- Date: Wed, 10 Aug 2022 11:34:26 GMT
- Title: Explaining Machine Learning DGA Detectors from DNS Traffic Data
- Authors: Giorgio Piras, Maura Pintor, Luca Demetrio and Battista Biggio
- Abstract summary: This work addresses the problem of Explainable ML in the context of botnet and DGA detection.
It is the first to concretely break down the decisions of ML classifiers devised for botnet/DGA detection.
- Score: 11.049278217301048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the most common causes of disruption to online systems is the
widely known cyber attack called Distributed Denial of Service (DDoS), in which
a network of infected devices (a botnet) is exploited to flood the computational
capacity of services at an attacker's command. Such attacks leverage the Domain
Name System (DNS) through Domain Generation Algorithms (DGAs), a stealthy
connection strategy that nonetheless leaves suspicious data patterns. To detect
these threats, recent work has largely turned to Machine Learning (ML), which
can be highly effective at analyzing and classifying massive amounts of data.
Although ML models perform well, their decision-making process remains partly
opaque. To cope with this problem, a branch of ML known as Explainable ML tries
to break down the black-box nature of classifiers and make them interpretable
and human-readable. This work addresses the problem of Explainable ML in the
context of botnet and DGA detection and, to the best of our knowledge, is the
first to concretely break down the decisions of ML classifiers devised for
botnet/DGA detection, thereby providing both global and local explanations.
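The snippet below is a minimal sketch of the kind of pipeline the abstract describes: hand-crafted character statistics over domain names, a standard classifier, and SHAP attributions for local and global explanations. The feature set, model choice, toy data, and the use of scikit-learn/shap are illustrative assumptions, not the authors' actual setup.
```python
# Minimal sketch: character-statistics features for DGA detection plus SHAP explanations.
# Illustrative only -- features, model, and toy data are assumptions, not the paper's pipeline.
import math
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import shap  # SHAP library for local/global attributions

def domain_features(domain: str) -> list[float]:
    """Simple statistics over the registered-domain label."""
    name = domain.split(".")[0].lower()
    counts = {c: name.count(c) for c in set(name)}
    entropy = -sum(n / len(name) * math.log2(n / len(name)) for n in counts.values())
    digits = sum(c.isdigit() for c in name)
    vowels = sum(c in "aeiou" for c in name)
    return [len(name), entropy, digits / len(name), vowels / len(name)]

# Toy labeled data: 0 = benign, 1 = DGA-like (for illustration only).
domains = ["google.com", "wikipedia.org", "x4kq9z7w1v.net", "qpwoeirutyalskdj.biz"]
labels = [0, 0, 1, 1]
X = np.array([domain_features(d) for d in domains])
y = np.array(labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer gives per-sample (local) attributions; averaging their magnitudes
# over a dataset gives a global view of which features drive the classifier.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)  # per-class attributions; exact shape depends on the shap version
print("SHAP attributions:", shap_values)
```
A real detector would be trained on large labeled DNS feeds and would typically use richer features (n-gram statistics, lexical dictionaries, query metadata); the explanation step is the same in spirit.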
Related papers
- LGB: Language Model and Graph Neural Network-Driven Social Bot Detection [43.92522451274129]
Malicious social bots achieve their goals by spreading misinformation and manipulating public opinion.
We propose a novel social bot detection framework, LGB, which consists of two main components: a language model (LM) and a graph neural network (GNN).
Experiments on two real-world datasets demonstrate that LGB consistently outperforms state-of-the-art baseline models by up to 10.95%.
arXiv Detail & Related papers (2024-06-13T02:47:38Z)
- MONDEO: Multistage Botnet Detection [2.259031129687683]
MONDEO is a multistage mechanism to detect DNS-based botnet malware.
It comprises four detection stages: Blacklisting/Whitelisting, Query rate analysis, DGA analysis, and Machine learning evaluation.
MONDEO was tested against several datasets to measure its efficiency and performance.
arXiv Detail & Related papers (2023-08-31T09:12:30Z)
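As a rough illustration of how such a staged design might be wired together, here is a minimal sketch with the four stages the summary names; the thresholds, helper functions, data structures, and final classifier interface are assumptions for the sketch, not MONDEO's implementation.
```python
# Illustrative four-stage pipeline in the spirit of MONDEO's stages; all thresholds,
# helper names, and the final classifier are placeholders, not MONDEO itself.
from dataclasses import dataclass
from math import log2

BLACKLIST = {"evil-c2.example"}   # stage 1 data (placeholder)
WHITELIST = {"google.com"}

@dataclass
class DnsQuery:
    domain: str
    queries_per_minute: float

def shannon_entropy(s: str) -> float:
    return -sum(s.count(c) / len(s) * log2(s.count(c) / len(s)) for c in set(s))

def classify(query: DnsQuery, ml_model=None) -> str:
    # Stage 1: blacklisting / whitelisting
    if query.domain in BLACKLIST:
        return "malicious"
    if query.domain in WHITELIST:
        return "benign"
    # Stage 2: query-rate analysis (placeholder threshold)
    if query.queries_per_minute > 100:
        return "suspicious-rate"
    # Stage 3: DGA analysis via a simple entropy heuristic (placeholder threshold)
    label = query.domain.split(".")[0]
    if shannon_entropy(label) > 3.0:
        return "suspicious-dga"
    # Stage 4: machine-learning evaluation (any fitted binary classifier)
    if ml_model is not None:
        score = ml_model.predict_proba([[len(label), shannon_entropy(label)]])[0][1]
        return "malicious" if score > 0.5 else "benign"
    return "benign"

print(classify(DnsQuery("x4kq9z7w1v.net", queries_per_minute=3.0)))
```
The cheap stages filter the bulk of traffic so that only ambiguous queries reach the more expensive ML stage.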
- LMBot: Distilling Graph Knowledge into Language Model for Graph-less Deployment in Twitter Bot Detection [41.043975659303435]
We propose a novel bot detection framework, LMBot, which distills the knowledge of graph neural networks (GNNs) into language models (LMs).
For graph-based datasets, the output of LMs provides input features for the GNN, enabling it to optimize for bot detection and distill knowledge back to the LM in an iterative, mutually enhancing process.
Our experiments demonstrate that LMBot achieves state-of-the-art performance on four Twitter bot detection benchmarks.
arXiv Detail & Related papers (2023-06-30T05:50:26Z)
- Adv-Bot: Realistic Adversarial Botnet Attacks against Network Intrusion Detection Systems [0.7829352305480285]
A growing number of researchers have recently been investigating the feasibility of adversarial attacks against machine learning-based security systems.
This study investigates the actual feasibility of adversarial attacks, specifically evasion attacks, against network-based intrusion detection systems.
Our goal is to create adversarial botnet traffic that can avoid detection while still performing all of its intended malicious functionality.
arXiv Detail & Related papers (2023-03-12T14:01:00Z)
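The summary above does not describe Adv-Bot's concrete attack procedure, so the sketch below only illustrates the general shape of an evasion attack: a fast-gradient-sign-style perturbation of a traffic feature vector against a differentiable surrogate detector. The logistic model, features, and epsilon are all assumptions; real attacks must also constrain perturbations so the traffic keeps its malicious functionality, which this toy example ignores.
```python
# Generic evasion-attack sketch (FGSM-style) against a logistic-regression surrogate detector.
# This is NOT Adv-Bot's method; the model, features, and epsilon are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Surrogate detector: p(malicious | x) = sigmoid(w.x + b), with hypothetical weights.
w = rng.normal(size=5)
b = 0.1

x = rng.normal(size=5)   # feature vector describing botnet traffic (illustrative)
y = 1.0                  # true label: malicious

# Gradient of the binary cross-entropy loss w.r.t. the input is (p - y) * w for this model.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM step: maximize the loss w.r.t. the true (malicious) label,
# which pushes the detector's score toward the benign side.
epsilon = 0.3
x_adv = x + epsilon * np.sign(grad_x)

print("detector score before:", sigmoid(w @ x + b))
print("detector score after :", sigmoid(w @ x_adv + b))
```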
- Augmenting Rule-based DNS Censorship Detection at Scale with Machine Learning [38.00013408742201]
Censorship of the Domain Name System (DNS) is a key mechanism used to restrict access to online content in many countries.
In this paper, we explore how machine learning (ML) models can help streamline the detection process.
We find that unsupervised models, trained solely on uncensored instances, can identify new instances and variations of censorship missed by existing probes.
arXiv Detail & Related papers (2023-02-03T23:36:30Z)
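A minimal sketch of the general idea described above: fit an unsupervised model only on uncensored DNS responses and flag outliers as potential censorship. The IsolationForest choice and the synthetic features are assumptions, not the paper's models or data.
```python
# Sketch: train an unsupervised model on uncensored DNS responses only, then flag outliers.
# Features and model choice are assumptions for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical numeric features per DNS response, e.g. [response_time_ms, answer_count, ttl].
uncensored = rng.normal(loc=[50.0, 2.0, 300.0], scale=[10.0, 1.0, 50.0], size=(500, 3))

model = IsolationForest(contamination="auto", random_state=0).fit(uncensored)

# New observations: one ordinary response and one anomalous one (e.g. instant bogus answer, zero TTL).
new = np.array([
    [55.0, 2.0, 310.0],
    [5.0, 1.0, 0.0],
])
print(model.predict(new))  # 1 = inlier (looks uncensored), -1 = outlier (possible censorship)
```
Training only on uncensored instances is what lets the model surface new censorship variants that rule-based probes were never written to catch.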
- Detection, Explanation and Filtering of Cyber Attacks Combining Symbolic and Sub-Symbolic Methods [0.0]
We explore combining symbolic and sub-symbolic methods that incorporate domain knowledge in the area of cybersecurity.
The proposed method is shown to produce intuitive explanations for alerts for a diverse range of scenarios.
Not only do the explanations provide deeper insights into the alerts, but they also reduce false positive alerts by 66%, and by 93% when the fidelity metric is included.
arXiv Detail & Related papers (2022-12-23T09:03:51Z)
- Adversarial Machine Learning Threat Analysis in Open Radio Access Networks [37.23982660941893]
The Open Radio Access Network (O-RAN) is a new, open, adaptive, and intelligent RAN architecture.
In this paper, we present a systematic adversarial machine learning threat analysis for the O-RAN.
arXiv Detail & Related papers (2022-01-16T17:01:38Z)
- Utilizing XAI technique to improve autoencoder based model for computer network anomaly detection with shapley additive explanation (SHAP) [0.0]
Machine learning (ML) and Deep Learning (DL) methods are being adopted rapidly, especially in computer network security.
The lack of transparency of ML and DL based models is a major obstacle to their adoption, and they are criticized for their black-box nature.
XAI is a promising area that can improve the trustworthiness of these models by providing explanations and interpreting their outputs.
arXiv Detail & Related papers (2021-12-14T09:42:04Z)
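The sketch below shows the core idea of attributing an anomaly score to individual input features with SHAP. For brevity, a fixed linear map stands in for a trained autoencoder's reconstruction; the use of KernelExplainer, the synthetic data, and the feature count are assumptions rather than the paper's setup.
```python
# Sketch: explaining an anomaly score with SHAP. A real setup would use an autoencoder's
# reconstruction error; here a fixed linear "reconstruction" keeps the example short.
import numpy as np
import shap

rng = np.random.default_rng(1)
background = rng.normal(size=(100, 4))               # normal traffic features (synthetic)

W = rng.normal(size=(4, 4)) * 0.1 + np.eye(4) * 0.9  # stand-in for decoder(encoder(x))

def anomaly_score(X: np.ndarray) -> np.ndarray:
    """Per-sample reconstruction error; higher means more anomalous."""
    reconstruction = X @ W
    return np.square(X - reconstruction).sum(axis=1)

# KernelExplainer attributes the score of a suspicious sample to individual input features.
explainer = shap.KernelExplainer(anomaly_score, background[:50])
suspicious = np.array([[5.0, 0.1, -4.0, 0.2]])
print(explainer.shap_values(suspicious))
```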
- Explaining Network Intrusion Detection System Using Explainable AI Framework [0.5076419064097734]
Intrusion detection systems are one of the important layers of cyber safety in today's world.
In this paper, we use a deep neural network for network intrusion detection.
We also propose an explainable AI framework to add transparency at every stage of the machine learning pipeline.
arXiv Detail & Related papers (2021-03-12T07:15:09Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation in the input of the neural network (NN), white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Automating Botnet Detection with Graph Neural Networks [106.24877728212546]
Botnets are now a major source of many network attacks, such as DDoS attacks and spam.
In this paper, we consider the neural network design challenges of using modern deep learning techniques to learn policies for botnet detection automatically.
arXiv Detail & Related papers (2020-03-13T15:34:33Z)
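As a rough illustration of the building block such detectors rely on, here is a tiny numpy sketch of one graph-convolution step over a host communication graph; the graph, features, and weights are synthetic stand-ins, not the paper's architecture.
```python
# Tiny numpy sketch of one graph-convolution step, the building block of GNN-based
# botnet detectors that classify nodes from communication-graph structure. Synthetic data.
import numpy as np

rng = np.random.default_rng(7)

# Communication graph over 5 hosts (symmetric adjacency), plus self-loops.
A = np.array([
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)
A_hat = A + np.eye(5)

# Symmetric normalization: D^-1/2 (A + I) D^-1/2
deg = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

X = rng.normal(size=(5, 3))   # per-host flow features (illustrative)
W = rng.normal(size=(3, 2))   # layer weights (learned in practice, random here)

H = np.maximum(A_norm @ X @ W, 0.0)   # one GCN-style layer: aggregate neighbors, then ReLU
print(H)
```
Stacking such layers lets each host's representation absorb information from multi-hop neighbors, which is what makes coordinated botnet structure detectable from topology alone.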