A flow-based IDS using Machine Learning in eBPF
- URL: http://arxiv.org/abs/2102.09980v1
- Date: Fri, 19 Feb 2021 15:20:51 GMT
- Title: A flow-based IDS using Machine Learning in eBPF
- Authors: Maximilian Bachl, Joachim Fabini, Tanja Zseby
- Abstract summary: eBPF is a new technology which allows dynamically loading pieces of code into the Linux kernel.
We show that it is possible to develop a flow-based network intrusion detection system based on machine learning entirely in eBPF.
- Score: 3.631024220680066
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: eBPF is a new technology which allows dynamically loading pieces of code into
the Linux kernel. It can greatly speed up networking since it enables the
kernel to process certain packets without the involvement of a userspace
program. So far eBPF has been used for simple packet filtering applications
such as firewalls or Denial of Service protection. We show that it is possible
to develop a flow-based network intrusion detection system based on machine
learning entirely in eBPF. Our solution uses a decision tree and decides for
each packet whether it is malicious or not, considering the entire previous
context of the network flow. We achieve a performance increase of over 20%
compared to the same solution implemented as a userspace program.
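The per-packet, flow-context decision logic the abstract describes can be sketched in plain Python (the paper implements it in eBPF inside the kernel; the flow features, thresholds, and tree structure below are illustrative assumptions, not the authors' trained model):

```python
# Hedged sketch: classify each packet with a decision tree over
# accumulated per-flow state, mirroring the flow-based design the
# abstract describes. Features and thresholds are placeholders.

flows = {}  # flow key -> accumulated per-flow context


def classify_packet(src, dst, sport, dport, proto, length):
    key = (src, dst, sport, dport, proto)
    state = flows.setdefault(key, {"pkts": 0, "bytes": 0})

    # Update the flow context with the new packet.
    state["pkts"] += 1
    state["bytes"] += length
    mean_len = state["bytes"] / state["pkts"]

    # A tiny hard-coded decision tree over flow features
    # (hypothetical splits, for illustration only).
    if state["pkts"] > 1000:  # unusually long-lived flow
        return "malicious" if mean_len < 100 else "benign"
    return "benign"
```

In the paper's setting this logic runs in the kernel for every packet, which is what avoids the userspace round-trip and yields the reported speedup; per-flow state would live in an eBPF map rather than a Python dict.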
Related papers
- Ransomware Detection Using Machine Learning in the Linux Kernel [0.0]
Linux-based cloud environments have become lucrative targets for ransomware attacks.
We propose leveraging the extended Berkeley Packet Filter (eBPF) to collect system call information regarding active processes and infer about the data directly at the kernel level.
arXiv Detail & Related papers (2024-09-10T12:17:23Z) - VeriFence: Lightweight and Precise Spectre Defenses for Untrusted Linux Kernel Extensions [0.07696728525672149]
Linux's extended Berkeley Packet Filter (BPF) avoids user/kernel transitions by just-in-time compiling user-provided bytecode.
To mitigate the Spectre vulnerabilities disclosed in 2018, defenses which reject potentially-dangerous programs had to be deployed.
We propose VeriFence, an enhancement to the kernel's Spectre defenses that reduces the number of BPF application programs rejected from 54% to zero.
arXiv Detail & Related papers (2024-04-30T12:34:23Z) - Secure Deep Learning-based Distributed Intelligence on Pocket-sized Drones [75.80952211739185]
Palm-sized nano-drones are an appealing class of edge nodes, but their limited computational resources prevent running large deep-learning models onboard.
Adopting an edge-fog computational paradigm, we can offload part of the computation to the fog; however, this poses security concerns if the fog node, or the communication link, can not be trusted.
We propose a novel distributed edge-fog execution scheme that validates fog computation by redundantly executing a random subnetwork aboard our nano-drone.
arXiv Detail & Related papers (2023-07-04T08:29:41Z) - Reconfigurable Distributed FPGA Cluster Design for Deep Learning Accelerators [59.11160990637615]
We propose a distributed system based on low-power embedded FPGAs designed for edge computing applications.
The proposed system can simultaneously execute diverse Neural Network (NN) models, arrange the graph in a pipeline structure, and manually allocate greater resources to the most computationally intensive layers of the NN graph.
arXiv Detail & Related papers (2023-05-24T16:08:55Z) - BRF: eBPF Runtime Fuzzer [3.895892630722353]
This paper introduces the BPF Fuzzer (BRF), a fuzzer that can satisfy the semantics and dependencies required by the verifier and the eBPF subsystem.
BRF achieves 101% higher code coverage. As a result, BRF has so far managed to find 4 vulnerabilities (some of them have been assigned CVE numbers) in the eBPF runtime.
arXiv Detail & Related papers (2023-05-15T16:42:51Z) - The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on BP optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z) - MOAT: Towards Safe BPF Kernel Extension [10.303142268182116]
The Linux kernel extensively uses the Berkeley Packet Filter (BPF) to allow user-written BPF applications to execute in the kernel space.
Recent attacks show that BPF programs can evade security checks and gain unauthorized access to kernel memory.
We present MOAT, a system that isolates potentially malicious BPF programs using Intel Memory Protection Keys (MPK).
arXiv Detail & Related papers (2023-01-31T05:31:45Z) - End-to-End Sensitivity-Based Filter Pruning [49.61707925611295]
We present a sensitivity-based filter pruning algorithm (SbF-Pruner) to learn the importance scores of filters of each layer end-to-end.
Our method learns the scores from the filter weights, enabling it to account for the correlations between the filters of each layer.
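The score-then-prune step described above can be sketched as follows (SbF-Pruner learns the importance scores end-to-end from the filter weights; the simple L1-magnitude proxy and the `prune` helper below are illustrative assumptions, not the paper's method):

```python
# Hedged sketch: rank conv filters by an importance score and keep
# the top fraction. The L1-magnitude score is a stand-in for the
# learned, correlation-aware scores of SbF-Pruner.

def filter_importance(filters):
    """filters: list of flat per-filter weight lists."""
    return [sum(abs(w) for w in f) for f in filters]


def prune(filters, keep_ratio):
    """Return the sorted indices of the filters to keep."""
    scores = filter_importance(filters)
    k = max(1, int(len(filters) * keep_ratio))
    keep = sorted(range(len(filters)),
                  key=lambda i: scores[i], reverse=True)[:k]
    return sorted(keep)
```

For example, `prune([[0.1, 0.1], [1.0, 1.0], [0.5, 0.5], [0.01, 0.0]], 0.5)` keeps the two highest-magnitude filters.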
arXiv Detail & Related papers (2022-04-15T10:21:05Z) - Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression [145.04742985050808]
We analyze two popular network compression techniques, i.e. filter pruning and low-rank decomposition, in a unified sense.
By changing the way the sparsity regularization is enforced, filter pruning and low-rank decomposition can be derived accordingly.
Our approach proves its potential as it compares favorably to the state-of-the-art on several benchmarks.
arXiv Detail & Related papers (2020-03-19T17:57:26Z) - SparseIDS: Learning Packet Sampling with Reinforcement Learning [1.978587235008588]
Recurrent Neural Networks (RNNs) have been shown to be valuable for constructing Intrusion Detection Systems (IDSs) for network data.
We show that by using a novel Reinforcement Learning (RL)-based approach called SparseIDS, we can reduce the number of consumed packets by more than three fourths.
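The sampling idea behind SparseIDS can be sketched like this (in the paper the skip decision comes from a learned RL policy; the fixed-skip placeholder policy below is an assumption for illustration):

```python
# Hedged sketch: consume a packet, then let a policy decide how many
# subsequent packets of the flow to skip before consuming the next
# one. SparseIDS learns this policy with RL; here it is a stub.

def sample_packets(packets, policy):
    consumed = []
    i = 0
    while i < len(packets):
        pkt = packets[i]
        consumed.append(pkt)
        i += 1 + policy(pkt)  # skip `policy(pkt)` packets after this one
    return consumed
```

With a policy that always skips three packets, only every fourth packet is consumed, which is the kind of reduction the abstract reports.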
arXiv Detail & Related papers (2020-02-10T15:38:38Z) - A "Network Pruning Network" Approach to Deep Model Compression [62.68120664998911]
We present a filter pruning approach for deep model compression using a multitask network.
Our approach is based on learning a pruner network to prune a pre-trained target network.
The compressed model produced by our approach is generic and does not need any special hardware/software support.
arXiv Detail & Related papers (2020-01-15T20:38:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.