MOAT: Towards Safe BPF Kernel Extension
- URL: http://arxiv.org/abs/2301.13421v3
- Date: Fri, 7 Jun 2024 03:23:39 GMT
- Title: MOAT: Towards Safe BPF Kernel Extension
- Authors: Hongyi Lu, Shuai Wang, Yechang Wu, Wanning He, Fengwei Zhang,
- Abstract summary: The Linux kernel extensively uses the Berkeley Packet Filter (BPF) to allow user-written BPF applications to execute in the kernel space.
Recent attacks show that BPF programs can evade security checks and gain unauthorized access to kernel memory.
We present MOAT, a system that isolates potentially malicious BPF programs using Intel Memory Protection Keys (MPK)
- Score: 10.303142268182116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Linux kernel extensively uses the Berkeley Packet Filter (BPF) to allow user-written BPF applications to execute in the kernel space. The BPF employs a verifier to check the security of user-supplied BPF code statically. Recent attacks show that BPF programs can evade security checks and gain unauthorized access to kernel memory, indicating that the verification process is not flawless. In this paper, we present MOAT, a system that isolates potentially malicious BPF programs using Intel Memory Protection Keys (MPK). Enforcing BPF program isolation with MPK is not straightforward; MOAT is designed to alleviate technical obstacles, such as limited hardware keys and the need to protect a wide variety of BPF helper functions. We implement MOAT on Linux (ver. 6.1.38), and our evaluation shows that MOAT delivers low-cost isolation of BPF programs under mainstream use cases, such as isolating a BPF packet filter with only 3% throughput loss.
Related papers
- Bridge the Points: Graph-based Few-shot Segment Anything Semantically [79.1519244940518]
Recent advancements in pre-training techniques have enhanced the capabilities of vision foundation models.
Recent studies extend SAM to Few-shot Semantic Segmentation (FSS).
We propose a simple yet effective approach based on graph analysis.
arXiv Detail & Related papers (2024-10-09T15:02:28Z)
- SafeBPF: Hardware-assisted Defense-in-depth for eBPF Kernel Extensions [1.0499611180329806]
We introduce SafeBPF, a general design that isolates eBPF programs from the rest of the kernel to prevent memory safety vulnerabilities from being exploited.
We show that SafeBPF incurs up to 4% overhead on macrobenchmarks while achieving desired security properties.
arXiv Detail & Related papers (2024-09-11T13:58:51Z)
- VeriFence: Lightweight and Precise Spectre Defenses for Untrusted Linux Kernel Extensions [0.07696728525672149]
Linux's extended Berkeley Packet Filter (BPF) avoids user/kernel transitions by just-in-time compiling user-provided bytecode.
To mitigate the Spectre vulnerabilities disclosed in 2018, defenses that reject potentially dangerous programs had to be deployed.
We propose VeriFence, an enhancement to the kernel's Spectre defenses that reduces the number of BPF application programs rejected from 54% to zero.
arXiv Detail & Related papers (2024-04-30T12:34:23Z)
- KEN: Kernel Extensions using Natural Language [1.293634133244466]
KEN is a framework that allows kernel extensions to be written in natural language.
It synthesizes an eBPF program given a user's English-language prompt.
We show that KEN produces correct eBPF programs in 80% of cases, a factor-of-2.67 improvement over an LLM-empowered program-synthesis baseline.
arXiv Detail & Related papers (2023-12-09T10:45:54Z)
- Iterative Shallow Fusion of Backward Language Model for End-to-End Speech Recognition [48.328702724611496]
We propose a new shallow fusion (SF) method to exploit an external backward language model (BLM) for end-to-end automatic speech recognition (ASR)
We iteratively apply the BLM to partial ASR hypotheses in the backward direction (i.e., from the possible next token to the start symbol) during decoding, substituting the newly calculated BLM scores for the scores calculated at the last iteration.
In experiments using an attention-based encoder-decoder ASR system, we confirmed that ISF shows comparable performance with SF using the FLM.
arXiv Detail & Related papers (2023-10-17T05:44:10Z)
- BRF: eBPF Runtime Fuzzer [3.895892630722353]
This paper introduces the BPF Fuzzer (BRF), a fuzzer that can satisfy the semantics and dependencies required by the verifier and the eBPF subsystem.
BRF achieves 101% higher code coverage. As a result, BRF has so far found 4 vulnerabilities (some of which have been assigned CVE numbers) in the eBPF runtime.
arXiv Detail & Related papers (2023-05-15T16:42:51Z)
- Does Continual Learning Equally Forget All Parameters? [55.431048995662714]
Distribution shift (e.g., task or domain shift) in continual learning (CL) usually results in catastrophic forgetting of neural networks.
We study which modules in neural networks are more prone to forgetting by investigating their training dynamics during CL.
We propose a simpler and more efficient method that entirely removes every-step replay, replacing it with FPF triggered periodically only $k$ times during CL.
arXiv Detail & Related papers (2023-04-09T04:36:24Z)
- Make Landscape Flatter in Differentially Private Federated Learning [69.78485792860333]
We propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient perturbation to mitigate the negative impact of DP.
Specifically, DP-FedSAM generates locally flat models with better stability and weight robustness, which results in local updates of small norm and robustness to DP noise.
Our algorithm achieves state-of-the-art (SOTA) performance compared with existing SOTA baselines in DPFL.
arXiv Detail & Related papers (2023-03-20T16:27:36Z)
- Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks [53.81129518924231]
We conduct the first study of backdoor attacks in the pFL framework.
We show that pFL methods with partial model-sharing can significantly boost robustness against backdoor attacks.
We propose a lightweight defense method, Simple-Tuning, which empirically improves defense performance against backdoor attacks.
arXiv Detail & Related papers (2023-02-03T11:58:14Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- A flow-based IDS using Machine Learning in eBPF [3.631024220680066]
eBPF is a technology that allows dynamically loading pieces of code into the Linux kernel.
We show that it is possible to develop a flow-based network intrusion detection system based on machine learning entirely in eBPF.
arXiv Detail & Related papers (2021-02-19T15:20:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.