BYOS: Knowledge-driven Large Language Models Bring Your Own Operating System More Excellent
- URL: http://arxiv.org/abs/2503.09663v2
- Date: Thu, 29 May 2025 00:35:55 GMT
- Title: BYOS: Knowledge-driven Large Language Models Bring Your Own Operating System More Excellent
- Authors: Hongyu Lin, Yuchen Li, Haoran Luo, Kaichun Yao, Libo Zhang, Mingjie Xing, Yanjun Wu
- Abstract summary: Kernel tuning involves systematically adjusting kernel configurations to optimize system performance. Despite recent advancements in large language models (LLMs), kernel tuning remains a critical challenge. We propose BYOS, an LLM-powered framework that automates kernel tuning.
- Score: 32.81416809245337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Operating System (OS) kernel tuning involves systematically adjusting kernel configurations to optimize system performance. Despite recent advancements in large language models (LLMs), kernel tuning remains a critical challenge due to: (1) the semantic gap between abstract tuning objectives and concrete configuration options, (2) insufficient environmental interaction, which induces LLM hallucinations, and (3) the rapid evolution of kernel versions. To address these challenges, we propose BYOS, an LLM-powered framework that automates kernel tuning through three key innovations: structured knowledge construction and mapping, knowledge-driven configuration generation, and continuous knowledge maintenance. Extensive experiments show that BYOS achieves 7.1%-155.4% performance improvements over default configurations across standard OS benchmarks and real-world applications, demonstrating that structured knowledge representation can overcome key limitations of pure LLM solutions for system optimization. Our code is available at https://github.com/LHY-24/BYOS.
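The mapping from abstract objectives to concrete options lends itself to a small illustration. The sketch below is not BYOS's implementation: the objective names and the objective-to-option mapping are invented for the example, and only the kernel config symbols themselves are real.

```python
# Illustrative sketch (not the paper's code): map an abstract tuning
# objective to concrete kernel config options via a small structured
# knowledge base, in the spirit of knowledge-driven config generation.

KNOWLEDGE_BASE = {
    # objective -> config options assumed (hypothetically) to affect it
    "network_throughput": {
        "CONFIG_NET_RX_BUSY_POLL": "y",
        "CONFIG_XPS": "y",
    },
    "scheduling_latency": {
        "CONFIG_PREEMPT": "y",
        "CONFIG_HZ_1000": "y",
    },
}

def generate_config(objective: str) -> str:
    """Return a kernel .config fragment for a tuning objective."""
    options = KNOWLEDGE_BASE.get(objective)
    if options is None:
        raise KeyError(f"no knowledge mapped for objective: {objective}")
    return "\n".join(f"{name}={value}" for name, value in options.items())

if __name__ == "__main__":
    print(generate_config("scheduling_latency"))
```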
Related papers
- MultiKernelBench: A Multi-Platform Benchmark for Kernel Generation [17.461533973039064]
MultiKernelBench is a benchmark for the generation of deep learning kernels using large language models (LLMs). It spans 285 tasks across 14 well-defined kernel categories and supports three major hardware platforms. We show significant variation in task difficulty, poor generalization to platforms with less training exposure, and the effectiveness of targeted prompting strategies.
arXiv Detail & Related papers (2025-07-20T00:58:33Z)
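As a rough picture of what such a benchmark harness does, the sketch below times an LLM-generated candidate kernel against a reference and checks correctness; the SAXPY task and the scoring fields are invented for illustration, not taken from MultiKernelBench.

```python
# Minimal sketch of a kernel-generation benchmark harness: run a
# candidate against a reference implementation and report correctness
# and runtime. Task and tolerances are illustrative only.
import time

def reference_saxpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

def candidate_saxpy(a, x, y):  # stand-in for an LLM-generated kernel
    return [a * xi + yi for xi, yi in zip(x, y)]

def evaluate(candidate, reference, args, tol=1e-6):
    expected = reference(*args)
    start = time.perf_counter()
    got = candidate(*args)
    elapsed = time.perf_counter() - start
    correct = all(abs(g - e) <= tol for g, e in zip(got, expected))
    return {"correct": correct, "seconds": elapsed}

args = (2.0, [1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
print(evaluate(candidate_saxpy, reference_saxpy, args))
```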
- SysLLMatic: Large Language Models are Software System Optimizers [2.4416377721219145]
We present SysLLMatic, a system that integrates Large Language Models with profiling-guided feedback and system performance insights. We evaluate it on three benchmark suites: HumanEval_Bench (competitive programming in C++), SciMark2 (scientific kernels in Java), and DaCapoBench (large-scale software systems in Java).
arXiv Detail & Related papers (2025-06-02T01:57:21Z)
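The profiling-guided loop can be sketched in a few lines. Everything here is hypothetical scaffolding rather than SysLLMatic's code; in particular, `llm_rewrite` is a stub standing in for a real model call.

```python
# Hedged sketch of a profiling-guided optimization loop: measure, feed
# the measurement back, and keep only rewrites that are actually faster.
import time

def profile(fn, arg, repeats=5):
    start = time.perf_counter()
    for _ in range(repeats):
        fn(arg)
    return (time.perf_counter() - start) / repeats

def llm_rewrite(source: str, profile_report: str) -> str:
    # Placeholder: a real system would prompt an LLM with the source and
    # the profiling feedback, then parse an optimized rewrite from it.
    return source

def optimize(source, build, test_input, iterations=3):
    best_src, best_time = source, profile(build(source), test_input)
    for _ in range(iterations):
        candidate = llm_rewrite(best_src, f"avg runtime: {best_time:.6f}s")
        t = profile(build(candidate), test_input)
        if t < best_time:           # keep only measured improvements
            best_src, best_time = candidate, t
    return best_src, best_time

src = "lambda n: sum(range(n))"
print(optimize(src, lambda s: eval(s), 10_000)[1])
```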
- Pangu Embedded: An Efficient Dual-system LLM Reasoner with Metacognition [95.54406667705999]
Pangu Embedded is an efficient Large Language Model (LLM) reasoner developed on Ascend Neural Processing Units (NPUs). It addresses the significant computational costs and inference latency challenges prevalent in existing reasoning-optimized LLMs. It delivers rapid responses and state-of-the-art reasoning quality within a single, unified model architecture.
arXiv Detail & Related papers (2025-05-28T14:03:02Z)
- CrashFixer: A crash resolution agent for the Linux kernel [58.152358195983155]
This work builds upon kGym, which shares a benchmark for system-level Linux kernel bugs and a platform to run experiments on the Linux kernel.
This paper introduces CrashFixer, the first LLM-based software repair agent that is applicable to Linux kernel bugs.
arXiv Detail & Related papers (2025-04-29T04:18:51Z)
- Efficient Multi-Instance Generation with Janus-Pro-Dirven Prompt Parsing [53.295515505026096]
Janus-Pro-driven Prompt Parsing is a prompt-parsing module that bridges text understanding and layout generation. MIGLoRA is a parameter-efficient plug-in integrating Low-Rank Adaptation into UNet (SD1.5) and DiT (SD3) backbones. The proposed method achieves state-of-the-art performance on COCO and LVIS benchmarks while maintaining parameter efficiency.
arXiv Detail & Related papers (2025-03-27T00:59:14Z)
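MIGLoRA's plug-in builds on standard Low-Rank Adaptation. Below is a generic LoRA layer sketch in PyTorch, not MIGLoRA's actual module: the frozen base weight gets a trainable rank-r update scaled by alpha/r.

```python
# Generic LoRA sketch: y = W x + (alpha/r) * B A x, with W frozen and
# only the low-rank factors A, B trainable.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the backbone weight
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r               # B starts at zero: no-op init

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(64, 64))
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```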
- SEKI: Self-Evolution and Knowledge Inspiration based Neural Architecture Search via Large Language Models [11.670056503731905]
We introduce SEKI, a novel large language model (LLM)-based neural architecture search (NAS) method. Inspired by the chain-of-thought (CoT) paradigm in modern LLMs, SEKI operates in two key stages: self-evolution and knowledge distillation.
arXiv Detail & Related papers (2025-02-27T09:17:49Z)
- Tackling the Dynamicity in a Production LLM Serving System with SOTA Optimizations via Hybrid Prefill/Decode/Verify Scheduling on Efficient Meta-kernels [12.77187564450236]
We introduce XY-Serve, a versatile, Ascend-native, end-to-end production large language model (LLM) serving system. The core idea is an abstraction mechanism that smooths out the workload variability by decomposing computations into fine-grained meta primitives. For GEMM, we introduce a virtual padding scheme that adapts to dynamic shape changes while using highly efficient GEMM primitives with assorted fixed tile sizes.
arXiv Detail & Related papers (2024-12-24T02:27:44Z)
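The virtual padding idea can be illustrated directly: round dynamic GEMM shapes up to a fixed tile size so a static-tile kernel applies, then slice the result back. The tile size below is arbitrary and `np.matmul` stands in for the tuned primitive.

```python
# Sketch of virtual padding for dynamic-shape GEMM: pad inputs to tile
# multiples (zero padding contributes nothing to the product), run a
# fixed-tile kernel, and slice the valid region out of the result.
import numpy as np

TILE = 16  # hypothetical fixed tile edge supported by the fast kernel

def pad_to_tile(x: np.ndarray) -> np.ndarray:
    pm = (-x.shape[0]) % TILE
    pn = (-x.shape[1]) % TILE
    return np.pad(x, ((0, pm), (0, pn)))

def tiled_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    m, n = a.shape[0], b.shape[1]
    ap, bp = pad_to_tile(a), pad_to_tile(b)
    # A real system would dispatch fixed-tile GEMM primitives here;
    # np.matmul stands in for that kernel.
    return (ap @ bp)[:m, :n]

a, b = np.random.rand(13, 21), np.random.rand(21, 10)
assert np.allclose(tiled_matmul(a, b), a @ b)
```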
- HOBBIT: A Mixed Precision Expert Offloading System for Fast MoE Inference [54.40808356999408]
We present HOBBIT, a mixed precision expert offloading system to enable flexible and efficient MoE inference.
Our key insight is that dynamically replacing less critical cache-miss experts with low precision versions can substantially reduce expert-loading latency.
HOBBIT achieves up to a 9.93x speedup in decoding compared to state-of-the-art MoE offloading systems.
arXiv Detail & Related papers (2024-11-03T04:25:46Z)
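The key insight quoted above suggests a simple cache policy: serve a low-precision copy on a miss rather than stalling on the full-precision load. The sketch below is an invented toy, not HOBBIT's system; float16 stands in for its low-precision format.

```python
# Toy mixed-precision expert cache: fp32 weights for cached (hot)
# experts, a cheap fp16 fallback when the cache is full and misses.
import numpy as np

class ExpertCache:
    def __init__(self, weights_fp32, capacity=2):
        self.full = weights_fp32                       # full-precision store
        self.low = {k: v.astype(np.float16) for k, v in weights_fp32.items()}
        self.cache, self.capacity = {}, capacity

    def get(self, expert_id):
        if expert_id in self.cache:                    # hit: fp32 weights
            return self.cache[expert_id]
        if len(self.cache) < self.capacity:            # room left: load fp32
            self.cache[expert_id] = self.full[expert_id]
            return self.cache[expert_id]
        # miss with a full cache: serve the small low-precision copy
        # instead of paying the full-precision loading latency
        return self.low[expert_id]

experts = {i: np.random.rand(4, 4).astype(np.float32) for i in range(3)}
cache = ExpertCache(experts)
for i in range(3):
    print(i, cache.get(i).dtype)   # float32, float32, float16
```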
- Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design [59.00758127310582]
We propose a novel framework, Read-ME, that transforms pre-trained dense LLMs into smaller MoE models.
Our approach employs activation sparsity to extract experts.
Read-ME outperforms other popular open-source dense models of similar scales.
arXiv Detail & Related papers (2024-10-24T19:48:51Z)
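One way to picture expert extraction from activation sparsity is to cluster FFN neurons by their activation patterns and treat each cluster as an expert. The sketch below is a loose illustration with invented sizes and a plain k-means criterion, not Read-ME's actual procedure.

```python
# Loose illustration: group the neurons of a dense FFN into "experts"
# by clustering their (sparse, post-ReLU) activation patterns.
import numpy as np

rng = np.random.default_rng(0)
acts = np.maximum(rng.normal(size=(256, 64)), 0.0)  # 256 tokens x 64 neurons

def kmeans(x, k=4, iters=20):
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            ((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return labels

# Cluster neurons by their activation pattern across tokens; each
# cluster becomes one extracted "expert" (a slice of the FFN weights).
expert_of_neuron = kmeans(acts.T, k=4)
for j in range(4):
    print(f"expert {j}: {np.flatnonzero(expert_of_neuron == j).size} neurons")
```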
- Liger Kernel: Efficient Triton Kernels for LLM Training [6.373771349397682]
Training Large Language Models (LLMs) efficiently at scale presents a formidable challenge, driven by their ever-increasing computational demands. We introduce Liger Kernel, an open-sourced set of Triton kernels developed specifically for LLM training. With kernel optimization techniques like kernel operation fusing and input chunking, our kernels achieve on average a 20% increase in training throughput and a 60% reduction in GPU memory usage.
arXiv Detail & Related papers (2024-10-14T18:17:01Z)
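Input chunking, one of the techniques named above, can be shown in plain PyTorch: compute the vocabulary projection and cross-entropy per chunk of rows so the full token-by-vocab logits tensor is never materialized. This illustrates the idea only; it is not the Triton kernels themselves.

```python
# Chunked vocabulary projection + cross-entropy: process `chunk` rows
# at a time so only one small logits slab exists at any moment.
import torch
import torch.nn.functional as F

def chunked_ce(hidden, weight, targets, chunk=128):
    total, n = hidden.new_zeros(()), hidden.shape[0]
    for i in range(0, n, chunk):
        logits = hidden[i:i + chunk] @ weight.T   # one chunk of logits only
        total = total + F.cross_entropy(
            logits, targets[i:i + chunk], reduction="sum")
    return total / n

hidden = torch.randn(1000, 64)
weight = torch.randn(32000, 64)           # vocab x hidden projection
targets = torch.randint(0, 32000, (1000,))
full = F.cross_entropy(hidden @ weight.T, targets)
assert torch.allclose(chunked_ce(hidden, weight, targets), full, atol=1e-3)
```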
- Optimal Kernel Tuning Parameter Prediction using Deep Sequence Models [0.44998333629984877]
We propose a methodology that uses deep sequence-to-sequence models to predict the optimal tuning parameters governing compute kernels.
The proposed algorithm can achieve more than 90% accuracy on various convolutional kernels in MIOpen, the AMD machine learning primitives library.
arXiv Detail & Related papers (2024-04-15T22:25:54Z)
- KernelGPT: Enhanced Kernel Fuzzing via Large Language Models [8.77369393651381]
We propose KernelGPT, the first approach to automatically synthesizing syscall specifications via Large Language Models (LLMs).
Our results demonstrate that KernelGPT can generate more new and valid specifications and achieve higher coverage than state-of-the-art techniques.
arXiv Detail & Related papers (2023-12-31T18:47:33Z)
- Harnessing Deep Learning and HPC Kernels via High-Level Loop and Tensor Abstractions on CPU Architectures [67.47328776279204]
This work introduces a framework to develop efficient, portable Deep Learning and High Performance Computing kernels.
We decompose kernel development into two steps: 1) Expressing the computational core using Tensor Processing Primitives (TPPs) and 2) Expressing the logical loops around TPPs in a high-level, declarative fashion.
We demonstrate the efficacy of our approach using standalone kernels and end-to-end workloads that outperform state-of-the-art implementations on diverse CPU platforms.
arXiv Detail & Related papers (2023-04-25T05:04:44Z)
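The two-step structure described above (a micro-kernel core plus declarative outer loops) can be mimicked in a few lines. The loop-spec format below is invented, and a NumPy block GEMM stands in for a tuned TPP.

```python
# Sketch: a TPP-like micro-kernel (block GEMM stub) plus outer loops
# generated from a declarative (extent, step) loop spec.
import numpy as np
from itertools import product

def tpp_gemm_block(a_blk, b_blk, c_blk):
    c_blk += a_blk @ b_blk          # stand-in for a tuned microkernel

def run_loops(spec, body):
    ranges = [range(0, extent, step) for extent, step in spec]
    for idx in product(*ranges):    # the "declarative" loop nest
        body(*idx)

M, N, K, BM, BN, BK = 64, 64, 64, 16, 16, 16
a, b, c = np.random.rand(M, K), np.random.rand(K, N), np.zeros((M, N))

run_loops(
    [(M, BM), (N, BN), (K, BK)],
    lambda i, j, k: tpp_gemm_block(
        a[i:i+BM, k:k+BK], b[k:k+BK, j:j+BN], c[i:i+BM, j:j+BN]),
)
assert np.allclose(c, a @ b)
```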
- Empowering parameter-efficient transfer learning by recognizing the kernel structure in self-attention [53.72897232951918]
We propose adapters that utilize the kernel structure in self-attention to guide the assignment of tunable parameters.
Our results show that the proposed adapters can match or improve upon the strong performance of existing baselines.
arXiv Detail & Related papers (2022-05-07T20:52:54Z)
- Kernel Identification Through Transformers [54.3795894579111]
Kernel selection plays a central role in determining the performance of Gaussian Process (GP) models.
This work addresses the challenge of constructing custom kernel functions for high-dimensional GP regression models.
We introduce a novel approach named KITT: Kernel Identification Through Transformers.
arXiv Detail & Related papers (2021-06-15T14:32:38Z)
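For background, the classical procedure KITT replaces is scoring candidate GP kernels by log marginal likelihood and keeping the best. The NumPy sketch below compares an RBF and a linear kernel on toy data; it illustrates the selection problem, not KITT's transformer.

```python
# Classical GP kernel selection: compute the log marginal likelihood
# log p(t|X) = -0.5 t^T K^-1 t - 0.5 log|K| - (n/2) log 2*pi per kernel.
import numpy as np

def rbf(x, y, ls=1.0):
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ls ** 2)

def linear(x, y):
    return x[:, None] * y[None, :]

def log_marginal_likelihood(kern, x, t, noise=0.1):
    k = kern(x, x) + noise ** 2 * np.eye(len(x))
    l = np.linalg.cholesky(k)
    alpha = np.linalg.solve(l.T, np.linalg.solve(l, t))
    return (-0.5 * t @ alpha - np.log(np.diag(l)).sum()
            - 0.5 * len(x) * np.log(2 * np.pi))

x = np.linspace(0, 5, 40)
t = np.sin(x) + 0.1 * np.random.default_rng(0).normal(size=40)
scores = {name: log_marginal_likelihood(k, x, t)
          for name, k in [("rbf", rbf), ("linear", linear)]}
print(max(scores, key=scores.get), scores)   # rbf should win on sin data
```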
- Flow-based Kernel Prior with Application to Blind Super-Resolution [143.21527713002354]
Kernel estimation is generally one of the key problems for blind image super-resolution (SR).
This paper proposes a normalizing flow-based kernel prior (FKP) for kernel modeling.
Experiments on synthetic and real-world images demonstrate that the proposed FKP can significantly improve the kernel estimation accuracy.
arXiv Detail & Related papers (2021-03-29T22:37:06Z)
- Performance portability through machine learning guided kernel selection in SYCL libraries [0.0]
General purpose compute libraries must be able to cater to all inputs and parameters provided by a user. Machine learning methods can be used to mitigate these problems; tuning the process for new hardware or problems then requires no developer effort or expertise.
arXiv Detail & Related papers (2020-08-30T11:44:37Z)
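A toy version of ML-guided selection: choose a kernel variant for a new input size via nearest-neighbour lookup over previously benchmarked sizes, so no per-device hand-tuning is needed. The variant names and benchmark table below are invented.

```python
# Toy ML-guided kernel selection: nearest-neighbour over a (size ->
# fastest variant) table produced by earlier offline benchmarking.
BENCH = {64: "vec4", 256: "vec4", 1024: "tiled", 4096: "tiled_local"}

def pick_variant(n: int) -> str:
    nearest = min(BENCH, key=lambda m: abs(m - n))
    return BENCH[nearest]

for n in (100, 2000, 9000):
    print(n, "->", pick_variant(n))
```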
- Sapphire: Automatic Configuration Recommendation for Distributed Storage Systems [11.713288567936875]
Tuning parameters can provide significant performance gains but is a difficult task requiring profound experience and expertise.
We propose an automatic simulation-based approach, Sapphire, to recommend optimal configurations.
Results show that Sapphire significantly boosts Ceph performance, by up to 2.2x compared to the default configuration.
arXiv Detail & Related papers (2020-07-07T06:17:07Z)
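The simulation-based recommendation loop reduces to: sample configurations, score each with a simulator, keep the best. In the sketch below the parameter space and the analytic "simulator" are made up; a real system would query a calibrated storage simulator instead.

```python
# Sketch of simulation-based configuration recommendation: random
# search over a config space scored by a (hypothetical) simulator.
import random

SPACE = {
    "osd_op_threads": [2, 4, 8, 16],
    "journal_size_mb": [1024, 5120, 10240],
}

def simulate_throughput(cfg):   # invented stand-in performance model
    return (cfg["osd_op_threads"] * 10
            - abs(cfg["journal_size_mb"] - 5120) / 1024)

def recommend(samples=50, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(samples):
        cfg = {k: rng.choice(v) for k, v in SPACE.items()}
        score = simulate_throughput(cfg)
        if score > best_score:
            best, best_score = cfg, score
    return best

print(recommend())
```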