LLM-Based Emulation of the Radio Resource Control Layer: Towards AI-Native RAN Protocols
- URL: http://arxiv.org/abs/2505.16821v3
- Date: Fri, 07 Nov 2025 09:20:34 GMT
- Title: LLM-Based Emulation of the Radio Resource Control Layer: Towards AI-Native RAN Protocols
- Authors: Ziming Liu, Bryan Liu, Alvaro Valcarce, Xiaoli Chu
- Abstract summary: Large AI Models (LAMs) are key enablers of the AI-Native Air Interface (AI-AI). This paper presents the first standards-compliant emulation of the Radio Resource Control layer using a decoder-only LAM. Results demonstrate that LAMs, when augmented with protocol-aware reasoning, can directly orchestrate control-plane procedures.
- Score: 28.04609776570199
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Integrating Large AI Models (LAMs) into 6G mobile networks is a key enabler of the AI-Native Air Interface (AI-AI), where protocol intelligence must scale beyond handcrafted logic. This paper presents, to our knowledge, the first standards-compliant emulation of the Radio Resource Control (RRC) layer using a decoder-only LAM (LLAMA-class) fine-tuned with Low-Rank Adaptation (LoRA) on a multi-vendor corpus of real-world traces spanning both 5G and 4G systems. We treat RRC as a domain-specific language and construct a segmentation-safe question-answer (QA) dataset that preserves Abstract Syntax Notation (ASN.1) structure through linearization prior to Byte Pair Encoding (BPE) tokenization. The proposed approach combines parameter-efficient adaptation with schema-bounded prompting to ensure syntactic and procedural fidelity. Evaluation introduces a standards-aware triad (ASN.1 conformance, field-level coverage analysis, and uplink-to-downlink state-machine checks) alongside semantic similarity and latency profiling across 120 configurations. On 30k 5G request-response pairs plus an additional 4.8k QA turns from 4G sessions, our 8B model achieves a median cosine similarity of 0.97, a 61% relative gain over a zero-shot baseline, while sustaining high conformance rates. These results demonstrate that LAMs, when augmented with protocol-aware reasoning, can directly orchestrate control-plane procedures, laying the foundation for the future Artificial Intelligence (AI)-native Radio Access Network (RAN).
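The abstract's pipeline can be illustrated with a minimal, self-contained sketch (not the authors' code): a nested, ASN.1-style structure is linearized into a single bracketed string so that BPE segmentation cannot split a message mid-structure, a QA pair is built from it, and a candidate reply is scored against the reference with cosine similarity. The field names and the token-count similarity below are illustrative stand-ins; the paper scores embedding-level cosine similarity.

```python
# Illustrative sketch (not the authors' implementation): linearize a
# decoded ASN.1-like structure into one flat, bracket-preserving string,
# build a QA training pair from it, and score a candidate answer against
# the reference with cosine similarity over whitespace-token counts.
from collections import Counter
import math

def linearize(node, key="rrc"):
    """Flatten a nested dict into a single structure-preserving line."""
    if isinstance(node, dict):
        inner = " ".join(linearize(v, k) for k, v in node.items())
        return f"{key} {{ {inner} }}"
    return f"{key}={node}"

def cosine(a, b):
    """Cosine similarity between two whitespace-token count vectors."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical fragment of an RRCSetup message (field names illustrative).
msg = {"rrcSetup": {"radioBearerConfig": {"srb-ToAddList": "SRB1"},
                    "masterCellGroup": {"spCellConfig": "cfg0"}}}

reference = linearize(msg)
qa_pair = {"question": "Emit the RRCSetup message for this request.",
           "answer": reference}

# A candidate that drops a field scores strictly below a verbatim match.
partial = linearize({"rrcSetup": {"radioBearerConfig":
                                  {"srb-ToAddList": "SRB1"}}})
print(cosine(reference, reference))  # exact match scores 1.0
print(cosine(reference, partial) < 1.0)
```

Keeping the whole message on one line is the point of the linearization step: segmentation then happens only at message boundaries, never inside a bracketed structure.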
Related papers
- Federated Agentic AI for Wireless Networks: Fundamentals, Approaches, and Applications [60.721304295812445]
Federated learning (FL) has the potential to improve the overall loop of agentic AI. We first summarize the fundamentals of agentic AI and the mainstream FL types, then illustrate how each FL type can strengthen a specific component of agentic AI's loop. We conduct a case study on using federated reinforcement learning (FRL) to improve the performance of agentic AI's action decisions in low-altitude wireless networks.
arXiv Detail & Related papers (2026-03-02T11:26:56Z) - MCP-Diag: A Deterministic, Protocol-Driven Architecture for AI-Native Network Diagnostics [0.08921166277011344]
This paper introduces MCP-Diag, a hybrid neuro-symbolic architecture built upon the Model Context Protocol (MCP). We propose a deterministic translation layer that converts raw output from canonical utilities (dig, ping, traceroute) into rigorous schemas before AI ingestion. We also introduce a mandatory "Elicitation Loop" that enforces Human-in-the-Loop (HITL) authorization at the protocol level.
arXiv Detail & Related papers (2026-01-30T06:49:25Z) - LiQSS: Post-Transformer Linear Quantum-Inspired State-Space Tensor Networks for Real-Time 6G [85.58816960936069]
Proactive and agentic control in Sixth-Generation (6G) Open Radio Access Networks (O-RAN) requires control-grade prediction under stringent Near-Real-Time (Near-RT) latency and computational constraints. This paper investigates a post-Transformer paradigm for efficient radio telemetry forecasting. We propose a quantum-inspired state-space tensor network that replaces self-attention with stable structured state-space dynamics kernels.
arXiv Detail & Related papers (2026-01-18T12:08:38Z) - NNGPT: Rethinking AutoML with Large Language Models [36.90850535125572]
NNGPT is an open-source framework that turns a large language model (LLM) into a self-improving AutoML engine for neural network development. It integrates within one unified workflow five synergistic LLM-based pipelines: zero-shot architecture synthesis, hyperparameter optimization, code-aware accuracy/early-stop prediction, and reinforcement learning. The system has already generated over 5K validated models, proving NNGPT as an autonomous AutoML engine.
arXiv Detail & Related papers (2025-11-25T14:10:44Z) - Leveraging AI Agents for Autonomous Networks: A Reference Architecture and Empirical Studies [18.534083337294188]
This work bridges the gap between architectural theory and operational reality by implementing Joseph Sifakis's AN Agent reference architecture in a functional cognitive system. We demonstrate sub-10 ms real-time control in 5G NR sub-6 GHz while achieving 6% higher downlink throughput than Outer Loop Link Adaptation (OLLA) algorithms. These improvements confirm the architecture's viability in overcoming traditional autonomy barriers and advancing critical L4-enabling capabilities toward next-generation objectives.
arXiv Detail & Related papers (2025-09-10T06:24:57Z) - AI/ML Life Cycle Management for Interoperable AI Native RAN [50.61227317567369]
Artificial intelligence (AI) and machine learning (ML) models are rapidly permeating the 5G Radio Access Network (RAN). These developments lay the foundation for AI-native transceivers as a key enabler for 6G.
arXiv Detail & Related papers (2025-07-24T16:04:59Z) - Contrastive Language-Image Pre-Training Model based Semantic Communication Performance Optimization [23.352326993320958]
A novel contrastive language-image pre-training (CLIP) model based semantic communication framework is designed. Compared to standard neural network (e.g., convolutional neural network) based semantic encoders and decoders that require joint training over a common dataset, our CLIP model based method does not require any training procedures. We investigate the deployment of the CLIP model based semantic framework over a noisy wireless network.
arXiv Detail & Related papers (2025-07-10T01:48:56Z) - ORAN-GUIDE: RAG-Driven Prompt Learning for LLM-Augmented Reinforcement Learning in O-RAN Network Slicing [5.62872273155603]
We propose ORAN-GUIDE, a dual-LLM framework that enhances multi-agent reinforcement learning (MARL) with task-relevant, semantically enriched state representations. Results show that ORAN-GUIDE improves sample efficiency, policy convergence, and performance generalization over standard MARL and single-LLM baselines.
arXiv Detail & Related papers (2025-05-31T14:21:19Z) - QiMeng-CodeV-R1: Reasoning-Enhanced Verilog Generation [51.393569044134445]
Large language models (LLMs) trained via reinforcement learning with verifiable reward (RLVR) have achieved breakthroughs on tasks with explicit, automatable verification. Extending RLVR to the automatic generation of hardware description languages (HDLs) like Verilog from natural-language (NL) specifications, however, poses three key challenges. We introduce CodeV-R1, an RLVR framework for training Verilog generation LLMs.
arXiv Detail & Related papers (2025-05-30T03:51:06Z) - Plug-and-Play AMC: Context Is King in Training-Free, Open-Set Modulation with LLMs [22.990537822143907]
Automatic Modulation Classification (AMC) is critical for efficient spectrum management and robust wireless communications. We propose an innovative framework that integrates traditional signal processing techniques with Large Language Models. This work lays the foundation for scalable, interpretable, and versatile signal classification systems in next-generation wireless networks.
arXiv Detail & Related papers (2025-05-06T02:07:47Z) - FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression [55.992528247880685]
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs)
We show that our system and method can achieve 1.45 - 9.39x speedup compared to baseline methods while ensuring convergence.
arXiv Detail & Related papers (2024-10-16T16:13:19Z) - Toward 6G Native-AI Network: Foundation Model based Cloud-Edge-End Collaboration Framework [55.73948386625618]
We analyze the challenges of achieving 6G native AI from the perspectives of data, AI models, and operational paradigm. We propose a 6G native AI framework based on foundation models, provide an integration method for the expert knowledge, present the customization for two kinds of PFM, and outline a novel operational paradigm for the native AI framework.
arXiv Detail & Related papers (2023-10-26T15:19:40Z) - Safe and Accelerated Deep Reinforcement Learning-based O-RAN Slicing: A Hybrid Transfer Learning Approach [20.344810727033327]
We propose and design a hybrid TL-aided approach to provide safe and accelerated convergence in DRL-based O-RAN slicing.
The proposed hybrid approach shows improvements of at least 7.7% in the average initial reward value and 20.7% in the percentage of converged scenarios.
arXiv Detail & Related papers (2023-09-13T18:58:34Z) - Deep Learning-Based Rate-Splitting Multiple Access for Reconfigurable Intelligent Surface-Aided Tera-Hertz Massive MIMO [56.022764337221325]
Reconfigurable intelligent surface (RIS) can significantly enhance the service coverage of Tera-Hertz massive multiple-input multiple-output (MIMO) communication systems.
However, obtaining accurate high-dimensional channel state information (CSI) with limited pilot and feedback signaling overhead is challenging.
This paper proposes a deep learning (DL)-based rate-splitting multiple access scheme for RIS-aided Tera-Hertz multi-user multiple access systems.
arXiv Detail & Related papers (2022-09-18T03:07:37Z) - Model-based Deep Learning Receiver Design for Rate-Splitting Multiple Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS) and average training overhead.
Results reveal that the MBDL outperforms by a significant margin the SIC receiver with imperfect CSIR.
arXiv Detail & Related papers (2022-05-02T12:23:55Z) - Speech recognition for air traffic control via feature learning and end-to-end training [8.755785876395363]
We propose a new automatic speech recognition (ASR) system based on feature learning and an end-to-end training procedure for air traffic control (ATC) systems.
The proposed model integrates the feature learning block, recurrent neural network (RNN), and connectionist temporal classification loss.
Thanks to the ability to learn representations from raw waveforms, the proposed model can be optimized in a complete end-to-end manner.
arXiv Detail & Related papers (2021-11-04T06:38:21Z) - ATCSpeechNet: A multilingual end-to-end speech recognition framework for air traffic control systems [15.527854608553824]
ATCSpeechNet is proposed to tackle the issue of translating communication speech into human-readable text in air traffic control systems.
An end-to-end paradigm is developed to convert speech waveform into text directly, without any feature engineering or lexicon.
Experimental results on the ATCSpeech corpus demonstrate that the proposed approach achieves a high performance with a very small labeled corpus.
arXiv Detail & Related papers (2021-02-17T02:27:09Z) - Path Design and Resource Management for NOMA enhanced Indoor Intelligent Robots [58.980293789967575]
A communication-enabled indoor intelligent robot (IR) service framework is proposed.
A Lego modeling method is proposed, which can deterministically describe the indoor layout and channel state.
The investigated radio map is invoked as a virtual environment to train the reinforcement learning agent.
arXiv Detail & Related papers (2020-11-23T21:45:01Z) - A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z) - Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z) - Taurus: A Data Plane Architecture for Per-Packet ML [59.1343317736213]
We present the design and implementation of Taurus, a data plane for line-rate inference.
Our evaluation of a Taurus switch ASIC shows that Taurus operates orders of magnitude faster than a server-based control plane.
arXiv Detail & Related papers (2020-02-12T09:18:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.