Large AI Models for Wireless Physical Layer
- URL: http://arxiv.org/abs/2508.02314v1
- Date: Mon, 04 Aug 2025 11:30:33 GMT
- Title: Large AI Models for Wireless Physical Layer
- Authors: Jiajia Guo, Yiming Cui, Shi Jin, Jun Zhang
- Abstract summary: Large artificial intelligence models (LAMs) are transforming wireless physical layer technologies through their robust generalization, multitask processing, and multimodal capabilities. This article reviews recent advancements in LAM applications for physical layer communications, addressing limitations of conventional AI-based approaches.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large artificial intelligence models (LAMs) are transforming wireless physical layer technologies through their robust generalization, multitask processing, and multimodal capabilities. This article reviews recent advancements in LAM applications for physical layer communications, addressing limitations of conventional AI-based approaches. LAM applications are classified into two strategies: leveraging pre-trained LAMs and developing native LAMs designed specifically for physical layer tasks. The motivations and key frameworks of these approaches are comprehensively examined through multiple use cases. Both strategies significantly improve performance and adaptability across diverse wireless scenarios. Future research directions, including efficient architectures, interpretability, standardized datasets, and collaboration between large and small models, are proposed to advance LAM-based physical layer solutions for next-generation communication systems.
Related papers
- Large AI Model-Enabled Secure Communications in Low-Altitude Wireless Networks: Concepts, Perspectives and Case Study [92.15255222408636]
Low-altitude wireless networks (LAWNs) have the potential to revolutionize communications by supporting a range of applications. We investigate several large artificial intelligence model (LAM)-enabled solutions for secure communications in LAWNs. To demonstrate the practical benefits of LAMs for secure communications in LAWNs, we propose a novel LAM-based optimization framework.
arXiv Detail & Related papers (2025-08-01T01:53:58Z) - From Large AI Models to Agentic AI: A Tutorial on Future Intelligent Communications [57.38526350775472]
This tutorial provides a systematic introduction to the principles, design, and applications of Large Artificial Intelligence Models (LAMs) and Agentic AI technologies. We outline the background of 6G communications, review the technological evolution from LAMs to Agentic AI, and clarify the tutorial's motivation and main contributions.
arXiv Detail & Related papers (2025-05-28T12:54:07Z) - Towards a Foundation Model for Communication Systems [16.85529517183343]
In this work, we take a step toward a foundation model for communication data. We propose methodologies to address key challenges, including tokenization, positional embedding, multimodality, variable feature sizes, and normalization. We empirically demonstrate that such a model can successfully estimate multiple features, including transmission rank, selected precoder, Doppler spread, and delay profile.
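The tokenization and positional-embedding challenges above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the paper's actual pipeline): scalar channel measurements are z-score normalized and combined with a standard sinusoidal positional embedding to form token vectors.

```python
import math

def normalize(values):
    """Per-feature z-score normalization (a common choice; the paper's
    exact normalization scheme is not specified here)."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(var) or 1.0  # guard against constant features
    return [(v - mean) / std for v in values]

def positional_encoding(pos, dim):
    """Sinusoidal positional embedding, as popularized by Transformers."""
    return [
        math.sin(pos / 10000 ** (i / dim)) if i % 2 == 0
        else math.cos(pos / 10000 ** ((i - 1) / dim))
        for i in range(dim)
    ]

def tokenize(measurements, dim=8):
    """Turn a sequence of scalar measurements into token vectors:
    normalized value added element-wise to its positional embedding."""
    tokens = []
    for pos, v in enumerate(normalize(measurements)):
        pe = positional_encoding(pos, dim)
        tokens.append([v + p for p in pe])
    return tokens

# Toy Doppler-spread samples (illustrative values only).
doppler_samples = [12.0, 15.5, 9.8, 30.2, 22.1]
tokens = tokenize(doppler_samples)
print(len(tokens), len(tokens[0]))  # 5 tokens of dimension 8
```

A real system would also need to handle multimodal inputs and variable feature sizes, e.g. by padding or by per-modality projection layers.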
arXiv Detail & Related papers (2025-05-20T16:52:11Z) - Edge Large AI Models: Revolutionizing 6G Networks [38.62140334987825]
Large artificial intelligence models (LAMs) possess human-like abilities to solve a wide range of real-world problems. Edge LAMs emerge as an enabling technology to empower the delivery of various real-time intelligent services in 6G.
arXiv Detail & Related papers (2025-05-01T05:44:00Z) - Multi-Task Semantic Communications via Large Models [42.42961176008125]
We propose a LAM-based multi-task SemCom architecture, which includes an adaptive model compression strategy and a federated split fine-tuning approach. A retrieval-augmented generation scheme is implemented to synthesize the most recent local and global knowledge bases.
arXiv Detail & Related papers (2025-03-28T00:57:34Z) - Inference Optimization of Foundation Models on AI Accelerators [68.24450520773688]
Powerful foundation models, including large language models (LLMs) built on Transformer architectures, have ushered in a new era of Generative AI.
As the number of model parameters reaches hundreds of billions, their deployment incurs prohibitive inference costs and high latency in real-world scenarios.
This tutorial offers a comprehensive discussion on complementary inference optimization techniques using AI accelerators.
arXiv Detail & Related papers (2024-07-12T09:24:34Z) - Generative AI Agents with Large Language Model for Satellite Networks via a Mixture of Experts Transmission [74.10928850232717]
This paper develops generative artificial intelligence (AI) agents for model formulation and then applies a mixture of experts (MoE) to design transmission strategies.
Specifically, we leverage large language models (LLMs) to build an interactive modeling paradigm.
We propose an MoE-proximal policy optimization (PPO) approach to solve the formulated problem.
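The mixture-of-experts idea behind the MoE-PPO approach can be illustrated with a minimal gating sketch. Everything below is an illustrative stand-in: the gate is a hand-rolled linear scorer and the experts are fixed functions, whereas in MoE-PPO both would be trained networks.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_action(state, experts, gate_weights):
    """Mixture-of-experts: the gate scores each expert on the state, and
    the final continuous action is the gate-weighted mix of the experts'
    proposals (a convex combination)."""
    scores = [sum(w * s for w, s in zip(row, state)) for row in gate_weights]
    gates = softmax(scores)
    proposals = [expert(state) for expert in experts]
    return sum(g * a for g, a in zip(gates, proposals)), gates

# Two toy experts: a conservative and an aggressive power-allocation rule.
experts = [lambda s: 0.2 * sum(s), lambda s: 0.9 * sum(s)]
gate_weights = [[1.0, 0.0], [0.0, 1.0]]  # hypothetical learned gate rows
action, gates = moe_action([0.5, 2.0], experts, gate_weights)
print(round(action, 3), [round(g, 3) for g in gates])
```

Because the gate output is a softmax, the mixed action always lies between the most conservative and most aggressive expert proposals.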
arXiv Detail & Related papers (2024-04-14T03:44:54Z) - Multi-Agent Reinforcement Learning for Power Control in Wireless Networks via Adaptive Graphs [1.1861167902268832]
Multi-agent deep reinforcement learning (MADRL) has emerged as a promising method to address a wide range of complex optimization problems like power control.
We present the use of graphs as communication-inducing structures among distributed agents as an effective means to mitigate these challenges.
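The graph-as-communication-structure idea can be sketched with one round of message passing: each agent observes its neighbors' transmit powers over the graph edges and adjusts its own. The update rule below is a hypothetical heuristic for illustration, not the paper's learned MADRL policy.

```python
def message_passing(powers, adjacency, target_load=0.4):
    """One round of graph message passing for power control: each agent
    averages its neighbors' transmit powers and nudges its own power
    toward a target neighborhood load (illustrative rule only)."""
    new_powers = []
    for i, p in enumerate(powers):
        neighbors = [j for j, a in enumerate(adjacency[i]) if a and j != i]
        if not neighbors:
            new_powers.append(p)  # isolated agent keeps its power
            continue
        neighbor_load = sum(powers[j] for j in neighbors) / len(neighbors)
        # Back off when the neighborhood is loud, ramp up when quiet;
        # clamp to the valid [0, 1] normalized power range.
        new_powers.append(max(0.0, min(1.0,
            p + 0.1 * (target_load - neighbor_load))))
    return new_powers

# Three agents; agent 0 is connected to agents 1 and 2.
adjacency = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
powers = [0.8, 0.2, 0.5]
updated = message_passing(powers, adjacency)
print([round(p, 2) for p in updated])
```

Stacking several such rounds with learned (rather than fixed) update functions is essentially what a graph neural network policy does in this setting.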
arXiv Detail & Related papers (2023-11-27T14:25:40Z) - Model-Based Machine Learning for Communications [110.47840878388453]
We review existing strategies for combining model-based algorithms and machine learning from a high-level perspective.
We focus on symbol detection, which is one of the fundamental tasks of communication receivers.
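The model-based flavor of symbol detection can be sketched as follows: the known channel model is inverted analytically, a correction term that a small learned network could supply is applied, and the result is sliced to the nearest constellation point. The QPSK mapping and the fixed `bias` placeholder below are illustrative assumptions, not the paper's specific detector.

```python
# Gray-mapped QPSK constellation (an assumed mapping for illustration).
CONSTELLATION = {
    (0, 0): complex(1, 1),
    (0, 1): complex(-1, 1),
    (1, 1): complex(-1, -1),
    (1, 0): complex(1, -1),
}

def detect(received, channel_gain, bias=0j):
    """Hybrid model-based detection: invert the known channel (the
    'model' part), subtract a correction a learned module could provide
    (here a fixed placeholder), then slice to the nearest point."""
    equalized = received / channel_gain - bias
    return min(CONSTELLATION.items(),
               key=lambda kv: abs(kv[1] - equalized))[0]

h = complex(0.8, 0.3)               # known channel gain
tx = CONSTELLATION[(0, 1)]          # transmitted symbol
rx = h * tx + complex(0.05, -0.1)   # received symbol with mild noise
bits = detect(rx, h)
print(bits)  # -> (0, 1)
```

The appeal of this hybrid structure is that the learned component only has to correct model mismatch, so it can stay small and data-efficient compared with an end-to-end neural detector.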
arXiv Detail & Related papers (2021-01-12T19:55:34Z)