Pushing Large Language Models to the 6G Edge: Vision, Challenges, and
Opportunities
- URL: http://arxiv.org/abs/2309.16739v3
- Date: Mon, 4 Mar 2024 12:17:16 GMT
- Title: Pushing Large Language Models to the 6G Edge: Vision, Challenges, and
Opportunities
- Authors: Zheng Lin, Guanqiao Qu, Qiyuan Chen, Xianhao Chen, Zhe Chen and Kaibin
Huang
- Abstract summary: Large language models (LLMs) are revolutionizing AI development and potentially shaping our future.
The status quo cloud-based deployment faces some critical challenges: 1) long response time; 2) high bandwidth costs; and 3) the violation of data privacy.
6G mobile edge computing (MEC) systems may resolve these pressing issues.
This article serves as a position paper for thoroughly identifying the motivation, challenges, and pathway for empowering LLMs at the 6G edge.
- Score: 32.035405009895264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs), which have shown remarkable capabilities, are
revolutionizing AI development and potentially shaping our future. However,
given their multimodal nature, the status-quo cloud-based deployment faces three
critical challenges: 1) long response time; 2) high bandwidth costs; and 3) the
violation of data privacy. 6G mobile edge computing (MEC) systems may resolve
these pressing issues. In this article, we explore the potential of deploying
LLMs at the 6G edge. We start by introducing killer applications powered by
multimodal LLMs, including robotics and healthcare, to highlight the need for
deploying LLMs in the vicinity of end users. Then, we identify the critical
challenges for LLM deployment at the edge and envision the 6G MEC architecture
for LLMs. Furthermore, we delve into two design aspects, i.e., edge training
and edge inference for LLMs. In both aspects, considering the inherent resource
limitations at the edge, we discuss various cutting-edge techniques, including
split learning/inference, parameter-efficient fine-tuning, quantization, and
parameter-sharing inference, to facilitate the efficient deployment of LLMs.
This article serves as a position paper for thoroughly identifying the
motivation, challenges, and pathway for empowering LLMs at the 6G edge.
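Among the techniques the abstract names for resource-constrained edge deployment, quantization is the most self-contained to illustrate. The following is a minimal sketch of symmetric per-tensor int8 post-training quantization; it is an assumption-laden toy, not the paper's method (production LLM quantizers such as GPTQ or AWQ use per-channel scales and calibration data).

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor int8 quantization: map the largest
    # absolute weight to 127 and round everything else.
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float32 weights for computation.
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# int8 storage is 4x smaller than float32; the rounding error
# per weight is bounded by half the quantization step (scale / 2).
```

The memory saving is what makes this relevant at the edge: an int8 copy of a model occupies a quarter of the float32 footprint, at the cost of a bounded per-weight reconstruction error.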
Related papers
- When Machine Unlearning Meets Retrieval-Augmented Generation (RAG): Keep Secret or Forget Knowledge? [15.318301783084681]
Large language models (LLMs) can inadvertently learn and retain sensitive information and harmful content during training.
We propose a lightweight unlearning framework based on Retrieval-Augmented Generation (RAG) technology.
We evaluate our framework through extensive experiments on both open-source and closed-source models, including ChatGPT, Gemini, Llama-2-7b-chat-hf, and PaLM 2.
arXiv Detail & Related papers (2024-10-20T03:51:01Z) - From LLMs to LLM-based Agents for Software Engineering: A Survey of Current, Challenges and Future [15.568939568441317]
We investigate the current practice and solutions for large language models (LLMs) and LLM-based agents for software engineering.
In particular, we summarise six key topics: requirement engineering, code generation, autonomous decision-making, software design, test generation, and software maintenance.
We discuss the models and benchmarks used, providing a comprehensive analysis of their applications and effectiveness in software engineering.
arXiv Detail & Related papers (2024-08-05T14:01:15Z) - Mobile Edge Intelligence for Large Language Models: A Contemporary Survey [32.22789677882933]
Mobile edge intelligence (MEI) provides AI capabilities at the edge of mobile networks with improved privacy and latency relative to cloud computing.
MEI sits between on-device AI and cloud-based AI, featuring wireless communications and more powerful computing resources than end devices.
This article provides a contemporary survey on harnessing MEI for LLMs.
arXiv Detail & Related papers (2024-07-09T13:47:05Z) - New Solutions on LLM Acceleration, Optimization, and Application [14.995654657013741]
Large Language Models (LLMs) have become extremely potent instruments with exceptional capacities for comprehending and producing human-like text in a range of applications.
However, the increasing size and complexity of LLMs present significant challenges in both training and deployment.
We provide a review of recent advancements and research directions aimed at addressing these challenges.
arXiv Detail & Related papers (2024-06-16T11:56:50Z) - Efficient Prompting for LLM-based Generative Internet of Things [88.84327500311464]
Large language models (LLMs) have demonstrated remarkable capacities on various tasks, and integrating the capacities of LLMs into the Internet of Things (IoT) applications has drawn much research attention recently.
Due to security concerns, many institutions avoid accessing state-of-the-art commercial LLM services, requiring the deployment and utilization of open-source LLMs in a local network setting.
In this study, we propose an LLM-based Generative IoT (GIoT) system deployed in a local network setting.
arXiv Detail & Related papers (2024-06-14T19:24:00Z) - Large Language Models (LLMs) Assisted Wireless Network Deployment in Urban Settings [0.21847754147782888]
Large Language Models (LLMs) have revolutionized language understanding and human-like text generation.
This paper explores new techniques to harness the power of LLMs for 6G (6th Generation) wireless communication technologies.
We introduce a novel Reinforcement Learning (RL) based framework that leverages LLMs for network deployment in wireless communications.
arXiv Detail & Related papers (2024-05-22T05:19:51Z) - Knowledge Fusion of Large Language Models [73.28202188100646]
This paper introduces the notion of knowledge fusion for large language models (LLMs).
We externalize their collective knowledge and unique strengths, thereby elevating the capabilities of the target model beyond those of any individual source LLM.
Our findings confirm that the fusion of LLMs can improve the performance of the target model across a range of capabilities such as reasoning, commonsense, and code generation.
arXiv Detail & Related papers (2024-01-19T05:02:46Z) - Video Understanding with Large Language Models: A Survey [97.29126722004949]
Given the remarkable capabilities of large language models (LLMs) in language and multimodal tasks, this survey provides a detailed overview of recent advancements in video understanding.
The emergent capabilities of Vid-LLMs are surprisingly advanced, particularly their ability for open-ended multi-granularity reasoning.
This survey presents a comprehensive study of the tasks, datasets, benchmarks, and evaluation methodologies for Vid-LLMs.
arXiv Detail & Related papers (2023-12-29T01:56:17Z) - Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly [62.473245910234304]
This paper takes a hardware-centric approach to explore how Large Language Models can be brought to modern edge computing systems.
We provide a micro-level hardware benchmark, compare the model FLOP utilization to a state-of-the-art data center GPU, and study the network utilization in realistic conditions.
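The model-FLOP-utilization (MFU) comparison mentioned above can be sketched with the common approximation that a dense transformer spends about 6N FLOPs per token for a combined forward and backward pass over N parameters. The numbers below are illustrative assumptions, not figures from the paper.

```python
def model_flop_utilization(n_params, tokens_per_sec, peak_flops):
    # MFU: achieved training throughput as a fraction of hardware peak.
    # Uses the 6*N FLOPs-per-token approximation for a dense transformer
    # (forward + backward); this ignores attention FLOPs, so it is a
    # lower-bound style estimate.
    achieved_flops = 6 * n_params * tokens_per_sec
    return achieved_flops / peak_flops

# Hypothetical numbers: a 7B-parameter model training at 500 tokens/s
# on an accelerator with 312 TFLOPS of peak throughput.
mfu = model_flop_utilization(7e9, 500, 312e12)
```

Low MFU on edge hardware relative to a data-center GPU is exactly the gap such a benchmark quantifies.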
arXiv Detail & Related papers (2023-10-04T20:27:20Z) - How Can Recommender Systems Benefit from Large Language Models: A Survey [82.06729592294322]
Large language models (LLMs) have shown impressive general intelligence and human-like capabilities.
We conduct a comprehensive survey on this research direction from the perspective of the whole pipeline in real-world recommender systems.
arXiv Detail & Related papers (2023-06-09T11:31:50Z) - LLM-Pruner: On the Structural Pruning of Large Language Models [65.02607075556742]
Large language models (LLMs) have shown remarkable capabilities in language understanding and generation.
We tackle the compression of LLMs within the bound of two constraints: being task-agnostic and minimizing the reliance on the original training dataset.
Our method, named LLM-Pruner, adopts structural pruning that selectively removes non-critical coupled structures.
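The "coupled structures" idea can be made concrete with a toy example: in an MLP block, each hidden neuron corresponds to one row of the input projection and one column of the output projection, so they must be removed together. This sketch scores neurons by a simple L2-norm heuristic, an assumption on my part; LLM-Pruner itself uses a gradient-based importance criterion.

```python
import numpy as np

def prune_neurons(w_in, w_out, keep_ratio=0.5):
    # Structural pruning of an MLP block: drop whole hidden neurons,
    # i.e. rows of w_in together with the matching columns of w_out,
    # since the two projections are coupled through the hidden dimension.
    importance = np.linalg.norm(w_in, axis=1) * np.linalg.norm(w_out, axis=0)
    k = max(1, int(len(importance) * keep_ratio))
    keep = np.sort(np.argsort(importance)[-k:])  # indices of neurons to keep
    return w_in[keep, :], w_out[:, keep]

w_in = np.random.randn(8, 16)   # hidden(8) x input(16)
w_out = np.random.randn(16, 8)  # output(16) x hidden(8)
p_in, p_out = prune_neurons(w_in, w_out, keep_ratio=0.5)
# Half the hidden neurons removed: shapes become (4, 16) and (16, 4).
```

Because entire rows and columns are removed, the pruned matrices stay dense and need no special sparse kernels, which is what makes structural pruning attractive for edge inference.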
arXiv Detail & Related papers (2023-05-19T12:10:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.