LLM Applications: Current Paradigms and the Next Frontier
- URL: http://arxiv.org/abs/2503.04596v2
- Date: Thu, 09 Oct 2025 02:34:39 GMT
- Title: LLM Applications: Current Paradigms and the Next Frontier
- Authors: Xinyi Hou, Yanjie Zhao, Haoyu Wang
- Abstract summary: The development of large language models (LLMs) has given rise to four major application paradigms. Each has its advantages but also shares common challenges. This paper reviews and analyzes these paradigms, covering architecture design, application ecosystem, research progress, as well as the challenges and open problems they face.
- Score: 8.214897650566494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of large language models (LLMs) has given rise to four major application paradigms: LLM app stores, LLM agents, self-hosted LLM services, and LLM-powered devices. Each has its advantages but also shares common challenges. LLM app stores lower the barrier to development but lead to platform lock-in; LLM agents provide autonomy but lack a unified communication mechanism; self-hosted LLM services enhance control but increase deployment complexity; and LLM-powered devices improve privacy and real-time performance but are limited by hardware. This paper reviews and analyzes these paradigms, covering architecture design, application ecosystem, research progress, as well as the challenges and open problems they face. Based on this, we outline the next frontier of LLM applications, characterizing them through three interconnected layers: infrastructure, protocol, and application. We describe the responsibilities and roles of each layer and demonstrate how this layered design mitigates existing fragmentation and improves security and scalability. Finally, we discuss key future challenges, identify opportunities such as protocol-driven cross-platform collaboration and device integration, and propose a research roadmap for openness, security, and sustainability.
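The protocol layer in this three-layer characterization can be pictured as a platform-neutral message envelope that lets agents hosted on different infrastructures (app store, self-hosted, on-device) interoperate. The sketch below is purely illustrative: the `AgentMessage` schema and its field names are assumptions for demonstration, not a protocol defined in the paper.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentMessage:
    """Hypothetical cross-platform envelope for the protocol layer:
    a platform-neutral schema decouples agent collaboration from any
    single platform's message format."""
    sender: str    # logical agent identity, not a platform-specific handle
    receiver: str
    intent: str    # e.g. "tool_call", "delegate", "result"
    payload: dict

def encode(msg: AgentMessage) -> str:
    """Serialize to a wire format any platform can parse."""
    return json.dumps(asdict(msg))

def decode(raw: str) -> AgentMessage:
    """Reconstruct the envelope on the receiving platform."""
    return AgentMessage(**json.loads(raw))

msg = AgentMessage("planner@device", "solver@cloud", "delegate",
                   {"task": "summarize", "doc_id": "42"})
round_trip = decode(encode(msg))
```

Because the envelope survives a JSON round trip unchanged, either endpoint can be swapped (device, cloud, app-store plugin) without touching the other, which is the fragmentation-mitigation argument the abstract makes.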
Related papers
- A Roadmap for Tamed Interactions with Large Language Models [5.133046277847902]
We are witnessing a bloom of AI-powered software driven by Large Language Models (LLMs). Although the applications of these LLMs are impressive and seemingly countless, their limited robustness hinders adoption. With LSL, we aim to address the limitations above by exploring ways to control LLM outputs, enforce structure in interactions, and integrate these aspects with verification, validation, and explainability.
arXiv Detail & Related papers (2025-10-28T13:46:07Z) - Review of Tools for Zero-Code LLM Based Application Development [0.6978180153516672]
Large Language Models (LLMs) are transforming software creation by enabling zero-code development platforms. Our survey reviews recent platforms that let users build applications without writing code, by leveraging LLMs as the brains of the development process.
arXiv Detail & Related papers (2025-10-22T16:41:16Z) - Large Language Models in the IoT Ecosystem -- A Survey on Security Challenges and Applications [1.1312948048543685]
The Internet of Things (IoT) and Large Language Models (LLMs) have been two major emerging players in the information technology era. This literature survey explores the current state of the art in applying LLMs within IoT. It emphasizes their applications in various domains and sectors of society and the significant role they play in enhancing IoT security.
arXiv Detail & Related papers (2025-05-23T07:46:27Z) - A Trustworthy Multi-LLM Network: Challenges, Solutions, and A Use Case [59.58213261128626]
We propose a blockchain-enabled collaborative framework that connects multiple Large Language Models (LLMs) into a Trustworthy Multi-LLM Network (MultiLLMN). This architecture enables the cooperative evaluation and selection of the most reliable and high-quality responses to complex network optimization problems.
arXiv Detail & Related papers (2025-05-06T05:32:46Z) - Edge-Cloud Collaborative Computing on Distributed Intelligence and Model Optimization: A Survey [59.52058740470727]
Edge-cloud collaborative computing (ECCC) has emerged as a pivotal paradigm for addressing the computational demands of modern intelligent applications. Recent advancements in AI, particularly deep learning and large language models (LLMs), have dramatically enhanced the capabilities of these distributed systems. This survey provides a structured tutorial on fundamental architectures, enabling technologies, and emerging applications.
arXiv Detail & Related papers (2025-05-03T13:55:38Z) - An LLM-enabled Multi-Agent Autonomous Mechatronics Design Framework [49.633199780510864]
This work proposes a multi-agent autonomous mechatronics design framework, integrating expertise across mechanical design, optimization, electronics, and software engineering.
Operating primarily through a language-driven workflow, the framework incorporates structured human feedback to ensure robust performance under real-world constraints.
A fully functional autonomous vessel was developed with optimized propulsion, cost-effective electronics, and advanced control.
arXiv Detail & Related papers (2025-04-20T16:57:45Z) - Datenschutzkonformer LLM-Einsatz: Eine Open-Source-Referenzarchitektur (Privacy-Compliant LLM Deployment: An Open-Source Reference Architecture) [0.10713888959520207]
We present a reference architecture for developing closed, LLM-based systems using open-source technologies. The architecture provides a flexible and transparent solution that meets strict data privacy and security requirements.
arXiv Detail & Related papers (2025-03-01T14:51:07Z) - Specifications: The missing link to making the development of LLM systems an engineering discipline [65.10077876035417]
We discuss the progress the field has made so far through advances like structured outputs, process supervision, and test-time compute. We outline several future directions for research to enable the development of modular and reliable LLM-based systems.
arXiv Detail & Related papers (2024-11-25T07:48:31Z) - Transforming the Hybrid Cloud for Emerging AI Workloads [81.15269563290326]
This white paper envisions transforming hybrid cloud systems to meet the growing complexity of AI workloads.
The proposed framework addresses critical challenges in energy efficiency, performance, and cost-effectiveness.
This joint initiative aims to establish hybrid clouds as secure, efficient, and sustainable platforms.
arXiv Detail & Related papers (2024-11-20T11:57:43Z) - Large Language Model Supply Chain: Open Problems From the Security Perspective [25.320736806895976]
Large Language Models (LLMs) are changing the software development paradigm and have gained huge attention from both academia and industry.
We take the first step to discuss the potential security risks in each component of the LLM supply chain (LLM SC), as well as in the integration between components.
arXiv Detail & Related papers (2024-11-03T15:20:21Z) - From LLMs to LLM-based Agents for Software Engineering: A Survey of Current, Challenges and Future [15.568939568441317]
We investigate the current practice and solutions for large language models (LLMs) and LLM-based agents for software engineering. In particular, we summarise six key topics: requirement engineering, code generation, autonomous decision-making, software design, test generation, and software maintenance. We discuss the models and benchmarks used, providing a comprehensive analysis of their applications and effectiveness in software engineering.
arXiv Detail & Related papers (2024-08-05T14:01:15Z) - A General-Purpose Device for Interaction with LLMs [3.052172365469752]
This paper investigates integrating large language models (LLMs) with advanced hardware.
We focus on developing a general-purpose device designed for enhanced interaction with LLMs.
arXiv Detail & Related papers (2024-08-02T23:43:29Z) - The Synergy between Data and Multi-Modal Large Language Models: A Survey from Co-Development Perspective [53.48484062444108]
We find that the development of models and the development of data are not two separate paths but are interconnected.
On the one hand, vaster and higher-quality data contribute to better performance of MLLMs; on the other hand, MLLMs can facilitate the development of data.
To promote data-model co-development for the MLLM community, we systematically review existing works related to MLLMs from the data-model co-development perspective.
arXiv Detail & Related papers (2024-07-11T15:08:11Z) - Mobile Edge Intelligence for Large Language Models: A Contemporary Survey [32.22789677882933]
On-device large language models (LLMs) are more cost-effective, latency-efficient, and privacy-preserving compared with the cloud paradigm. Mobile edge intelligence (MEI) presents a viable solution by provisioning AI capabilities at the edge of mobile networks. This article provides a contemporary survey on harnessing MEI for LLMs.
arXiv Detail & Related papers (2024-07-09T13:47:05Z) - Q*: Improving Multi-step Reasoning for LLMs with Deliberative Planning [53.6472920229013]
Large Language Models (LLMs) have demonstrated impressive capability in many natural language tasks.
LLMs are prone to produce errors, hallucinations and inconsistent statements when performing multi-step reasoning.
We introduce Q*, a framework for guiding the decoding process of LLMs with deliberative planning.
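Deliberative planning over decoding can be illustrated with a classic best-first (A*-style) search over partial traces. The sketch below is a toy, not the paper's implementation: a hand-coded admissible heuristic stands in for Q*'s learned Q-value model, and fixed integer "steps" stand in for sampled reasoning steps.

```python
import heapq

def q_star_search(target, steps=(1, 2, 3), max_depth=6):
    """Toy A*-style deliberative search over partial traces.

    Finds a shortest sequence of steps summing to `target`.
    g = number of steps taken so far; h = an optimistic estimate of
    the steps still needed. In Q*, h would instead be a learned
    Q-value scoring partial LLM reasoning traces, and expansion
    would sample candidate next steps from the model.
    """
    biggest = max(steps)
    heap = [(0, 0, ())]  # entries: (f = g + h, g, partial trace)
    while heap:
        f, g, trace = heapq.heappop(heap)
        total = sum(trace)
        if total == target:
            return trace  # first goal popped is optimal: h is admissible
        if g >= max_depth or total > target:
            continue  # prune dead ends instead of committing to them
        for s in steps:
            remaining = target - total - s
            h = max(0, (remaining + biggest - 1) // biggest)  # ceil division
            heapq.heappush(heap, (g + 1 + h, g + 1, trace + (s,)))
    return None

best = q_star_search(7)
```

The key contrast with greedy decoding is that low-scoring partial traces stay on the frontier and can be revisited, so one early bad step does not doom the whole generation.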
arXiv Detail & Related papers (2024-06-20T13:08:09Z) - Efficient Prompting for LLM-based Generative Internet of Things [88.84327500311464]
Large language models (LLMs) have demonstrated remarkable capacities on various tasks, and integrating the capacities of LLMs into the Internet of Things (IoT) applications has drawn much research attention recently.
Due to security concerns, many institutions avoid accessing state-of-the-art commercial LLM services, requiring the deployment and utilization of open-source LLMs in a local network setting.
In this study, we propose an LLM-based Generative IoT (GIoT) system deployed in a local network setting.
arXiv Detail & Related papers (2024-06-14T19:24:00Z) - LLMs as On-demand Customizable Service [8.440060524215378]
We introduce the concept of hierarchical, distributed Large Language Models (LLMs).
By introducing a "layered" approach, the proposed architecture enables on-demand accessibility to LLMs as a customizable service.
We envision that the concept of hierarchical LLM will empower extensive, crowd-sourced user bases to harness the capabilities of LLMs.
arXiv Detail & Related papers (2024-01-29T21:24:10Z) - The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions from the viewpoints of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z) - Video Understanding with Large Language Models: A Survey [107.7736911322462]
Given the remarkable capabilities of large language models (LLMs) in language and multimodal tasks, this survey provides a detailed overview of recent advancements in video understanding. The emergent capabilities of Vid-LLMs are surprisingly advanced, particularly their ability for open-ended multi-granularity reasoning. This survey presents a comprehensive study of the tasks, datasets, benchmarks, and evaluation methodologies for Vid-LLMs.
arXiv Detail & Related papers (2023-12-29T01:56:17Z) - LLM as OS, Agents as Apps: Envisioning AIOS, Agents and the AIOS-Agent Ecosystem [48.81136793994758]
Large Language Model (LLM) serves as the (Artificial) Intelligent Operating System (IOS), or AIOS, an operating system "with soul".
We envision that LLM's impact will not be limited to the AI application level; instead, it will in turn revolutionize the design and implementation of computer systems, architecture, software, and programming languages.
arXiv Detail & Related papers (2023-12-06T18:50:26Z) - LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins [31.678328189420483]
Large language model (LLM) platforms have recently begun offering an app ecosystem to interface with third-party services on the internet.
While these apps extend the capabilities of LLM platforms, they are developed by arbitrary third parties and thus cannot be implicitly trusted.
We propose a framework that lays a foundation for LLM platform designers to analyze and improve the security, privacy, and safety of current and future third-party integrated LLM platforms.
arXiv Detail & Related papers (2023-09-19T02:20:10Z) - Enhancing Architecture Frameworks by Including Modern Stakeholders and their Views/Viewpoints [48.87872564630711]
The stakeholders with data science and Machine Learning related concerns, such as data scientists and data engineers, are yet to be included in existing architecture frameworks. We surveyed 61 subject matter experts from over 25 organizations in 10 countries.
arXiv Detail & Related papers (2023-08-09T21:54:34Z) - VEDLIoT -- Next generation accelerated AIoT systems and applications [4.964750143168832]
The VEDLIoT project aims to develop energy-efficient Deep Learning methodologies for distributed Artificial Intelligence of Things (AIoT) applications.
We propose a holistic approach that focuses on optimizing algorithms while addressing safety and security challenges inherent to AIoT systems.
arXiv Detail & Related papers (2023-05-09T12:35:00Z) - Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes LLM-Augmenter, a system that augments a black-box LLM with a set of plug-and-play modules.
arXiv Detail & Related papers (2023-02-24T18:48:43Z)
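The plug-and-play augmentation described in the entry above amounts to a generate-check-revise cycle around a black-box model. The sketch below is a toy illustration of that pattern; the `retrieve`, `check`, and `toy_llm` stubs are assumptions for demonstration, not the paper's actual modules.

```python
def augmented_respond(llm, retrieve, check, max_rounds=3):
    """Wrap a black-box `llm` with two plug-and-play modules:
    `retrieve` grounds the query in external knowledge, and `check`
    produces automated feedback that drives revision. The LLM itself
    is never modified, only re-prompted."""
    def respond(query):
        evidence = retrieve(query)
        answer = llm(query, evidence, feedback=None)
        for _ in range(max_rounds):
            ok, feedback = check(answer, evidence)
            if ok:
                break
            answer = llm(query, evidence, feedback)  # revise using feedback
        return answer
    return respond

# Toy stand-ins: this "LLM" only consults the evidence after feedback.
facts = {"capital of France": "Paris"}
def retrieve(query): return facts.get(query, "")
def toy_llm(query, evidence, feedback=None):
    return evidence if feedback else "Lyon"  # first draft is wrong
def check(answer, evidence):
    return answer == evidence, "answer contradicts retrieved evidence"

respond = augmented_respond(toy_llm, retrieve, check)
```

Because both modules sit outside the model, either can be swapped (a different retriever, a stricter fact checker) without retraining, which is what makes the design "plug-and-play".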
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed above and is not responsible for any consequences of its use.