LLMind: Orchestrating AI and IoT with LLM for Complex Task Execution
- URL: http://arxiv.org/abs/2312.09007v4
- Date: Fri, 9 Aug 2024 07:07:49 GMT
- Title: LLMind: Orchestrating AI and IoT with LLM for Complex Task Execution
- Authors: Hongwei Cui, Yuyang Du, Qun Yang, Yulin Shao, Soung Chang Liew
- Abstract summary: We present LLMind, a task-oriented AI framework that enables effective collaboration among IoT devices.
Inspired by the functional specialization theory of the brain, our framework integrates an LLM with domain-specific AI modules.
Complex tasks, which may involve collaborations of multiple domain-specific AI modules and IoT devices, are executed through a control script.
- Score: 18.816077341295628
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Task-oriented communications are an important element in future intelligent IoT systems. Existing IoT systems, however, are limited in their capacity to handle complex tasks, particularly in their interactions with humans to accomplish these tasks. In this paper, we present LLMind, an LLM-based task-oriented AI agent framework that enables effective collaboration among IoT devices, with humans communicating high-level verbal instructions, to perform complex tasks. Inspired by the functional specialization theory of the brain, our framework integrates an LLM with domain-specific AI modules, enhancing its capabilities. Complex tasks, which may involve collaborations of multiple domain-specific AI modules and IoT devices, are executed through a control script generated by the LLM using a Language-Code transformation approach, which first converts language descriptions to an intermediate finite-state machine (FSM) before final precise transformation to code. Furthermore, the framework incorporates a novel experience accumulation mechanism to enhance response speed and effectiveness, allowing the framework to evolve and become progressively sophisticated through continuing user and machine interactions.
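The control-script generation described in the abstract follows a two-step Language-Code transformation: the LLM first maps a verbal instruction to an intermediate finite-state machine (FSM), and the FSM is then converted into executable code. The sketch below illustrates only that shape; the dataclasses, state names, and device calls are hypothetical stand-ins, not LLMind's actual prompts or API.

```python
# Minimal sketch of a Language-Code transformation via an intermediate FSM.
# All class names, states, and device calls are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Transition:
    event: str    # triggering event, e.g. "motion_detected"
    target: str   # next state name
    action: str   # device command emitted into the generated script

@dataclass
class FSM:
    initial: str
    states: dict = field(default_factory=dict)  # state name -> list of Transition

    def add(self, state, event, target, action):
        self.states.setdefault(state, []).append(Transition(event, target, action))

def language_to_fsm(instruction: str) -> FSM:
    """Stand-in for the LLM step that parses a verbal instruction into an FSM.
    Hard-coded here for one example instruction."""
    fsm = FSM(initial="idle")
    fsm.add("idle", "motion_detected", "verify", "camera.snapshot()")
    fsm.add("verify", "person_confirmed", "alert", "phone.notify('visitor at the door')")
    fsm.add("verify", "no_person", "idle", "pass")
    return fsm

def fsm_to_code(fsm: FSM) -> str:
    """Second, precise step: a mechanical transformation from the FSM to a control script."""
    lines = [f"state = {fsm.initial!r}", "def step(event):", "    global state"]
    for state, transitions in fsm.states.items():
        for t in transitions:
            lines += [
                f"    if state == {state!r} and event == {t.event!r}:",
                f"        {t.action}",
                f"        state = {t.target!r}",
                "        return",
            ]
    return "\n".join(lines)

print(fsm_to_code(language_to_fsm("Notify me when someone is at the front door.")))
```

The point of the intermediate FSM in this pattern is that it is small enough to inspect and validate before the final, mechanical transformation to code.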
Related papers
- Asynchronous Tool Usage for Real-Time Agents [61.3041983544042]
We introduce asynchronous AI agents capable of parallel processing and real-time tool-use.
Our key contribution is an event-driven finite-state machine architecture for agent execution and prompting.
This work presents both a conceptual framework and practical tools for creating AI agents capable of fluid, multitasking interactions.
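The event-driven FSM idea lends itself to a compact sketch. The loop below only illustrates the pattern (events drive state transitions while tool calls run concurrently); the state names, the fake tool, and the scripted inbox are assumptions, not the paper's architecture.

```python
# Hedged sketch of an event-driven agent loop: tool calls run in the background
# while the agent keeps handling incoming events in real time.
import asyncio

async def slow_tool(query: str) -> str:
    await asyncio.sleep(2)                  # stands in for a long-running tool call
    return f"tool result for {query!r}"

async def agent():
    state = "LISTENING"
    pending_tools = []
    inbox = asyncio.Queue()
    for msg in ("look up: weather in Oslo", "are you still there?", None):
        inbox.put_nowait(msg)               # scripted events for the demo

    while True:
        msg = await inbox.get()
        if msg is None:
            break
        if msg.startswith("look up:"):
            state = "TOOL_RUNNING"          # transition, but the loop keeps going
            pending_tools.append(asyncio.create_task(slow_tool(msg[len("look up:"):].strip())))
        else:
            # The agent replies immediately instead of blocking on the tool.
            print(f"[{state}] replying to {msg!r} while tools run in the background")
    for result in await asyncio.gather(*pending_tools):
        print("[TOOL_DONE]", result)

asyncio.run(agent())
```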
arXiv Detail & Related papers (2024-10-28T23:57:19Z)
- Re-Thinking Process Mining in the AI-Based Agents Era [39.58317527488534]
Large Language Models (LLMs) have emerged as powerful conversational interfaces, and their application in process mining (PM) tasks has shown promising results.
This paper proposes utilizing the AI-Based Agents Workflow (AgWf) paradigm to enhance the effectiveness of PM on LLMs.
We examine various implementations of AgWf and the types of AI-based tasks involved.
arXiv Detail & Related papers (2024-08-14T10:14:18Z)
- Large Language Models for Base Station Siting: Intelligent Deployment based on Prompt or Agent [62.16747639440893]
Large language models (LLMs) and their associated technologies continue to advance, particularly in the realms of prompt engineering and agent engineering.
This approach entails the strategic use of well-crafted prompts to infuse human experience and knowledge into these sophisticated LLMs.
This integration points toward a future paradigm of artificial intelligence (AI) as a service and of easier-to-use AI.
arXiv Detail & Related papers (2024-08-07T08:43:32Z)
- Internet of Agents: Weaving a Web of Heterogeneous Agents for Collaborative Intelligence [79.5316642687565]
Existing multi-agent frameworks often struggle with integrating diverse, capable third-party agents.
We propose the Internet of Agents (IoA), a novel framework that addresses these limitations.
IoA introduces an agent integration protocol, an instant-messaging-like architecture design, and dynamic mechanisms for agent teaming and conversation flow control.
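As a rough illustration of what an instant-messaging-like layer between heterogeneous agents can look like, the snippet below defines a shared message format and a tiny router. The message fields and the MessageBus class are assumptions made for this sketch; IoA's actual integration protocol and teaming mechanisms are richer than this.

```python
# Illustrative sketch only: a minimal messaging layer that any agent can join
# by agreeing on one message schema and registering a handler.
import json
from dataclasses import dataclass, asdict
from typing import Callable, Dict

@dataclass
class AgentMessage:
    sender: str
    recipient: str
    conversation_id: str
    content: str

class MessageBus:
    def __init__(self):
        self.handlers: Dict[str, Callable[[AgentMessage], None]] = {}

    def register(self, agent_name: str, handler: Callable[[AgentMessage], None]):
        # Registration is the "integration" step: any third-party agent that
        # speaks this message format can join the conversation.
        self.handlers[agent_name] = handler

    def send(self, msg: AgentMessage):
        print("wire:", json.dumps(asdict(msg)))   # what travels between agents
        self.handlers[msg.recipient](msg)

bus = MessageBus()
bus.register("coder", lambda m: print(f"coder got: {m.content}"))
bus.send(AgentMessage("planner", "coder", "conv-1", "please implement the parser"))
```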
arXiv Detail & Related papers (2024-07-09T17:33:24Z)
- ROS-LLM: A ROS framework for embodied AI with task feedback and structured reasoning [74.58666091522198]
We present a framework for intuitive robot programming by non-experts.
We leverage natural language prompts and contextual information from the Robot Operating System (ROS).
Our system integrates large language models (LLMs), enabling non-experts to articulate task requirements to the system through a chat interface.
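The pattern of letting an LLM pick from a fixed library of robot actions can be sketched without any robot at all. Everything below (the action table, the `fake_llm` stand-in, the dispatch) is a hypothetical illustration; the actual framework exchanges structured task representations and feedback through ROS interfaces not shown here.

```python
# Sketch of chat-to-action dispatch: an LLM chooses one action from a known
# library and the system executes it. All names here are illustrative.
ACTIONS = {
    "pick": lambda obj: print(f"[robot] picking up {obj}"),
    "place": lambda obj: print(f"[robot] placing {obj}"),
}

def fake_llm(user_message: str, action_names) -> dict:
    """Stand-in for the LLM call; in practice the available action_names would
    be listed in the prompt and the model would return structured JSON."""
    obj = user_message.rsplit(" ", 1)[-1]
    name = "pick" if "pick" in user_message else "place"
    return {"action": name, "argument": obj}

def chat_turn(user_message: str):
    decision = fake_llm(user_message, ACTIONS.keys())
    ACTIONS[decision["action"]](decision["argument"])  # execution result could be
                                                       # fed back to the user here

chat_turn("please pick up the cup")
```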
arXiv Detail & Related papers (2024-06-28T08:28:38Z)
- When Large Language Models Meet Optical Networks: Paving the Way for Automation [17.4503217818141]
We propose a framework of LLM-empowered optical networks, facilitating intelligent control of the physical layer and efficient interaction with the application layer.
The proposed framework is verified on two typical tasks: network alarm analysis and network performance optimization.
The good response accuracies and semantic similarities across 2,400 test situations exhibit the great potential of LLMs in optical networks.
arXiv Detail & Related papers (2024-05-14T10:46:33Z)
- On the Multi-turn Instruction Following for Conversational Web Agents [83.51251174629084]
We introduce a new task of Conversational Web Navigation, which necessitates sophisticated interactions that span multiple turns with both the users and the environment.
We propose a novel framework, named self-reflective memory-augmented planning (Self-MAP), which employs memory utilization and self-reflection techniques.
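A toy version of the memory-utilization plus self-reflection loop might look like the following; the keyword-overlap retrieval and the one-line reflection check are simplifications invented for this sketch, not the Self-MAP method itself.

```python
# Toy loop: retrieve relevant memory from earlier turns, draft a plan, then
# reflect on the draft before acting. Purely illustrative.
memory = []   # snippets accumulated from earlier turns of the conversation

def plan(instruction: str) -> str:
    # Retrieve only memory snippets relevant to the current turn
    # (simplified here to keyword overlap).
    relevant = [m for m in memory if set(m.split()) & set(instruction.split())]
    draft = f"plan for {instruction!r} using {len(relevant)} past snippet(s)"
    # Self-reflection: flag drafts that lack grounding in the conversation so far.
    reflection = "ok" if relevant else "warn: no grounding from earlier turns"
    return f"{draft} [{reflection}]"

memory.append("user filtered laptops by price on the previous page")
print(plan("sort the filtered laptops by rating"))
```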
arXiv Detail & Related papers (2024-02-23T02:18:12Z)
- When Large Language Model Agents Meet 6G Networks: Perception, Grounding, and Alignment [100.58938424441027]
We propose a split learning system for AI agents in 6G networks leveraging the collaboration between mobile devices and edge servers.
We introduce a novel model caching algorithm for LLMs within the proposed system to improve model utilization in context.
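As a point of reference for what model caching at an edge server involves, here is a plain LRU cache over loaded models. This is a deliberate simplification: the paper's algorithm is context-aware rather than purely recency-based, and the class and model names below are invented for the sketch.

```python
# Simple LRU cache of loaded models at an edge server: reuse already-loaded
# models across requests, evict the least recently used when full.
from collections import OrderedDict

class EdgeModelCache:
    def __init__(self, capacity: int = 2):
        self.capacity = capacity
        self.models = OrderedDict()              # model name -> loaded model handle

    def get(self, name: str):
        if name in self.models:
            self.models.move_to_end(name)        # cache hit: mark as recently used
            return self.models[name]
        if len(self.models) >= self.capacity:
            evicted, _ = self.models.popitem(last=False)
            print(f"evicting {evicted}")
        self.models[name] = f"<loaded {name}>"   # stand-in for loading weights
        return self.models[name]

cache = EdgeModelCache()
for request in ("llm-7b", "vision-encoder", "llm-7b", "asr-small"):
    cache.get(request)
print(list(cache.models))                        # -> ['llm-7b', 'asr-small']
```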
arXiv Detail & Related papers (2024-01-15T15:20:59Z)
- LLM-Powered Hierarchical Language Agent for Real-time Human-AI Coordination [28.22553394518179]
We propose a Hierarchical Language Agent (HLA) for human-AI coordination.
HLA provides strong reasoning abilities while maintaining real-time execution.
Human studies show that HLA outperforms other baseline agents, including slow-mind-only agents and fast-mind-only agents.
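The slow-mind/fast-mind split can be illustrated with two toy components: a background thread standing in for the slower LLM reasoner and a non-blocking loop standing in for the fast executor. The timings and action names are invented for this sketch and are not the HLA implementation.

```python
# Illustrative hierarchy: the fast loop never blocks on the slow reasoner; it
# simply executes whatever the current plan says, and picks up updates later.
import threading, time

plan = {"action": "move_to_counter"}         # shared plan, updated by the slow mind

def slow_mind():
    time.sleep(0.5)                          # stands in for an LLM reasoning call
    plan["action"] = "chop_tomato_then_cook"

def fast_mind(tick: int) -> str:
    return f"tick {tick}: executing {plan['action']}"

threading.Thread(target=slow_mind).start()
for tick in range(4):                        # real-time loop keeps running throughout
    print(fast_mind(tick))
    time.sleep(0.2)
```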
arXiv Detail & Related papers (2023-12-23T11:09:48Z)
- LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks [4.4589894340260585]
This paper presents a novel approach to enhancing autonomous robotic manipulation using a large language model (LLM) for logical inference.
The proposed system combines the advantages of LLMs with YOLO-based environmental perception to enable robots to make reasonable decisions autonomously.
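The handoff from perception to LLM reasoning reduces to a simple pattern: serialize the detections into a prompt and parse a decision back. The snippet below shows only that pattern, with stand-ins replacing both the YOLO model and the LLM call.

```python
# Sketch of the perception-to-reasoning handoff; detector and LLM are stand-ins.
def detect_objects(image_path: str):
    # Placeholder for a YOLO inference call; returns (label, confidence) pairs.
    return [("cup", 0.91), ("table", 0.88)]

def llm_decide(task: str, detections) -> str:
    prompt = f"Task: {task}. Visible objects: {detections}. What should the robot do?"
    # Placeholder for the LLM call; a fixed rule stands in for logical inference.
    return "grasp cup" if any(lbl == "cup" for lbl, _ in detections) else "search"

print(llm_decide("clear the table", detect_objects("scene.jpg")))
```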
arXiv Detail & Related papers (2023-08-29T01:54:49Z)
- Language to Rewards for Robotic Skill Synthesis [37.21434094015743]
We introduce a new paradigm that harnesses large language models (LLMs) to define reward parameters that can be optimized to accomplish a variety of robotic tasks.
Using reward as the intermediate interface generated by LLMs, we can effectively bridge the gap between high-level language instructions or corrections and low-level robot actions.
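A minimal sketch of reward-as-interface, using invented parameter names and a toy one-dimensional "controller", is shown below; the real system maps language to parameters of a proper reward function that a motion optimizer then consumes.

```python
# Sketch: the LLM outputs reward parameters, and a toy optimizer picks the
# action that maximizes the resulting reward. Names and values are illustrative.
def llm_to_reward_params(instruction: str) -> dict:
    # Placeholder for the LLM call that maps an instruction to reward weights.
    return {"target_height": 0.5, "effort_weight": 0.1}

def reward(height: float, effort: float, p: dict) -> float:
    return -abs(height - p["target_height"]) - p["effort_weight"] * effort

def optimize(p: dict) -> float:
    # Toy optimizer: evaluate candidate heights and keep the best-scoring one.
    candidates = [i / 10 for i in range(11)]
    return max(candidates, key=lambda h: reward(h, effort=h, p=p))

params = llm_to_reward_params("make the robot lift its torso halfway")
print("chosen height:", optimize(params))   # -> 0.5
```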
arXiv Detail & Related papers (2023-06-14T17:27:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.