LLM-based policy generation for intent-based management of applications
- URL: http://arxiv.org/abs/2402.10067v1
- Date: Mon, 22 Jan 2024 15:37:04 GMT
- Title: LLM-based policy generation for intent-based management of applications
- Authors: Kristina Dzeparoska, Jieyu Lin, Ali Tizghadam, Alberto Leon-Garcia
- Abstract summary: We propose a pipeline that progressively decomposes intents by generating the required actions using a policy-based abstraction.
This allows us to automate the policy execution by creating a closed control loop for the intent deployment.
We evaluate our proposal with a use case that fulfills and assures an application service chain of virtual network functions.
- Score: 8.938462415711674
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Automated management requires decomposing high-level user requests, such as
intents, to an abstraction that the system can understand and execute. This is
challenging because even a simple intent requires performing a number of ordered
steps, and identifying and adapting these steps (as conditions change) requires a
decomposition approach that cannot be exactly pre-defined. To tackle these
challenges and support automated intent decomposition and execution, we explore
the few-shot capability of Large Language Models (LLMs). We propose a pipeline
that progressively decomposes intents by generating the required actions using a
policy-based abstraction. This allows us to automate policy execution by creating
a closed control loop for intent deployment. To do so, we generate and map the
policies to APIs and form application management loops that perform the necessary
monitoring, analysis, planning and execution. We evaluate our proposal with a use
case that fulfills and assures an application service chain of virtual network
functions. Using our approach, we can generalize and generate the necessary steps
to realize intents, thereby enabling intent automation for application management.
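
As a rough sketch of the pipeline the abstract describes, the snippet below decomposes an intent into ordered policies, maps each policy to a management API call, and closes the loop with a simple monitor-analyze-plan-execute (MAPE) cycle. This is an illustration, not the authors' implementation: the LLM call is stubbed with canned output, and every policy name, target, and endpoint (deploy, svc-chain-1, /vnfs, ...) is hypothetical.

```python
# Minimal sketch (not the authors' code): intent -> policies -> API calls,
# wrapped in a closed monitor-analyze-plan-execute (MAPE) loop.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Policy:
    action: str              # e.g. "deploy", "monitor", "scale"
    target: str              # managed entity, e.g. a VNF in the service chain
    params: Dict[str, str]

def llm_decompose(intent: str) -> List[Policy]:
    """Stand-in for the few-shot LLM call that progressively decomposes
    an intent into ordered policies; here the output is canned."""
    return [
        Policy("deploy", "firewall-vnf", {"chain": "svc-chain-1"}),
        Policy("deploy", "lb-vnf", {"chain": "svc-chain-1"}),
        Policy("monitor", "svc-chain-1", {"metric": "latency_ms", "max": "50"}),
    ]

# Hypothetical policy-to-API mapping (the endpoints are illustrative).
APIS: Dict[str, Callable[[Policy], None]] = {
    "deploy":  lambda p: print(f"POST /vnfs        {p.target} {p.params}"),
    "monitor": lambda p: print(f"POST /monitors    {p.target} {p.params}"),
    "scale":   lambda p: print(f"PATCH /vnfs/scale {p.target} {p.params}"),
}

def fulfill(policies: List[Policy]) -> None:
    for p in policies:                       # execute the ordered policy steps
        APIS[p.action](p)

def assure(threshold: float, samples: List[float]) -> None:
    """Closed control loop over fake telemetry: Monitor -> Analyze ->
    Plan -> Execute a corrective policy when the intent is violated."""
    for latency in samples:                  # Monitor
        if latency > threshold:              # Analyze
            fix = Policy("scale", "lb-vnf", {"replicas": "2"})  # Plan
            APIS[fix.action](fix)            # Execute

policies = llm_decompose("Deploy a low-latency firewall service chain")
fulfill(policies)
assure(50.0, samples=[42.0, 61.5, 47.0])
```

In the paper, corrective actions are themselves LLM-generated policies that adapt as conditions change; hard-coding the scale-out step here merely keeps the sketch short.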
Related papers
- Inferring Implicit Goals Across Differing Task Models [20.725482497743865]
Implicit requirements can be common in settings where the user's understanding of the task model differs from the agent's estimate of the model.
This paper addresses such expectation mismatches by modeling the possibility of an unspecified user subgoal in a task formulated as a Markov Decision Process (MDP) and querying for it as required.
arXiv Detail & Related papers (2025-01-29T15:20:43Z)
- Keeping Behavioral Programs Alive: Specifying and Executing Liveness Requirements [2.4387555567462647]
We propose an idiom for tagging states with "must-finish," indicating that tasks are yet to be completed.
We also offer semantics and two execution mechanisms, one based on a translation to Büchi automata and the other based on a Markov decision process (MDP).
arXiv Detail & Related papers (2024-04-02T11:36:58Z)
- Intent Assurance using LLMs guided by Intent Drift [5.438862991585019]
Intent-Based Networking (IBN) promises to align intents and business objectives with network operations in an automated manner.
In this paper, we define an assurance framework that allows us to detect and act when intent drift occurs.
We leverage AI-driven policies, generated by Large Language Models (LLMs), which can quickly learn the necessary in-context requirements.
arXiv Detail & Related papers (2024-02-01T16:09:19Z)
- Unified Task and Motion Planning using Object-centric Abstractions of Motion Constraints [56.283944756315066]
We propose an alternative TAMP approach that unifies task and motion planning into a single search.
Our approach is based on an object-centric abstraction of motion constraints that permits leveraging the computational efficiency of off-the-shelf AI search to yield physically feasible plans.
arXiv Detail & Related papers (2023-12-29T14:00:20Z)
- Code Models are Zero-shot Precondition Reasoners [83.8561159080672]
We use code representations to reason about action preconditions for sequential decision making tasks.
We propose a precondition-aware action sampling strategy that ensures actions predicted by a policy are consistent with preconditions.
arXiv Detail & Related papers (2023-11-16T06:19:27Z)
- Interactive Task Planning with Language Models [89.5839216871244]
An interactive robot framework accomplishes long-horizon task planning and can easily generalize to new goals and distinct tasks, even during execution.
Recent large language model-based approaches can allow for more open-ended planning but often require heavy prompt engineering or domain-specific pretrained models.
We propose a simple framework that achieves interactive task planning with language models by incorporating both high-level planning and low-level skill execution.
arXiv Detail & Related papers (2023-10-16T17:59:12Z)
- You Only Look at Screens: Multimodal Chain-of-Action Agents [37.118034745972956]
Auto-GUI is a multimodal solution that directly interacts with the interface.
We propose a chain-of-action technique to help the agent decide what action to execute.
We evaluate our approach on a new device-control benchmark, AITW, with 30K unique instructions.
arXiv Detail & Related papers (2023-09-20T16:12:32Z)
- Imitating Graph-Based Planning with Goal-Conditioned Policies [72.61631088613048]
We present a self-imitation scheme which distills a subgoal-conditioned policy into the target-goal-conditioned policy.
We empirically show that our method can significantly boost the sample-efficiency of the existing goal-conditioned RL methods.
arXiv Detail & Related papers (2023-03-20T14:51:10Z)
- Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space [76.46113138484947]
General-purpose robots require diverse repertoires of behaviors to complete challenging tasks in real-world unstructured environments.
To address this issue, goal-conditioned reinforcement learning aims to acquire policies that can reach goals for a wide range of tasks on command.
We propose Planning to Practice, a method that makes it practical to train goal-conditioned policies for long-horizon tasks.
arXiv Detail & Related papers (2022-05-17T06:58:17Z)
- Certified Reinforcement Learning with Logic Guidance [78.2286146954051]
We propose a model-free RL algorithm that enables the use of Linear Temporal Logic (LTL) to formulate a goal for unknown continuous-state/action Markov Decision Processes (MDPs).
The algorithm is guaranteed to synthesise a control policy whose traces satisfy the specification with maximal probability.
arXiv Detail & Related papers (2019-02-02T20:09:32Z)
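
To make the last entry above concrete, here is a minimal, self-contained sketch of the logic-guidance idea under simplifying assumptions: the LTL goal F goal & G !hazard is compiled by hand into a three-state automaton, and tabular Q-learning runs on the product of a toy corridor MDP and that automaton, rewarding only accepting transitions. The environment, labels, and hyperparameters are illustrative; the paper's actual construction (limit-deterministic Büchi automata and the attendant guarantees) is not reproduced here.

```python
# Sketch of logic-guided RL: learn on (state, automaton-state) products,
# rewarding only transitions that satisfy the hand-compiled LTL objective.
import random
from collections import defaultdict

ACTIONS = (-1, +1)                      # toy corridor MDP: positions 0..4
def step(pos, a): return max(0, min(4, pos + a))
def label(pos):  return {0: "hazard", 4: "goal"}.get(pos, "")

def automaton(q, lab):
    """Hand-built automaton for 'F goal & G !hazard':
    q0 = waiting, q1 = accepting sink, q2 = rejecting sink."""
    if q == 2 or lab == "hazard":
        return 2
    if q == 1 or lab == "goal":
        return 1
    return 0

Q = defaultdict(float)
alpha, gamma, eps = 0.5, 0.95, 0.2
for _ in range(2000):
    pos, q = 1, 0                       # start one cell away from the hazard
    for _ in range(30):
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda a: Q[(pos, q, a)]))
        npos = step(pos, a)
        nq = automaton(q, label(npos))
        r = 1.0 if (q == 0 and nq == 1) else 0.0   # reward accepting moves
        cont = 1.0 if nq == 0 else 0.0             # sinks end the episode
        best = max(Q[(npos, nq, b)] for b in ACTIONS)
        Q[(pos, q, a)] += alpha * (r + gamma * best * cont - Q[(pos, q, a)])
        pos, q = npos, nq
        if nq != 0:
            break

# Greedy rollout: the learned policy should walk right to the goal.
pos, q, path = 1, 0, [1]
while q == 0 and len(path) < 10:
    a = max(ACTIONS, key=lambda a: Q[(pos, q, a)])
    pos = step(pos, a); q = automaton(q, label(pos)); path.append(pos)
print("path:", path, "accepted:", q == 1)
```

The carried-over idea is the product construction plus reward on accepting transitions; the certified maximal-probability guarantees come from details omitted in this sketch.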
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.