Intent Assurance using LLMs guided by Intent Drift
- URL: http://arxiv.org/abs/2402.00715v2
- Date: Fri, 2 Feb 2024 23:08:12 GMT
- Title: Intent Assurance using LLMs guided by Intent Drift
- Authors: Kristina Dzeparoska, Ali Tizghadam, Alberto Leon-Garcia
- Abstract summary: Intent-Based Networking (IBN) promises to align intents and business objectives with network operations--in an automated manner.
In this paper, we define an assurance framework that allows us to detect and act when intent drift occurs.
We leverage AI-driven policies, generated by Large Language Models (LLMs), which can quickly learn the necessary in-context requirements.
- Score: 5.438862991585019
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Intent-Based Networking (IBN) presents a paradigm shift for network
management, by promising to align intents and business objectives with network
operations--in an automated manner. However, its practical realization is
challenging: 1) intent processing, i.e., translating, decomposing and identifying the
logic to fulfill the intent, and 2) intent conformance, i.e., adequately adapting that
logic to assure intents in dynamic networks. To
address the latter, intent assurance is tasked with continuous verification and
validation, including taking the necessary actions to align the operational and
target states. In this paper, we define an assurance framework that allows us
to detect and act when intent drift occurs. To do so, we leverage AI-driven
policies, generated by Large Language Models (LLMs) which can quickly learn the
necessary in-context requirements, and assist with the fulfillment and
assurance of intents.
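The assurance idea in the abstract can be read as a small closed control loop: observe the operational state, measure its drift from the intent's target state, and, when the drift exceeds a tolerance, ask an LLM-backed policy generator for remediation actions. The snippet below is a minimal sketch under these assumptions, not the authors' implementation; `Intent`, `drift`, `query_llm_policy` and the tolerance threshold are hypothetical names and simplifications.

```python
# Minimal sketch (not the paper's implementation): a closed assurance loop that
# compares an intent's target state with the observed operational state, flags
# intent drift, and asks an LLM-backed policy generator for remediation actions.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Intent:
    name: str
    target_state: Dict[str, float]   # e.g. {"latency_ms": 20.0, "replicas": 3.0}
    tolerance: float = 0.10          # allowed relative deviation per metric


def drift(intent: Intent, observed: Dict[str, float]) -> Dict[str, float]:
    """Per-metric relative deviation of the observed state from the target state."""
    return {metric: abs(observed.get(metric, 0.0) - target) / max(abs(target), 1e-9)
            for metric, target in intent.target_state.items()}


def query_llm_policy(intent: Intent, deviations: Dict[str, float]) -> List[str]:
    """Hypothetical stand-in for an LLM call that returns remediation actions.

    A real system would prompt the LLM with the intent, the measured deviations
    and in-context examples of valid policies, then validate the response.
    """
    return [f"adjust {metric} toward {intent.target_state[metric]}"
            for metric, dev in deviations.items() if dev > intent.tolerance]


def assurance_step(intent: Intent, observe: Callable[[], Dict[str, float]]) -> List[str]:
    """One iteration of the loop: observe, detect drift, generate actions."""
    deviations = drift(intent, observe())
    if all(dev <= intent.tolerance for dev in deviations.values()):
        return []   # operational state matches the intent; nothing to do
    return query_llm_policy(intent, deviations)


if __name__ == "__main__":
    sla = Intent("video-sla", {"latency_ms": 20.0, "replicas": 3.0})
    actions = assurance_step(sla, observe=lambda: {"latency_ms": 35.0, "replicas": 3.0})
    print(actions)   # ['adjust latency_ms toward 20.0']
```

In practice the observation step would poll network telemetry, and the generated actions would feed back into the fulfillment pipeline before the next iteration of the loop.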
Related papers
- Auto-Intent: Automated Intent Discovery and Self-Exploration for Large Language Model Web Agents [68.22496852535937]
We introduce Auto-Intent, a method to adapt a pre-trained large language model (LLM) as an agent for a target domain without direct fine-tuning.
Our approach first discovers the underlying intents from target-domain demonstrations in an unsupervised manner.
We train our intent predictor to predict the next intent given the agent's past observations and actions.
arXiv Detail & Related papers (2024-10-29T21:37:04Z)
- Online Learning for Autonomous Management of Intent-based 6G Networks [39.135195293229444]
We propose an online learning method based on hierarchical multi-armed bandits for effective management of intent-based networking.
We show that our algorithm is effective with respect to resource allocation and satisfaction of intent expectations (a generic hierarchical-bandit sketch appears after this list).
arXiv Detail & Related papers (2024-07-25T04:48:56Z) - Intent Profiling and Translation Through Emergent Communication [30.44616418991389]
We propose an AI-based framework for intent profiling and translation.
We consider a scenario where applications interacting with the network express their needs for network services in their domain language.
A framework based on emergent communication is proposed for intent profiling.
arXiv Detail & Related papers (2024-02-05T07:02:43Z)
- LLM-based policy generation for intent-based management of applications [8.938462415711674]
We propose a pipeline that progressively decomposes intents by generating the required actions using a policy-based abstraction.
This allows us to automate the policy execution by creating a closed control loop for the intent deployment.
We evaluate our proposal with a use-case to fulfill and assure an application service chain of virtual network functions.
arXiv Detail & Related papers (2024-01-22T15:37:04Z)
- Understanding and Controlling a Maze-Solving Policy Network [44.19448448073822]
We study a pretrained reinforcement learning policy that solves mazes by navigating to a range of target squares.
We find this network pursues multiple context-dependent goals, and we identify circuits within the network that correspond to one of these goals.
We show that this network contains redundant, distributed, and retargetable goal representations, shedding light on the nature of goal-direction in trained policy networks.
arXiv Detail & Related papers (2023-10-12T05:33:54Z)
- Slot Induction via Pre-trained Language Model Probing and Multi-level Contrastive Learning [62.839109775887025]
The Slot Induction (SI) task aims to induce slot boundaries without explicit knowledge of token-level slot annotations.
We propose leveraging Unsupervised Pre-trained Language Model (PLM) Probing and Contrastive Learning mechanism to exploit unsupervised semantic knowledge extracted from PLM.
Our approach is shown to be effective in SI task and capable of bridging the gaps with token-level supervised models on two NLU benchmark datasets.
arXiv Detail & Related papers (2023-08-09T05:08:57Z)
- Learning to Generate All Feasible Actions [4.333208181196761]
We introduce action mapping, a novel approach that divides the learning process into two steps: first learn feasibility and subsequently, the objective.
This paper focuses on the feasibility part by learning to generate all feasible actions through self-supervised querying of the feasibility model.
We demonstrate the agent's proficiency in generating actions across disconnected feasible action sets.
arXiv Detail & Related papers (2023-01-26T23:15:51Z)
- Discrete Factorial Representations as an Abstraction for Goal-Conditioned Reinforcement Learning [99.38163119531745]
We show that applying a discretizing bottleneck can improve performance in goal-conditioned RL setups.
We experimentally demonstrate improved expected return on out-of-distribution goals, while still allowing goals with expressive structure to be specified.
arXiv Detail & Related papers (2022-11-01T03:31:43Z)
- Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space [76.46113138484947]
General-purpose robots require diverse repertoires of behaviors to complete challenging tasks in real-world unstructured environments.
To address this issue, goal-conditioned reinforcement learning aims to acquire policies that can reach goals for a wide range of tasks on command.
We propose Planning to Practice, a method that makes it practical to train goal-conditioned policies for long-horizon tasks.
arXiv Detail & Related papers (2022-05-17T06:58:17Z)
- Lifelong Unsupervised Domain Adaptive Person Re-identification with Coordinated Anti-forgetting and Adaptation [127.6168183074427]
We propose a new task, Lifelong Unsupervised Domain Adaptive (LUDA) person ReID.
This is challenging because it requires the model to continuously adapt to unlabeled data of the target environments.
We design an effective scheme for this task, dubbed CLUDA-ReID, where the anti-forgetting is harmoniously coordinated with the adaptation.
arXiv Detail & Related papers (2021-12-13T13:19:45Z)
- Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
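To make the hierarchical-bandit idea from the Online Learning entry above concrete, here is a minimal, generic sketch of a two-level epsilon-greedy bandit applied to intent management. It illustrates the general technique only; the intents, configurations and toy reward model are assumptions, not that paper's algorithm.

```python
# Generic two-level epsilon-greedy bandit: a parent bandit picks which intent to
# serve next, a child bandit per intent picks a resource configuration, and the
# reward (e.g. measured intent satisfaction) updates both levels.
import random
from collections import defaultdict


class EpsilonGreedyBandit:
    def __init__(self, arms, epsilon=0.1):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = defaultdict(int)
        self.values = defaultdict(float)     # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.arms)                    # explore
        return max(self.arms, key=lambda a: self.values[a])    # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


class HierarchicalBandit:
    """Top level chooses an intent; bottom level chooses its configuration."""

    def __init__(self, configs_per_intent, epsilon=0.1):
        self.top = EpsilonGreedyBandit(configs_per_intent.keys(), epsilon)
        self.children = {intent: EpsilonGreedyBandit(configs, epsilon)
                         for intent, configs in configs_per_intent.items()}

    def act(self):
        intent = self.top.select()
        return intent, self.children[intent].select()

    def feedback(self, intent, config, reward):
        self.top.update(intent, reward)
        self.children[intent].update(config, reward)


if __name__ == "__main__":
    bandit = HierarchicalBandit({"low-latency": ["2 vCPU", "4 vCPU"],
                                 "high-throughput": ["1 Gbps", "10 Gbps"]})
    for _ in range(100):
        intent, config = bandit.act()
        # Toy reward standing in for measured intent satisfaction.
        reward = 1.0 if config in ("4 vCPU", "10 Gbps") else 0.3
        bandit.feedback(intent, config, reward)
    print(bandit.act())
```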
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.