A Framework for dynamically meeting performance objectives on a service mesh
- URL: http://arxiv.org/abs/2306.14178v1
- Date: Sun, 25 Jun 2023 09:08:41 GMT
- Title: A Framework for dynamically meeting performance objectives on a service mesh
- Authors: Forough Shahab Samani and Rolf Stadler
- Abstract summary: We present a framework for achieving end-to-end management objectives for multiple services that concurrently execute on a service mesh.
We apply reinforcement learning techniques to train an agent that periodically performs control actions to reallocate resources.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present a framework for achieving end-to-end management objectives for
multiple services that concurrently execute on a service mesh. We apply
reinforcement learning (RL) techniques to train an agent that periodically
performs control actions to reallocate resources. We develop and evaluate the
framework using a laboratory testbed where we run information and computing
services on a service mesh, supported by the Istio and Kubernetes platforms. We
investigate different management objectives that include end-to-end delay
bounds on service requests, throughput objectives, cost-related objectives, and
service differentiation. We compute the control policies on a simulator rather
than on the testbed, which speeds up the training time by orders of magnitude
for the scenarios we study. Our proposed framework is novel in that it
advocates a top-down approach whereby the management objectives are defined
first and then mapped onto the available control actions. It allows us to
execute several types of control actions simultaneously. By first learning the
system model and the operating region from testbed traces, we can train the
agent for different management objectives in parallel.
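The control loop the abstract describes, an agent that periodically observes service metrics and reallocates resources toward a management objective, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the simulator, the greedy stand-in policy, and all names are hypothetical.

```python
# Minimal sketch of a periodic control loop for resource reallocation on a
# service mesh. ServiceSimulator, control_action, and all parameters are
# illustrative assumptions, not the paper's actual code.
import random

class ServiceSimulator:
    """Toy system model: a service's delay falls as its CPU share rises."""
    def __init__(self, services):
        self.cpu = {s: 1.0 for s in services}   # CPU shares per service
        self.load = {s: 1.0 for s in services}  # offered load per service

    def step(self):
        # Vary the offered load, then report per-service end-to-end delay.
        for s in self.load:
            self.load[s] = max(0.1, self.load[s] + random.uniform(-0.2, 0.2))
        return {s: self.load[s] / self.cpu[s] for s in self.cpu}

def control_action(delays, cpu, delay_bound=1.5, step=0.25):
    """Greedy stand-in for a learned policy: shift CPU from the service
    furthest under its delay bound to the one furthest over it."""
    worst = max(delays, key=delays.get)
    best = min(delays, key=delays.get)
    if delays[worst] > delay_bound and cpu[best] > step:
        cpu[best] -= step   # take a share from the fastest service
        cpu[worst] += step  # give it to the slowest service
    return cpu

sim = ServiceSimulator(["search", "compute"])
for _ in range(20):  # periodic control actions
    delays = sim.step()
    sim.cpu = control_action(delays, sim.cpu)
print(sim.cpu)
```

In the paper, the greedy rule above would be replaced by an RL policy, and the toy simulator by a system model learned from testbed traces; training against the simulator rather than the live Istio/Kubernetes testbed is what speeds up training by orders of magnitude.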
Related papers
- MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents
Large Language Models (LLMs) have shown remarkable capabilities as autonomous agents.
Existing benchmarks either focus on single-agent tasks or are confined to narrow domains, failing to capture the dynamics of multi-agent coordination and competition.
We introduce MultiAgentBench, a benchmark designed to evaluate LLM-based multi-agent systems across diverse, interactive scenarios.
arXiv Detail & Related papers (2025-03-03T05:18:50Z)
- Application of Deep Reinforcement Learning to UAV Swarming for Ground Surveillance
It proposes a hybrid AI system, integrating deep reinforcement learning in a multi-agent centralized swarm architecture.
The proposed system is tailored to perform surveillance of a specific area, searching and tracking ground targets, for security and law enforcement applications.
arXiv Detail & Related papers (2025-01-15T08:46:20Z)
- A Survey of Controllable Learning: Methods and Applications in Information Retrieval
We provide a formal definition of controllable learning (CL) and discuss its applications in information retrieval (IR).
We identify challenges faced by CL across training, evaluation, task setting, and deployment in online environments.
We outline promising directions for CL in theoretical analysis, efficient computation, empowering large language models, application scenarios and evaluation frameworks.
arXiv Detail & Related papers (2024-07-04T09:50:50Z)
- A Dynamic LLM-Powered Agent Network for Task-Oriented Agent Collaboration
We propose automatically selecting a team of agents from candidates to collaborate in a dynamic communication structure toward different tasks and domains.
Specifically, we build a framework named Dynamic LLM-Powered Agent Network (DyLAN) for LLM-powered agent collaboration.
We demonstrate that DyLAN outperforms strong baselines in code generation, decision-making, general reasoning, and arithmetic reasoning tasks with moderate computational cost.
arXiv Detail & Related papers (2023-10-03T16:05:48Z)
- Discrete Factorial Representations as an Abstraction for Goal-Conditioned Reinforcement Learning
We show that applying a discretizing bottleneck can improve performance in goal-conditioned RL setups.
Experiments show improved expected return on out-of-distribution goals, while still allowing goals with expressive structure to be specified.
arXiv Detail & Related papers (2022-11-01T03:31:43Z)
- Dynamically meeting performance objectives for multiple services on a service mesh
We present a framework that lets a service provider achieve end-to-end management objectives under varying load.
We investigate different management objectives that include end-to-end delay bounds on service requests, throughput objectives, and service differentiation.
We compute the control policies not on the testbed, but in a simulator, which speeds up the learning process by orders of magnitude.
arXiv Detail & Related papers (2022-10-08T11:54:25Z)
- Multitask Adaptation by Retrospective Exploration with Learned World Models
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the agent's expected performance by selecting promising trajectories that solve prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z)
- C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks
Goal-conditioned reinforcement learning can solve tasks in a wide range of domains, including navigation and manipulation.
We propose solving distant goal-reaching tasks by using search at training time to automatically generate intermediate states.
The E-step corresponds to planning an optimal sequence of waypoints using graph search, while the M-step learns a goal-conditioned policy to reach those waypoints.
arXiv Detail & Related papers (2021-10-22T22:05:31Z)
- Exploring Relational Context for Multi-Task Dense Prediction
We consider a multi-task environment for dense prediction tasks, represented by a common backbone and independent task-specific heads.
We explore various attention-based contexts, such as global and local, in the multi-task setting.
We propose an Adaptive Task-Relational Context module, which samples the pool of all available contexts for each task pair.
arXiv Detail & Related papers (2021-04-28T16:45:56Z)
- Scalable Reinforcement Learning Policies for Multi-Agent Control
We develop a Multi-Agent Reinforcement Learning (MARL) method to learn scalable control policies for target tracking.
We show results for tasks consisting of up to 1000 pursuers tracking 1000 targets.
arXiv Detail & Related papers (2020-11-16T16:11:12Z)
- Automatic Curriculum Learning through Value Disagreement
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
- PlanGAN: Model-based Planning With Sparse Rewards and Multiple Goals
PlanGAN is a model-based algorithm for solving multi-goal tasks in environments with sparse rewards.
Our studies indicate that PlanGAN can achieve comparable performance whilst being around 4-8 times more sample efficient.
arXiv Detail & Related papers (2020-06-01T12:53:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.