A Unified and Efficient Coordinating Framework for Autonomous DBMS
Tuning
- URL: http://arxiv.org/abs/2303.05710v1
- Date: Fri, 10 Mar 2023 05:27:23 GMT
- Title: A Unified and Efficient Coordinating Framework for Autonomous DBMS
Tuning
- Authors: Xinyi Zhang, Zhuo Chang, Hong Wu, Yang Li, Jia Chen, Jian Tan, Feifei
Li, Bin Cui
- Abstract summary: We propose a unified coordinating framework to efficiently utilize existing ML-based agents.
We show that it can effectively utilize different ML-based agents and find better configurations with 1.4~14.1X speedups in workload execution time.
- Score: 34.85351481228439
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, using machine learning (ML) based techniques to optimize modern
database management systems has attracted intensive interest from both industry
and academia. With the objective of tuning a specific component of a DBMS (e.g.,
index selection, knob tuning), ML-based tuning agents have been shown to find
better configurations than experienced database administrators.
However, one critical yet challenging question remains unexplored -- how to
make those ML-based tuning agents work collaboratively. Existing methods do not
consider the dependencies among the multiple agents, and the model used by each
agent only studies the effect of changing the configurations in a single
component. To tune the different components of a DBMS, a coordinating mechanism is
needed to make the multiple agents cognizant of each other. Also, we need to
decide how to allocate the limited tuning budget among the agents to maximize
the performance. Such a decision is difficult to make since the distribution of
the reward for each agent is unknown and non-stationary. In this paper, we
study the above question and present a unified coordinating framework to
efficiently utilize existing ML-based agents. First, we propose a message
propagation protocol that specifies the collaboration behaviors for agents and
encapsulates the global tuning messages in each agent's model. Second, we
combine Thompson Sampling, a well-studied reinforcement learning algorithm, with
a memory buffer so that our framework can allocate budget judiciously in a
non-stationary environment. Our framework defines the interfaces adapted to a
broad class of ML-based tuning agents, yet simple enough for integration with
existing implementations and future extensions. We show that it can effectively
utilize different ML-based agents and find better configurations with 1.4~14.1X
speedups in workload execution time compared with baselines.
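The abstract describes two mechanisms: a message propagation protocol that shares global tuning state among agents, and Thompson Sampling paired with a memory buffer to allocate the tuning budget under non-stationary rewards. The snippet below is a minimal illustrative sketch of that budget-allocation loop, assuming a hypothetical TuningAgent interface, Gaussian reward posteriors, and a fixed-size sliding window as the memory buffer; none of these names or modeling choices come from the paper itself.

```python
# Minimal sketch (not the paper's implementation): Thompson Sampling over a set of
# tuning agents, with a bounded "memory buffer" of recent rewards so that the
# posterior tracks a non-stationary environment. All names below are hypothetical.
import math
import random
from collections import deque


class TuningAgent:
    """Hypothetical interface for an ML-based tuning agent (e.g., a knob or index tuner)."""

    def __init__(self, name):
        self.name = name

    def tune_step(self, global_message):
        """Run one tuning iteration and return (new_config, observed_reward).

        `global_message` stands in for the propagated tuning messages from the
        other agents (the collaboration protocol described in the abstract).
        """
        raise NotImplementedError


def thompson_select(agents, buffers):
    """Sample a mean reward per agent from a Gaussian posterior and pick the best.

    Assumes a standard-normal prior on each agent's mean reward and unit-variance
    observations, so the posterior after n rewards is N(sum/(n + 1), 1/(n + 1)).
    """
    def sample(agent):
        rewards = buffers[agent.name]
        n = len(rewards)
        post_mean = sum(rewards) / (n + 1)
        post_std = 1.0 / math.sqrt(n + 1)
        return random.gauss(post_mean, post_std)

    return max(agents, key=sample)


def coordinate(agents, budget, window=20):
    """Spend `budget` tuning iterations, choosing one agent per iteration."""
    # Bounded per-agent reward buffers: only recent observations shape the
    # posterior, a simple way to cope with non-stationary reward distributions.
    buffers = {a.name: deque(maxlen=window) for a in agents}
    global_message = {}  # placeholder for the propagated tuning state
    for _ in range(budget):
        agent = thompson_select(agents, buffers)
        config, reward = agent.tune_step(global_message)
        buffers[agent.name].append(reward)
        global_message[agent.name] = config  # share the latest config with the others
    return global_message
```

Under this reading, the sliding window discards stale rewards so each agent's posterior follows its recent performance, which is one simple way to handle the non-stationarity the abstract mentions.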
Related papers
- AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML [56.565200973244146]
Automated machine learning (AutoML) accelerates AI development by automating tasks in the development pipeline.
Recent works have started exploiting large language models (LLMs) to lessen this burden.
This paper proposes AutoML-Agent, a novel multi-agent framework tailored for full-pipeline AutoML.
arXiv Detail & Related papers (2024-10-03T20:01:09Z)
- Learning to Use Tools via Cooperative and Interactive Agents [58.77710337157665]
Tool learning empowers large language models (LLMs) as agents to use external tools and extend their utility.
We propose ConAgents, a Cooperative and interactive Agents framework, which coordinates three specialized agents for tool selection, tool execution, and action calibration separately.
Our experiments on three datasets show that the LLMs, when equipped with ConAgents, outperform baselines with substantial improvement.
arXiv Detail & Related papers (2024-03-05T15:08:16Z)
- Towards Robust Multi-Modal Reasoning via Model Selection [7.6621866737827045]
The LLM serves as the "brain" of the agent, orchestrating multiple tools for collaborative multi-step task solving.
We propose the M3 framework as a plug-in with negligible runtime overhead at test time.
Our experiments reveal that our framework enables dynamic model selection, considering both user inputs and subtask dependencies.
arXiv Detail & Related papers (2023-10-12T16:06:18Z)
- Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization [59.39113350538332]
Large language model (LLM) agents have been shown effective on a wide range of tasks, and by ensembling multiple LLM agents, their performance can be further improved.
Existing approaches employ a fixed set of agents to interact with each other in a static architecture.
We build a framework named Dynamic LLM-Agent Network (DyLAN) for LLM-agent collaboration on complicated tasks like reasoning and code generation.
arXiv Detail & Related papers (2023-10-03T16:05:48Z)
- AutoAgents: A Framework for Automatic Agent Generation [27.74332323317923]
AutoAgents is an innovative framework that adaptively generates and coordinates multiple specialized agents to build an AI team according to different tasks.
Our experiments on various benchmarks demonstrate that AutoAgents generates more coherent and accurate solutions than the existing multi-agent methods.
arXiv Detail & Related papers (2023-09-29T14:46:30Z)
- Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations [53.76682562935373]
We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools.
InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs.
arXiv Detail & Related papers (2023-08-31T07:36:44Z)
- Multi-Agent Reinforcement Learning for Microprocessor Design Space Exploration [71.95914457415624]
Microprocessor architects are increasingly resorting to domain-specific customization in the quest for high performance and energy efficiency.
We propose an alternative formulation that leverages Multi-Agent RL (MARL) to tackle this problem.
Our evaluation shows that the MARL formulation consistently outperforms single-agent RL baselines.
arXiv Detail & Related papers (2022-11-29T17:10:24Z)
- Multi-agent Deep Covering Skill Discovery [50.812414209206054]
We propose Multi-agent Deep Covering Option Discovery, which constructs the multi-agent options through minimizing the expected cover time of the multiple agents' joint state space.
Also, we propose a novel framework to adopt the multi-agent options in the MARL process.
We show that the proposed algorithm can effectively capture the agent interactions with the attention mechanism, successfully identify multi-agent options, and significantly outperform prior works that use single-agent options or no options.
arXiv Detail & Related papers (2022-10-07T00:40:59Z)
- Multi-agent Databases via Independent Learning [11.05491559831151]
We introduce MADB (Multi-Agent DB), a proof-of-concept system that incorporates a learned query scheduler and a learned query.
Preliminary results demonstrate that MADB can outperform the non-cooperative integration of learned components.
arXiv Detail & Related papers (2022-05-28T03:47:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.