LLM As DBA
- URL: http://arxiv.org/abs/2308.05481v2
- Date: Fri, 11 Aug 2023 07:55:19 GMT
- Title: LLM As DBA
- Authors: Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
- Abstract summary: Large language models (LLMs) have shown great potential to understand valuable documents and generate reasonable answers.
This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs.
- Score: 25.92711955279298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Database administrators (DBAs) play a crucial role in managing, maintaining
and optimizing a database system to ensure data availability, performance, and
reliability. However, it is hard and tedious for DBAs to manage a large number
of database instances (e.g., millions of instances on cloud databases).
Recently large language models (LLMs) have shown great potential to understand
valuable documents and accordingly generate reasonable answers. Thus, we
propose D-Bot, an LLM-based database administrator that can continuously acquire
database maintenance experience from textual sources, and provide reasonable,
well-founded, and timely diagnosis and optimization advice for target databases.
This paper presents a revolutionary LLM-centric framework for database
maintenance, including (i) database maintenance knowledge detection from
documents and tools, (ii) tree of thought reasoning for root cause analysis,
and (iii) collaborative diagnosis among multiple LLMs. Our preliminary
experimental results show that D-Bot can efficiently and effectively diagnose the
root causes, and our code is available at
github.com/TsinghuaDatabaseGroup/DB-GPT.
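To make the three components more concrete, here is a minimal, hypothetical Python sketch of such a pipeline: keyword overlap stands in for maintenance-knowledge detection, a breadth-limited tree-of-thought loop scores and prunes candidate root causes, and several "expert" LLM callables diagnose the same anomaly independently. All names, the mock LLM, and the scoring scheme are illustrative assumptions, not the actual D-Bot implementation (see the linked repository for that).

```python
# Hypothetical sketch only; names and prompts are illustrative assumptions and
# do not mirror the actual D-Bot code at github.com/TsinghuaDatabaseGroup/DB-GPT.
from dataclasses import dataclass, field
from typing import Callable, List

LLM = Callable[[str], str]  # stand-in for any chat/completion client


@dataclass
class Node:
    """One tree-of-thought node: a candidate root cause plus its evidence and score."""
    hypothesis: str
    evidence: List[str] = field(default_factory=list)
    score: float = 0.0


def retrieve_knowledge(anomaly: str, docs: List[str], top_k: int = 3) -> List[str]:
    """Toy stand-in for maintenance-knowledge detection: rank docs by keyword overlap."""
    terms = set(anomaly.lower().split())
    return sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))[:top_k]


def tree_of_thought_diagnose(anomaly: str, knowledge: List[str],
                             llm: LLM, width: int = 3, depth: int = 2) -> Node:
    """Expand candidate root causes level by level, keeping only the best-scored branches."""
    frontier = [Node(hypothesis=f"anomaly: {anomaly}", evidence=knowledge)]
    for _ in range(depth):
        children: List[Node] = []
        for node in frontier:
            prompt = (f"Given evidence {node.evidence}, list {width} possible "
                      f"root causes refining: {node.hypothesis}")
            for line in llm(prompt).splitlines()[:width]:
                child = Node(hypothesis=line.strip(), evidence=node.evidence)
                # Self-evaluation step: ask the model to grade plausibility in [0, 1].
                child.score = float(llm(f"Score 0-1: {child.hypothesis}") or 0)
                children.append(child)
        frontier = sorted(children, key=lambda n: -n.score)[:width] or frontier
    return frontier[0]


def collaborative_diagnosis(anomaly: str, docs: List[str], experts: List[LLM]) -> List[Node]:
    """Each 'expert' LLM diagnoses independently; a chief-DBA agent could merge the
    verdicts afterwards (omitted here)."""
    knowledge = retrieve_knowledge(anomaly, docs)
    return [tree_of_thought_diagnose(anomaly, knowledge, llm) for llm in experts]


if __name__ == "__main__":
    # Mock LLM so the sketch runs offline; replace with a real model client.
    def mock_llm(prompt: str) -> str:
        if prompt.startswith("Score"):
            return "0.7"
        return "slow query plan\nlock contention\nlow buffer cache hit ratio"

    docs = ["high cpu usage is often caused by a slow query plan",
            "lock contention degrades write throughput"]
    for verdict in collaborative_diagnosis("high cpu usage on primary", docs,
                                           [mock_llm, mock_llm]):
        print(verdict.hypothesis, verdict.score)
```

In practice each expert would be primed with a different system view (CPU metrics, memory metrics, workload logs) before diagnosing, which is what makes the collaborative step useful.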
Related papers
- Can Language Models Enable In-Context Database? [3.675766365690372]
Large language models (LLMs) are emerging as few-shot learners capable of handling a variety of tasks.
The lightweight and human-readable characteristics of an in-context database could make it an alternative to traditional databases.
arXiv Detail & Related papers (2024-11-04T05:25:39Z)
- Studying and Benchmarking Large Language Models For Log Level Suggestion [49.176736212364496]
Large Language Models (LLMs) have become a focal point of research across various domains.
This paper investigates the impact of characteristics and learning paradigms on the performance of 12 open-source LLMs in log level suggestion.
arXiv Detail & Related papers (2024-10-11T03:52:17Z)
- PTD-SQL: Partitioning and Targeted Drilling with LLMs in Text-to-SQL [54.304872649870575]
Large Language Models (LLMs) have emerged as powerful tools for Text-to-SQL tasks.
In this study, we propose that employing query group partitioning allows LLMs to focus on learning the thought processes specific to a single problem type.
arXiv Detail & Related papers (2024-09-21T09:33:14Z)
- Is Large Language Model Good at Database Knob Tuning? A Comprehensive Experimental Evaluation [28.753219581544617]
This study harnesses large language models (LLMs) as experienced DBAs for knob-tuning tasks with carefully designed prompts (a minimal prompt sketch appears at the end of this page).
We conduct experiments to compare LLM-driven approaches against traditional methods across the subtasks.
Our findings reveal that LLMs not only match or surpass traditional methods but also exhibit notable interpretability.
arXiv Detail & Related papers (2024-08-05T03:26:01Z)
- Making LLMs Work for Enterprise Data Tasks [4.233865241818131]
Large language models (LLMs) know little about enterprise database tables in the private data ecosystem.
As LLMs' performance is tied to their training data, a crucial question is how useful they can be in improving enterprise database management and analysis tasks.
arXiv Detail & Related papers (2024-07-22T21:16:59Z)
- Relational Database Augmented Large Language Model [59.38841050766026]
Large language models (LLMs) excel in many natural language processing (NLP) tasks.
However, they can only incorporate new knowledge through training or supervised fine-tuning.
Precise, up-to-date, and private information, by contrast, is typically stored in relational databases.
arXiv Detail & Related papers (2024-07-21T06:19:10Z)
- Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? [54.667202878390526]
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z)
- Characterization of Large Language Model Development in the Datacenter [55.9909258342639]
Large Language Models (LLMs) have presented impressive performance across several transformative tasks.
However, it is non-trivial to efficiently utilize large-scale cluster resources to develop LLMs.
We present an in-depth characterization study of a six-month LLM development workload trace collected from our GPU datacenter Acme.
arXiv Detail & Related papers (2024-03-12T13:31:14Z)
- D-Bot: Database Diagnosis System using Large Language Models [30.20192093986365]
Database administrators (DBAs) play an important role in managing, maintaining and optimizing database systems.
Recently large language models (LLMs) have shown great potential in various fields.
We propose D-Bot, an LLM-based database diagnosis system that can automatically acquire knowledge from diagnosis documents.
arXiv Detail & Related papers (2023-12-03T16:58:10Z)
- A Unified Transferable Model for ML-Enhanced DBMS [53.46830627879208]
We propose a unified model, MTMLF, that uses a multi-task training procedure to capture the transferable knowledge across tasks and a pretrain-finetune procedure to distill the meta knowledge across DBs.
We believe this paradigm is more suitable for cloud DB services and has the potential to revolutionize the way ML is used in the future.
arXiv Detail & Related papers (2021-05-06T03:31:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
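As mentioned in the knob-tuning entry above, the following is a minimal, hypothetical sketch of the "LLM as an experienced DBA" prompting style for knob tuning. The knob names, prompt wording, and mock client are assumptions for illustration only and do not reproduce that paper's actual prompts or evaluation setup.

```python
# Hypothetical sketch of an LLM-as-DBA knob-tuning prompt; the knob list and
# prompt text are illustrative assumptions, not the paper's actual design.
import json
from typing import Callable, Dict

LLM = Callable[[str], str]  # stand-in for any chat/completion client

KNOBS = ["shared_buffers", "work_mem", "max_wal_size"]  # example PostgreSQL knobs


def build_prompt(workload: Dict[str, str], hardware: Dict[str, str]) -> str:
    """Compose a prompt that frames the LLM as an experienced DBA."""
    return (
        "You are an experienced PostgreSQL DBA.\n"
        f"Hardware: {json.dumps(hardware)}\n"
        f"Workload summary: {json.dumps(workload)}\n"
        f"Recommend values for these knobs: {', '.join(KNOBS)}.\n"
        "Answer with a JSON object mapping knob name to value."
    )


def suggest_knobs(llm: LLM, workload: Dict[str, str],
                  hardware: Dict[str, str]) -> Dict[str, str]:
    """Query the model and parse its JSON answer; fall back to an empty dict on bad output."""
    try:
        return json.loads(llm(build_prompt(workload, hardware)))
    except json.JSONDecodeError:
        return {}


if __name__ == "__main__":
    # Mock model so the sketch runs offline; swap in a real client in practice.
    mock = lambda prompt: '{"shared_buffers": "8GB", "work_mem": "64MB", "max_wal_size": "4GB"}'
    print(suggest_knobs(mock, {"type": "OLAP, read-heavy"}, {"ram": "32GB", "cores": "16"}))
```

The JSON fallback is a practical detail: model output is not guaranteed to be valid JSON, so any parser around such prompts should fail gracefully before applying knob values to a live database.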