A Novel Technique for Query Plan Representation Based on Graph Neural Nets
- URL: http://arxiv.org/abs/2405.04814v2
- Date: Wed, 5 Jun 2024 07:27:20 GMT
- Title: A Novel Technique for Query Plan Representation Based on Graph Neural Nets
- Authors: Baoming Chang, Amin Kamali, Verena Kantere
- Abstract summary: We study the effect of using different state-of-the-art tree models on the optimizer's cost estimation and plan selection performance.
We propose a novel tree model, BiGG, employing a bidirectional GNN aggregated by gated recurrent units (GRUs), and demonstrate experimentally that BiGG provides significant improvements on cost estimation tasks.
- Score: 2.184775414778289
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning representations for query plans plays a pivotal role in machine learning-based query optimizers of database management systems. To this end, particular model architectures are proposed in the literature to transform the tree-structured query plans into representations with formats learnable by downstream machine learning models. However, existing research rarely compares and analyzes the query plan representation capabilities of these tree models and their direct impact on the performance of the overall optimizer. To address this problem, we perform a comparative study to explore the effect of using different state-of-the-art tree models on the optimizer's cost estimation and plan selection performance in relatively complex workloads. Additionally, we explore the possibility of using graph neural networks (GNNs) in the query plan representation task. We propose a novel tree model BiGG employing Bidirectional GNN aggregated by Gated recurrent units (GRUs) and demonstrate experimentally that BiGG provides significant improvements to cost estimation tasks and relatively excellent plan selection performance compared to state-of-the-art tree models.
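The core idea lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration of a BiGG-style plan encoder, not the authors' implementation: it runs a bottom-up and a top-down pass over the plan tree, aggregates messages with GRU cells, and feeds the concatenated root states to a small cost-estimation head. The class name `BiGGSketch`, the feature dimensions, the mean aggregation over children, the root initialization of the top-down pass, and the MLP cost head are assumptions made for illustration only.

```python
# Minimal, illustrative sketch (not the paper's code) of a BiGG-style encoder:
# bidirectional (bottom-up and top-down) message passing over a query-plan
# tree, with GRU cells aggregating messages and an MLP cost head on top.
import torch
import torch.nn as nn


class BiGGSketch(nn.Module):
    def __init__(self, node_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.embed = nn.Linear(node_dim, hidden_dim)
        self.up_gru = nn.GRUCell(hidden_dim, hidden_dim)    # bottom-up pass
        self.down_gru = nn.GRUCell(hidden_dim, hidden_dim)  # top-down pass
        self.cost_head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1))

    def forward(self, x, children, root=0):
        # x: [num_nodes, node_dim] operator/statistics features per plan node
        # children: list of child-index lists describing the plan tree
        h = torch.relu(self.embed(x))
        up, down = {}, {}
        # Bottom-up: visit children before parents, folding child states in.
        for v in self._topdown_order(children, root)[::-1]:
            kids = children[v]
            msg = (torch.stack([up[c] for c in kids]).mean(0)
                   if kids else torch.zeros_like(h[v]))
            up[v] = self.up_gru(msg.unsqueeze(0), h[v].unsqueeze(0)).squeeze(0)
        # Top-down: visit parents before children, pushing context downward.
        down[root] = up[root]
        for v in self._topdown_order(children, root):
            for c in children[v]:
                down[c] = self.down_gru(
                    down[v].unsqueeze(0), up[c].unsqueeze(0)).squeeze(0)
        plan_repr = torch.cat([up[root], down[root]], dim=-1)
        return self.cost_head(plan_repr)  # estimated cost of the whole plan

    @staticmethod
    def _topdown_order(children, root):
        # DFS order in which every parent precedes its children.
        order, stack = [], [root]
        while stack:
            v = stack.pop()
            order.append(v)
            stack.extend(children[v])
        return order


# Hypothetical 3-node plan: a join over two scans, with random node features.
x = torch.randn(3, 8)
children = [[1, 2], [], []]
print(BiGGSketch(node_dim=8)(x, children))  # one scalar cost estimate
```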
Related papers
- Beyond Model Base Selection: Weaving Knowledge to Master Fine-grained Neural Network Design [20.31388126105889]
We propose M-DESIGN, a curated model knowledge base (MKB) pipeline for mastering neural network refinement. First, we propose a knowledge weaving engine that reframes model refinement as an adaptive query problem over task metadata. Given a user's task query, M-DESIGN quickly matches and iteratively refines candidate models by leveraging a graph-relational knowledge schema.
arXiv Detail & Related papers (2025-07-21T07:49:19Z) - Learning Decision Trees as Amortized Structure Inference [59.65621207449269]
We propose a hybrid amortized structure inference approach to learn predictive decision tree ensembles given data.
We show that our approach, DT-GFN, outperforms state-of-the-art decision tree and deep learning methods on standard classification benchmarks.
arXiv Detail & Related papers (2025-03-10T07:05:07Z) - Reqo: A Robust and Explainable Query Optimization Cost Model [2.184775414778289]
We propose a tree model architecture based on Bidirectional Graph Neural Networks (Bi-GNN) aggregated by Gated Recurrent Units (GRUs).
We implement a novel learning-to-rank cost model that effectively quantifies the uncertainty in cost estimates using approximate probabilistic ML.
In addition, we propose the first explainability technique specifically designed for learning-based cost models.
arXiv Detail & Related papers (2025-01-29T04:48:51Z) - Advanced RAG Models with Graph Structures: Optimizing Complex Knowledge Reasoning and Text Generation [7.3491970177535]
This study proposes a scheme to process graph-structured data by combining graph neural networks (GNNs) with retrieval-augmented generation (RAG).
The results show that the graph-based RAG model proposed in this paper is superior to the traditional generation model in terms of quality, knowledge consistency, and reasoning ability.
arXiv Detail & Related papers (2024-11-06T00:23:55Z) - Escaping the Forest: Sparse Interpretable Neural Networks for Tabular Data [0.0]
We show that our models, Sparse TABular NET or sTAB-Net with attention mechanisms, are more effective than tree-based models.
They achieve better performance than post-hoc methods like SHAP.
arXiv Detail & Related papers (2024-10-23T10:50:07Z) - Exploring the design space of deep-learning-based weather forecasting systems [56.129148006412855]
This paper systematically analyzes the impact of different design choices on deep-learning-based weather forecasting systems.
We study fixed-grid architectures such as UNet, fully convolutional architectures, and transformer-based models.
We propose a hybrid system that combines the strong performance of fixed-grid models with the flexibility of grid-invariant architectures.
arXiv Detail & Related papers (2024-10-09T22:25:50Z) - Graph neural network surrogate for strategic transport planning [2.175217022338634]
This paper explores the application of advanced Graph Neural Network (GNN) architectures as surrogate models for strategic transport planning.
Building upon prior work that laid the foundation with graph convolution networks (GCN), our study delves into a comparative analysis of the established GCN and the more expressive Graph Attention Network (GAT).
We propose a novel GAT variant (namely GATv3) to address over-smoothing issues in graph-based models.
arXiv Detail & Related papers (2024-08-14T14:18:47Z) - Towards Lightweight Graph Neural Network Search with Curriculum Graph Sparsification [48.334100429553644]
This paper proposes to design a joint graph data and architecture mechanism, which identifies important sub-architectures via the valuable graph data.
To search for optimal lightweight Graph Neural Networks (GNNs), we propose a Lightweight Graph Neural Architecture Search with Graph SparsIfication and Network Pruning (GASSIP) method.
Our method achieves on-par or even higher node classification performance with half or fewer model parameters of searched GNNs and a sparser graph.
arXiv Detail & Related papers (2024-06-24T06:53:37Z) - Learning Topological Representations with Bidirectional Graph Attention Network for Solving Job Shop Scheduling Problem [27.904195034688257]
Existing learning-based methods for solving job shop scheduling problems (JSSP) usually use off-the-shelf GNN models tailored to undirected graphs and neglect the rich and meaningful topological structures of disjunctive graphs (DGs).
This paper proposes the topology-aware bidirectional graph attention network (TBGAT) to embed the DG for solving JSSP in a local search framework.
arXiv Detail & Related papers (2024-02-27T15:33:20Z) - Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized visual prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z) - Graph Neural Bandits [49.85090929163639]
We propose a framework named Graph Neural Bandits (GNB) to leverage the collaborative nature among users empowered by graph neural networks (GNNs).
To refine the recommendation strategy, we utilize separate GNN-based models on estimated user graphs for exploitation and adaptive exploration.
arXiv Detail & Related papers (2023-08-21T15:57:57Z) - Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present code that successfully replicates the results of six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z) - Visual Learning-based Planning for Continuous High-Dimensional POMDPs [81.16442127503517]
Visual Tree Search (VTS) is a learning and planning procedure that combines generative models learned offline with online model-based POMDP planning.
VTS bridges offline model training and online planning by utilizing a set of deep generative observation models to predict and evaluate the likelihood of image observations in a Monte Carlo tree search planner.
We show that VTS is robust to different observation noises and, since it utilizes online, model-based planning, can adapt to different reward structures without the need to re-train.
arXiv Detail & Related papers (2021-12-17T11:53:31Z) - An Investigation Between Schema Linking and Text-to-SQL Performance [21.524953580249395]
Recent neural approaches deliver excellent performance; however, models that are difficult to interpret inhibit future developments.
This study aims to provide a better approach toward the interpretation of neural models.
arXiv Detail & Related papers (2021-02-03T02:50:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.