Answering Complex Logical Queries on Knowledge Graphs via Query Computation Tree Optimization
- URL: http://arxiv.org/abs/2212.09567v3
- Date: Wed, 7 Jun 2023 06:57:09 GMT
- Title: Answering Complex Logical Queries on Knowledge Graphs via Query Computation Tree Optimization
- Authors: Yushi Bai, Xin Lv, Juanzi Li, Lei Hou
- Abstract summary: We propose QTO (Query Computation Tree Optimization) to efficiently find the exact optimal solution.
QTO finds the optimal solution by forward-backward propagation on the tree-like computation graph, i.e., the query computation tree.
Experiments show that QTO obtains state-of-the-art performance on complex query answering, outperforming previous best results by an average of 22%.
- Score: 24.53940704113805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Answering complex logical queries on incomplete knowledge graphs is a
challenging task, and has been widely studied. Embedding-based methods require
training on complex queries, and cannot generalize well to out-of-distribution
query structures. Recent work frames this task as an end-to-end optimization
problem, requiring only a pretrained link predictor. However, due to the
exponentially large combinatorial search space, the optimal solution can only
be approximated, limiting the final accuracy. In this work, we propose QTO
(Query Computation Tree Optimization) that can efficiently find the exact
optimal solution. QTO finds the optimal solution by a forward-backward
propagation on the tree-like computation graph, i.e., query computation tree.
In particular, QTO utilizes the independence encoded in the query computation
tree to reduce the search space, where only local computations are involved
during the optimization procedure. Experiments on 3 datasets show that QTO
obtains state-of-the-art performance on complex query answering, outperforming
previous best results by an average of 22%. Moreover, QTO can interpret the
intermediate solutions for each of the one-hop atoms in the query with over 90%
accuracy. The code of our paper is at https://github.com/bys0318/QTO.
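The forward-backward idea is easy to sketch. Below is a minimal toy rendition in Python, not the paper's implementation: random truth-value matrices stand in for a pretrained link predictor, conjunction uses a product t-norm, and existential relation projection maximizes over the intermediate variable; the tree encoding and names are illustrative.
```python
import numpy as np

# Toy knowledge graph with n entities; P[r][h, t] plays the role of a
# pretrained link predictor's truth value for the atom r(h, t).
n = 5
rng = np.random.default_rng(0)
P = {r: rng.uniform(size=(n, n)) for r in ("r1", "r2")}

def onehot(e):
    v = np.zeros(n)
    v[e] = 1.0
    return v

def forward(node):
    """Bottom-up pass over the query computation tree: returns, per entity,
    the best truth value of the subquery when that entity fills the node."""
    kind = node[0]
    if kind == "anchor":                 # a known constant entity
        return onehot(node[1])
    if kind == "proj":                   # exists x: child(x) AND r(x, y)
        child = forward(node[2])
        # product t-norm, then max over the intermediate variable x;
        # only this local (n x n) computation is needed per edge
        return (child[:, None] * P[node[1]]).max(axis=0)
    if kind == "and":                    # conjunction of two subqueries
        return forward(node[1]) * forward(node[2])

# query: r1(e0, x) AND r2(e1, x) -- intersection of two one-hop projections
query = ("and", ("proj", "r1", ("anchor", 0)), ("proj", "r2", ("anchor", 1)))
scores = forward(query)
print("answer ranking:", np.argsort(-scores))
```
A backward pass over the same tree would then read off the maximizing entity at each internal node, which is how QTO can expose interpretable intermediate solutions for the one-hop atoms.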
Related papers
- Quality-Diversity Algorithms Can Provably Be Helpful for Optimization [24.694984679399315]
Quality-Diversity (QD) algorithms aim to find a set of high-performing, yet diverse solutions.
This paper tries to shed some light on the optimization ability of QD algorithms via rigorous running time analysis.
arXiv Detail & Related papers (2024-01-19T07:40:24Z)
- JoinGym: An Efficient Query Optimization Environment for Reinforcement Learning [58.71541261221863]
Join order selection (JOS) is the problem of ordering join operations to minimize total query execution cost.
We present JoinGym, a query optimization environment for reinforcement learning (RL) that supports bushy join plans.
Under the hood, JoinGym simulates a query plan's cost by looking up intermediate result cardinalities from a pre-computed dataset.
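A minimal sketch of that lookup-based cost model (illustrative names and numbers, not the actual JoinGym API): the cost of a left-deep plan is the sum of intermediate result cardinalities, fetched from a precomputed table keyed by the set of base tables joined so far.
```python
from itertools import permutations

# Hypothetical precomputed cardinalities for every subset of {A, B, C}.
CARD = {
    frozenset("A"): 1_000, frozenset("B"): 5_000, frozenset("C"): 200,
    frozenset("AB"): 8_000, frozenset("AC"): 150, frozenset("BC"): 900,
    frozenset("ABC"): 1_200,
}

def plan_cost(order):
    """Cost of a left-deep plan = sum of intermediate result cardinalities."""
    joined, cost = frozenset(order[0]), 0
    for table in order[1:]:
        joined = joined | frozenset(table)
        cost += CARD[joined]     # no execution: just a table lookup
    return cost

best = min(permutations("ABC"), key=plan_cost)
print("best order:", best, "cost:", plan_cost(best))
```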
arXiv Detail & Related papers (2023-07-21T17:00:06Z)
- Adapting Neural Link Predictors for Data-Efficient Complex Query Answering [45.961111441411084]
We propose a parameter-efficient score adaptation model optimised to re-calibrate neural link prediction scores for the complex query answering task.
CQD$^{\mathcal{A}}$ produces significantly more accurate results than current state-of-the-art methods.
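A hedged sketch of score adaptation: a tiny learned transform re-calibrates scores from a frozen link predictor into truth values, so only a handful of parameters need training. The two-parameter affine form below is an assumption for illustration, not the paper's exact model.
```python
import torch

class ScoreAdapter(torch.nn.Module):
    """Re-calibrates frozen link-predictor scores into [0, 1] truth values."""
    def __init__(self):
        super().__init__()
        self.alpha = torch.nn.Parameter(torch.ones(1))   # scale
        self.beta = torch.nn.Parameter(torch.zeros(1))   # shift

    def forward(self, scores):
        return torch.sigmoid(self.alpha * scores + self.beta)

adapter = ScoreAdapter()
frozen_scores = torch.randn(8)        # scores from a frozen link predictor
truth_values = adapter(frozen_scores)
# Only alpha/beta receive gradients; a real setup would train them on
# complex-query answers rather than this placeholder objective.
loss = (1.0 - truth_values).mean()
loss.backward()
```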
arXiv Detail & Related papers (2023-01-29T00:17:16Z)
- Logical Message Passing Networks with One-hop Inference on Atomic Formulas [57.47174363091452]
We propose a framework for complex query answering that decouples the Knowledge Graph embeddings from the neural set operators.
On top of the query graph, we propose the Logical Message Passing Neural Network (LMPNN) that connects the local one-hop inferences on atomic formulas to the global logical reasoning.
Our approach yields the new state-of-the-art neural CQA model.
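A toy sketch of logical message passing on a query graph: each atomic formula r(h, t) contributes a one-hop message in both directions, and variable-node states aggregate what they receive. The translation-style (TransE-like) one-hop inference below is an assumption for illustration.
```python
import torch

d, n_nodes = 16, 3
ent = torch.randn(n_nodes, d)                        # variable-node states
rel = {"r1": torch.randn(d), "r2": torch.randn(d)}   # pretrained, frozen

edges = [("r1", 0, 2), ("r2", 1, 2)]  # query graph: r1(v0, v2) AND r2(v1, v2)

def message(state, r, forward=True):
    # one-hop inference: translate the sender's state along the relation
    return state + rel[r] if forward else state - rel[r]

incoming = [torch.zeros(d) for _ in range(n_nodes)]
for r, h, t in edges:
    incoming[t] = incoming[t] + message(ent[h], r, forward=True)
    incoming[h] = incoming[h] + message(ent[t], r, forward=False)

# sum aggregation; stacking layers propagates local inferences globally
new_states = torch.stack([s + m for s, m in zip(ent, incoming)])
print(new_states.shape)  # (n_nodes, d) updated node states
```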
arXiv Detail & Related papers (2023-01-21T02:34:06Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real-world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
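A toy Monte Carlo rendition of the acquisition (not the paper's code): samples from a Gaussian-process prior stand in for the posterior over f, the algorithm is a simple argmax, and mutual information is approximated as the expected drop in entropy of the algorithm's output after a simulated observation.
```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete domain; GP prior samples represent the belief over f.
X = np.linspace(0.0, 1.0, 30)
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / 0.1 ** 2)
F = rng.multivariate_normal(np.zeros(len(X)), K + 1e-8 * np.eye(len(X)), size=200)
NOISE = 0.1

outputs = F.argmax(axis=1)           # algorithm output A(f) per sample
w = np.full(len(F), 1.0 / len(F))    # current (uniform) sample weights

def output_entropy(weights, outs):
    """Entropy of the algorithm's output distribution under sample weights."""
    p = np.zeros(len(X))
    np.add.at(p, outs, weights)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def infobax_score(i, n_sim=50):
    """MC estimate of MI between an observation at X[i] and A(f)."""
    h_prior = output_entropy(w, outputs)
    h_post = 0.0
    for j in rng.choice(len(F), size=n_sim, p=w):
        y = F[j, i] + NOISE * rng.standard_normal()   # simulated observation
        lik = np.exp(-0.5 * ((y - F[:, i]) / NOISE) ** 2)
        w_post = w * lik
        w_post /= w_post.sum()
        h_post += output_entropy(w_post, outputs)
    return h_prior - h_post / n_sim

best = max(range(len(X)), key=infobax_score)
print("next query:", X[best])
```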
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
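A sketch of the learned-heuristic idea with a stub in place of the trained network: nodes are expanded in order of path cost plus predicted heuristic, A*-style. The toy problem and predict_heuristic below are illustrative, not the paper's setup.
```python
import heapq

def predict_heuristic(state, goal):
    """Stub for a trained network's cost-to-go estimate (toy problem)."""
    return abs(goal - state)

def learned_heuristic_search(start, goal, step_costs=(1, 2)):
    frontier = [(predict_heuristic(start, goal), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)  # lowest f = g + h first
        if state == goal:
            return path
        if state in seen:
            continue
        seen.add(state)
        for step in step_costs:                      # children on the line
            nxt = state + step
            if nxt <= goal:
                f = g + step + predict_heuristic(nxt, goal)
                heapq.heappush(frontier, (f, g + step, nxt, path + [nxt]))
    return None

print(learned_heuristic_search(0, 7))
```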
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Online Model Selection for Reinforcement Learning with Function Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z)
- Complex Query Answering with Neural Link Predictors [13.872400132315988]
We propose a framework for efficiently answering complex queries on incomplete Knowledge Graphs.
We translate each query into an end-to-end differentiable objective, where the truth value of each atom is computed by a pre-trained neural predictor.
In our experiments, the proposed approach produces more accurate results than state-of-the-art methods.
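A toy version of the differentiable objective (random matrices stand in for the pretrained predictor): the two-hop conjunctive query r1(a, x) AND r2(x, y) is scored with a product t-norm while a small beam tracks candidate assignments for x.
```python
import numpy as np

n, k = 50, 5
rng = np.random.default_rng(1)
P1 = rng.uniform(size=(n, n))       # truth values for r1(h, t)
P2 = rng.uniform(size=(n, n))       # truth values for r2(h, t)

a = 0                               # anchor entity
hop1 = P1[a]                        # truth of r1(a, x) for every x
beam = np.argsort(-hop1)[:k]        # top-k candidates for the variable x
# product t-norm: score(y) = max over beam of  t(r1(a,x)) * t(r2(x,y))
scores = (hop1[beam, None] * P2[beam]).max(axis=0)
print("top answers y:", np.argsort(-scores)[:k])
```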
arXiv Detail & Related papers (2020-11-06T16:20:49Z)
- Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z)
- Gumbel-softmax-based Optimization: A Simple General Framework for Optimization Problems on Graphs [5.486093983007419]
We propose a simple, fast, and general algorithm framework based on advanced automatic differentiation techniques provided by deep learning frameworks.
High-quality solutions can be obtained in far less time than with traditional approaches.
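A small sketch of the framework on a toy MaxCut instance: each node's binary side-assignment is relaxed with Gumbel-softmax and the expected cut is maximized by gradient descent through the relaxation; the graph and hyperparameters are illustrative.
```python
import torch

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
logits = torch.zeros(n, 2, requires_grad=True)   # per-node side logits
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(300):
    tau = max(0.1, 1.0 - step / 300)             # anneal toward hard samples
    z = torch.nn.functional.gumbel_softmax(logits, tau=tau, hard=False)
    # an edge is cut when its endpoints land on different sides
    cut = sum(z[i, 0] * z[j, 1] + z[i, 1] * z[j, 0] for i, j in edges)
    loss = -cut                                   # maximize the expected cut
    opt.zero_grad()
    loss.backward()
    opt.step()

print("node sides:", logits.argmax(dim=1).tolist())
```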
arXiv Detail & Related papers (2020-04-14T14:11:00Z)