Complex Logical Query Answering by Calibrating Knowledge Graph Completion Models
- URL: http://arxiv.org/abs/2410.07165v1
- Date: Mon, 30 Sep 2024 06:51:50 GMT
- Title: Complex Logical Query Answering by Calibrating Knowledge Graph Completion Models
- Authors: Changyi Xiao, Yixin Cao
- Abstract summary: A complex logical query answering task involves finding answer entities for complex logical queries over incomplete knowledge graphs.
Previous research has explored the use of pre-trained knowledge graph completion (KGC) models, which can predict the missing facts in KGs.
We propose a method for calibrating KGC models, namely CKGC, which enables KGC models to adapt to answering complex logical queries.
- Score: 7.051174443949839
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complex logical query answering (CLQA) is a challenging task that involves finding answer entities for complex logical queries over incomplete knowledge graphs (KGs). Previous research has explored the use of pre-trained knowledge graph completion (KGC) models, which can predict the missing facts in KGs, to answer complex logical queries. However, KGC models are typically evaluated with ranking metrics, which may leave their prediction values poorly calibrated. In this paper, we propose a method for calibrating KGC models, namely CKGC, which enables KGC models to adapt to answering complex logical queries. Notably, CKGC is lightweight and effective: the adaptation function is simple, allowing the model to converge quickly during adaptation. The core idea of CKGC is to map the prediction values of KGC models to the range [0, 1], ensuring that values associated with true facts are close to 1, while values associated with false facts are close to 0. Through experiments on three benchmark datasets, we demonstrate that the proposed calibration method significantly boosts model performance on the CLQA task. Moreover, our approach improves CLQA performance while preserving the ranking evaluation metrics of the underlying KGC models. The code is available at https://github.com/changyi7231/CKGC.
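The abstract does not spell out the adaptation function, so the snippet below is only a minimal sketch of the calibration idea under an assumed form: a per-relation, monotonically increasing affine-plus-sigmoid map trained with binary cross-entropy so that true triples score near 1 and sampled false triples near 0. The names `ScoreCalibrator` and `calibration_loss` are hypothetical; the authors' actual formulation is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScoreCalibrator(nn.Module):
    """Hypothetical per-relation calibration head mapping raw KGC scores to (0, 1)."""

    def __init__(self, num_relations: int):
        super().__init__()
        # A positive slope (exp of log_scale) keeps the map monotonically
        # increasing, so the ranking produced by the KGC model is preserved.
        self.log_scale = nn.Parameter(torch.zeros(num_relations))
        self.bias = nn.Parameter(torch.zeros(num_relations))

    def forward(self, raw_scores: torch.Tensor, relation_ids: torch.Tensor) -> torch.Tensor:
        scale = self.log_scale[relation_ids].exp()
        return torch.sigmoid(scale * raw_scores + self.bias[relation_ids])


def calibration_loss(calibrator: ScoreCalibrator,
                     pos_scores: torch.Tensor, pos_rels: torch.Tensor,
                     neg_scores: torch.Tensor, neg_rels: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy pushing true facts toward 1 and false facts toward 0."""
    p_pos = calibrator(pos_scores, pos_rels)
    p_neg = calibrator(neg_scores, neg_rels)
    return (F.binary_cross_entropy(p_pos, torch.ones_like(p_pos))
            + F.binary_cross_entropy(p_neg, torch.zeros_like(p_neg)))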
Related papers
- Effective Instruction Parsing Plugin for Complex Logical Query Answering on Knowledge Graphs [51.33342412699939]
Knowledge Graph Query Embedding (KGQE) aims to embed First-Order Logic (FOL) queries in a low-dimensional KG space for complex reasoning over incomplete KGs.
Recent studies integrate various external information (such as entity types and relation context) to better capture the logical semantics of FOL queries.
We propose an effective Query Instruction Parsing Plugin (QIPP) that captures latent query patterns from code-like query instructions.
arXiv Detail & Related papers (2024-10-27T03:18:52Z) - Is Complex Query Answering Really Complex? [28.8459899849641]
We show that the current benchmarks for CQA are not really complex, and the way they are built distorts our perception of progress in this field.
We propose a set of more challenging benchmarks, composed of queries that require models to reason over multiple hops and better reflect the construction of real-world KGs.
arXiv Detail & Related papers (2024-10-16T13:19:03Z) - Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity [59.57065228857247]
Retrieval-augmented Large Language Models (LLMs) have emerged as a promising approach to enhancing response accuracy in several tasks, such as Question-Answering (QA).
We propose a novel adaptive QA framework that can dynamically select the most suitable strategy for (retrieval-augmented) LLMs based on query complexity.
We validate our model on a set of open-domain QA datasets, covering multiple query complexities, and show that our framework enhances the overall efficiency and accuracy of QA systems.
arXiv Detail & Related papers (2024-03-21T13:52:30Z) - Separate-and-Aggregate: A Transformer-based Patch Refinement Model for Knowledge Graph Completion [28.79628925695775]
We propose a novel Transformer-based Patch Refinement Model (PatReFormer) for Knowledge Graph Completion.
We conduct experiments on four popular KGC benchmarks, WN18RR, FB15k-237, YAGO37 and DB100K.
The experimental results show significant performance improvement over existing KGC methods on standard KGC evaluation metrics.
arXiv Detail & Related papers (2023-07-11T06:27:13Z) - ProTeCt: Prompt Tuning for Taxonomic Open Set Classification [59.59442518849203]
Few-shot adaptation methods do not fare well in the taxonomic open set (TOS) setting.
We propose a prompt tuning technique that calibrates the hierarchical consistency of model predictions.
A new Prompt Tuning for Hierarchical Consistency (ProTeCt) technique is then proposed to calibrate classification across label set granularities.
arXiv Detail & Related papers (2023-06-04T02:55:25Z) - Cardinality Estimation over Knowledge Graphs with Embeddings and Graph Neural Networks [0.552480439325792]
Cardinality Estimation over Knowledge Graphs (KGs) is crucial for query optimization.
We propose GNCE, a novel approach that leverages knowledge graph embeddings and Graph Neural Networks (GNN) to accurately predict the cardinality of conjunctive queries.
arXiv Detail & Related papers (2023-03-02T10:39:13Z) - KGxBoard: Explainable and Interactive Leaderboard for Evaluation of Knowledge Graph Completion Models [76.01814380927507]
KGxBoard is an interactive framework for performing fine-grained evaluation on meaningful subsets of the data.
In our experiments, we highlight findings uncovered with KGxBoard that would have been impossible to detect with standard averaged single-score metrics.
arXiv Detail & Related papers (2022-08-23T15:11:45Z) - GreenKGC: A Lightweight Knowledge Graph Completion Method [32.528770408502396]
GreenKGC aims to discover missing relationships between entities in knowledge graphs.
It consists of three modules: representation learning, feature pruning, and decision learning.
In low dimensions, GreenKGC can outperform SOTA methods on most datasets.
arXiv Detail & Related papers (2022-08-19T03:33:45Z) - Knowledge Base Question Answering by Case-based Reasoning over Subgraphs [81.22050011503933]
We show that our model answers queries requiring complex reasoning patterns more effectively than existing KG completion algorithms.
The proposed model outperforms or performs competitively with state-of-the-art models on several KBQA benchmarks.
arXiv Detail & Related papers (2022-02-22T01:34:35Z) - Rethinking Graph Convolutional Networks in Knowledge Graph Completion [83.25075514036183]
Graph convolutional networks (GCNs) have become increasingly popular in knowledge graph completion (KGC).
In this paper, we build upon representative GCN-based KGC models and introduce variants to find which factor of GCNs is critical in KGC.
We propose a simple yet effective framework named LTE-KGE, which equips existing KGE models with linearly transformed entity embeddings.
arXiv Detail & Related papers (2022-02-08T11:36:18Z)
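As a rough illustration of the LTE-KGE idea above (equipping an existing KGE model with linearly transformed entity embeddings), here is a hypothetical sketch; the wrapper name `LTEWrapper` and the DistMult-style base scorer are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn


class LTEWrapper(nn.Module):
    """Hypothetical sketch: pass entity embeddings through a learned linear map
    before handing them to an existing KGE scoring function (DistMult here)."""

    def __init__(self, num_entities: int, num_relations: int, dim: int):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)
        self.relation_emb = nn.Embedding(num_relations, dim)
        self.transform = nn.Linear(dim, dim, bias=False)  # linear entity transform

    def score(self, heads: torch.Tensor, relations: torch.Tensor, tails: torch.Tensor) -> torch.Tensor:
        h = self.transform(self.entity_emb(heads))
        t = self.transform(self.entity_emb(tails))
        r = self.relation_emb(relations)
        return (h * r * t).sum(dim=-1)  # DistMult-style triple score
```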
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.