Complex Logical Query Answering by Calibrating Knowledge Graph Completion Models
- URL: http://arxiv.org/abs/2410.07165v1
- Date: Mon, 30 Sep 2024 06:51:50 GMT
- Title: Complex Logical Query Answering by Calibrating Knowledge Graph Completion Models
- Authors: Changyi Xiao, Yixin Cao
- Abstract summary: Complex logical query answering (CLQA) involves finding answer entities for complex logical queries over incomplete knowledge graphs (KGs).
Previous research has explored the use of pre-trained knowledge graph completion (KGC) models, which can predict the missing facts in KGs.
We propose a method for calibrating KGC models, namely CKGC, which enables KGC models to adapt to answering complex logical queries.
- Score: 7.051174443949839
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complex logical query answering (CLQA) is a challenging task that involves finding answer entities for complex logical queries over incomplete knowledge graphs (KGs). Previous research has explored the use of pre-trained knowledge graph completion (KGC) models, which can predict missing facts in KGs, to answer complex logical queries. However, KGC models are typically evaluated with ranking metrics, so their prediction scores may not be well-calibrated. In this paper, we propose a method for calibrating KGC models, namely CKGC, which enables KGC models to adapt to answering complex logical queries. Notably, CKGC is lightweight and effective. The adaptation function is simple, allowing the model to converge quickly during the adaptation process. The core idea of CKGC is to map the prediction scores of KGC models to the range [0, 1], ensuring that scores associated with true facts are close to 1, while scores associated with false facts are close to 0. Through experiments on three benchmark datasets, we demonstrate that the proposed calibration method significantly boosts model performance on the CLQA task. Moreover, our approach improves CLQA performance while preserving the ranking evaluation metrics of KGC models. The code is available at https://github.com/changyi7231/CKGC.
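The abstract describes the calibration idea only at a high level. Below is a minimal, hypothetical sketch of such an adaptation function: a per-relation affine transform followed by a sigmoid, fitted with binary cross-entropy so that scores of observed (true) triples are pushed toward 1 and scores of sampled negatives toward 0. The names `ScoreCalibrator` and `calibration_step` and the exact functional form are illustrative assumptions, not the authors' implementation (the actual CKGC code is in the linked repository). Because the mapping is monotone in the raw score (the scale is constrained to be positive), the KGC model's ranking metrics are left unchanged, consistent with the claim in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScoreCalibrator(nn.Module):
    """Hypothetical per-relation calibration layer (illustration only).

    Maps a raw KGC score s for relation r to sigmoid(a_r * s + b_r) in [0, 1].
    Constraining a_r > 0 keeps the mapping monotone, so the ranking produced
    by the frozen KGC model is preserved while scores become probability-like.
    """

    def __init__(self, num_relations: int):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(num_relations))  # a_r = exp(log_scale) > 0
        self.bias = nn.Parameter(torch.zeros(num_relations))       # b_r

    def forward(self, raw_scores: torch.Tensor, relation_ids: torch.Tensor) -> torch.Tensor:
        a = self.log_scale[relation_ids].exp()
        b = self.bias[relation_ids]
        return torch.sigmoid(a * raw_scores + b)


def calibration_step(calibrator, optimizer, raw_scores, relation_ids, labels):
    """One adaptation step: labels are 1 for observed (true) triples, 0 for sampled negatives."""
    probs = calibrator(raw_scores, relation_ids)
    loss = F.binary_cross_entropy(probs, labels.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After calibration, the outputs can be treated as approximate truth values of individual query atoms and combined when answering complex queries, for example multiplying the calibrated scores of two atoms as a soft conjunction; the abstract does not specify which combination rules CKGC actually uses.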
Related papers
- Chain-of-Retrieval Augmented Generation [72.06205327186069]
This paper introduces an approach for training o1-like RAG models that retrieve and reason over relevant information step by step before generating the final answer.
Our proposed method, CoRAG, allows the model to dynamically reformulate the query based on the evolving state.
arXiv Detail & Related papers (2025-01-24T09:12:52Z)
- KG-CF: Knowledge Graph Completion with Context Filtering under the Guidance of Large Language Models [55.39134076436266]
KG-CF is a framework tailored for ranking-based knowledge graph completion tasks.
KG-CF leverages LLMs' reasoning abilities to filter out irrelevant contexts, achieving superior results on real-world datasets.
arXiv Detail & Related papers (2025-01-06T01:52:15Z)
- Is Complex Query Answering Really Complex? [28.8459899849641]
We show that the current benchmarks for CQA might not be as complex as we think.
We propose a set of more challenging benchmarks composed of queries that require models to reason over multiple hops.
arXiv Detail & Related papers (2024-10-16T13:19:03Z)
- Cardinality Estimation on Hyper-relational Knowledge Graphs [19.30637362876516]
Cardinality Estimation (CE) for queries over knowledge graphs (KGs) with triple facts has achieved great success.
However, existing CE methods, such as sampling and summary methods over KGs, perform unsatisfactorily on hyper-relational KGs (HKGs).
We propose a novel qualifier-aware graph neural network (GNN) model that effectively incorporates qualifier information and adaptively combines outputs from multiple GNN layers.
arXiv Detail & Related papers (2024-05-24T05:44:43Z)
- Separate-and-Aggregate: A Transformer-based Patch Refinement Model for Knowledge Graph Completion [28.79628925695775]
We propose a novel Transformer-based Patch Refinement Model (PatReFormer) for Knowledge Graph completion.
We conduct experiments on four popular KGC benchmarks, WN18RR, FB15k-237, YAGO37 and DB100K.
The experimental results show significant performance improvements over existing KGC methods on standard KGC evaluation metrics.
arXiv Detail & Related papers (2023-07-11T06:27:13Z)
- ProTeCt: Prompt Tuning for Taxonomic Open Set Classification [59.59442518849203]
Few-shot adaptation methods do not fare well in the taxonomic open set (TOS) setting.
A Prompt Tuning for Hierarchical Consistency (ProTeCt) technique is proposed to calibrate the hierarchical consistency of model predictions across label set granularities.
arXiv Detail & Related papers (2023-06-04T02:55:25Z)
- Cardinality Estimation over Knowledge Graphs with Embeddings and Graph Neural Networks [0.552480439325792]
Cardinality Estimation over Knowledge Graphs (KGs) is crucial for query optimization.
We propose GNCE, a novel approach that leverages knowledge graph embeddings and Graph Neural Networks (GNN) to accurately predict the cardinality of conjunctive queries.
arXiv Detail & Related papers (2023-03-02T10:39:13Z)
- KGxBoard: Explainable and Interactive Leaderboard for Evaluation of Knowledge Graph Completion Models [76.01814380927507]
KGxBoard is an interactive framework for performing fine-grained evaluation on meaningful subsets of the data.
In our experiments, we highlight findings uncovered with KGxBoard that would have been impossible to detect with standard averaged single-score metrics.
arXiv Detail & Related papers (2022-08-23T15:11:45Z)
- GreenKGC: A Lightweight Knowledge Graph Completion Method [32.528770408502396]
GreenKGC aims to discover missing relationships between entities in knowledge graphs.
It consists of three modules: representation learning, feature pruning, and decision learning.
In low dimensions, GreenKGC can outperform SOTA methods on most datasets.
arXiv Detail & Related papers (2022-08-19T03:33:45Z)
- Knowledge Base Question Answering by Case-based Reasoning over Subgraphs [81.22050011503933]
We show that our model answers queries requiring complex reasoning patterns more effectively than existing KG completion algorithms.
The proposed model outperforms or performs competitively with state-of-the-art models on several KBQA benchmarks.
arXiv Detail & Related papers (2022-02-22T01:34:35Z)
- Rethinking Graph Convolutional Networks in Knowledge Graph Completion [83.25075514036183]
Graph convolutional networks (GCNs) have become increasingly popular in knowledge graph completion (KGC).
In this paper, we build upon representative GCN-based KGC models and introduce variants to find which factor of GCNs is critical in KGC.
We propose a simple yet effective framework named LTE-KGE, which equips existing KGE models with linearly transformed entity embeddings.
arXiv Detail & Related papers (2022-02-08T11:36:18Z)