Adapting Neural Link Predictors for Data-Efficient Complex Query Answering
- URL: http://arxiv.org/abs/2301.12313v3
- Date: Tue, 11 Jul 2023 15:48:48 GMT
- Title: Adapting Neural Link Predictors for Data-Efficient Complex Query Answering
- Authors: Erik Arakelyan, Pasquale Minervini, Daniel Daza, Michael Cochez,
Isabelle Augenstein
- Abstract summary: We propose a parameter-efficient score \emph{adaptation} model optimised to re-calibrate neural link prediction scores for the complex query answering task.
CQD$^{\mathcal{A}}$ produces significantly more accurate results than current state-of-the-art methods.
- Score: 45.961111441411084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Answering complex queries on incomplete knowledge graphs is a challenging
task where a model needs to answer complex logical queries in the presence of
missing knowledge. Prior work in the literature has proposed to address this
problem by designing architectures trained end-to-end for the complex query
answering task with a reasoning process that is hard to interpret while
requiring data and resource-intensive training. Other lines of research have
proposed re-using simple neural link predictors to answer complex queries,
reducing the amount of training data by orders of magnitude while providing
interpretable answers. The neural link predictor used in such approaches is not
explicitly optimised for the complex query answering task, implying that its
scores are not calibrated to interact together. We propose to address these
problems via CQD$^{\mathcal{A}}$, a parameter-efficient score \emph{adaptation}
model optimised to re-calibrate neural link prediction scores for the complex
query answering task. While the neural link predictor is frozen, the adaptation
component -- which only increases the number of model parameters by $0.03\%$ --
is trained on the downstream complex query answering task. Furthermore, the
calibration component enables us to support reasoning over queries that include
atomic negations, which was previously impossible with link predictors. In our
experiments, CQD$^{\mathcal{A}}$ produces significantly more accurate results
than current state-of-the-art methods, improving from $34.4$ to $35.1$ Mean
Reciprocal Rank values averaged across all datasets and query types while using
$\leq 30\%$ of the available training query types. We further show that
CQD$^{\mathcal{A}}$ is data-efficient, achieving competitive results with only
$1\%$ of the training complex queries, and robust in out-of-domain evaluations.
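To make the adaptation idea concrete, below is a minimal PyTorch sketch of re-calibrating a frozen link predictor's scores with a tiny trainable adapter and combining the adapted atom scores with a product t-norm for a two-hop conjunctive query. The toy predictor, the two-parameter affine adapter, and the helper answer_2hop are illustrative assumptions, not the authors' implementation; the paper's actual adaptation function and training setup may differ.

```python
import torch
import torch.nn as nn

class FrozenLinkPredictor(nn.Module):
    """Toy stand-in for a pre-trained neural link predictor (DistMult-style);
    randomly initialised here and used purely for illustration."""
    def __init__(self, n_entities, n_relations, dim=32):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        for p in self.parameters():
            p.requires_grad_(False)  # the link predictor stays frozen

    def score(self, heads, rels):
        # Score every entity as the tail of (h, r, ?), squashed into [0, 1].
        logits = (self.ent(heads) * self.rel(rels)) @ self.ent.weight.t()
        return torch.sigmoid(logits)

class ScoreAdapter(nn.Module):
    """Hypothetical parameter-efficient adapter: a learned affine re-calibration
    of the frozen predictor's scores (two parameters in this sketch)."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, scores):
        return torch.clamp(self.alpha * scores + self.beta, 0.0, 1.0)

def answer_2hop(predictor, adapter, anchor, r1, r2):
    """Score all candidates x for the query  ?x : r1(a, v) AND r2(v, x),
    combining adapted atom scores with a product t-norm and maximising over v."""
    s1 = adapter(predictor.score(anchor, r1)).squeeze(0)  # [n_entities] scores for v
    v = torch.arange(s1.shape[0])
    s2 = adapter(predictor.score(v, r2.expand(len(v))))   # [n_entities, n_entities]
    return (s1.unsqueeze(-1) * s2).max(dim=0).values      # best intermediate v per x

predictor = FrozenLinkPredictor(n_entities=100, n_relations=10)
adapter = ScoreAdapter()  # only these few parameters would be trained on complex queries
scores = answer_2hop(predictor, adapter,
                     anchor=torch.tensor([3]), r1=torch.tensor([1]), r2=torch.tensor([4]))
print(scores.shape)  # torch.Size([100]): one truth value per candidate answer
```

In this sketch only the adapter's parameters would be trained on downstream complex queries, which is what keeps the added parameter count negligible; atomic negation could, for instance, be handled by taking one minus the adapted score, although the exact mechanism used in the paper is not reproduced here.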
Related papers
- Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity [59.57065228857247]
Retrieval-augmented Large Language Models (LLMs) have emerged as a promising approach to enhancing response accuracy in several tasks, such as Question-Answering (QA).
We propose a novel adaptive QA framework that can dynamically select the most suitable strategy for (retrieval-augmented) LLMs based on the query complexity.
We validate our model on a set of open-domain QA datasets covering multiple query complexities, and show that our approach enhances the overall efficiency and accuracy of QA systems.
arXiv Detail & Related papers (2024-03-21T13:52:30Z)
- Meta Operator for Complex Query Answering on Knowledge Graphs [58.340159346749964]
We argue that different logical operator types, rather than the different complex query types, are the key to improving generalizability.
We propose a meta-learning algorithm to learn the meta-operators with limited data and adapt them to different instances of operators under various complex queries.
Empirical results show that learning meta-operators is more effective than learning original CQA or meta-CQA models.
arXiv Detail & Related papers (2024-03-15T08:54:25Z)
- Type-based Neural Link Prediction Adapter for Complex Query Answering [2.1098688291287475]
We propose TypE-based Neural Link Prediction Adapter (TENLPA), a novel model that constructs type-based entity-relation graphs.
In order to effectively combine type information with complex logical queries, an adaptive learning mechanism is introduced.
Experiments on 3 standard datasets show that the TENLPA model achieves state-of-the-art performance on complex query answering.
arXiv Detail & Related papers (2024-01-29T10:54:28Z)
- Query2Triple: Unified Query Encoding for Answering Diverse Complex Queries over Knowledge Graphs [29.863085746761556]
We propose Query to Triple (Q2T), a novel approach that decouples the training for simple and complex queries.
Our proposed Q2T is not only efficient to train, but also modular, thus easily adaptable to various neural link predictors.
arXiv Detail & Related papers (2023-10-17T13:13:30Z)
- Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors [58.340159346749964]
We propose a new neural-symbolic method to support end-to-end learning using complex queries with provable reasoning capability.
We develop a new dataset containing ten new types of queries with features that have never been considered.
Our method significantly outperforms previous methods on the new dataset while also surpassing them on the existing dataset.
arXiv Detail & Related papers (2023-04-14T11:35:35Z)
- Toward Unsupervised Realistic Visual Question Answering [70.67698100148414]
We study the problem of realistic VQA (RVQA), where a model has to reject unanswerable questions (UQs) and answer answerable ones (AQs).
We first point out 2 drawbacks in current RVQA research, where (1) datasets contain too many unchallenging UQs and (2) a large number of annotated UQs are required for training.
We propose a new testing dataset, RGQA, which combines AQs from an existing VQA dataset with around 29K human-annotated UQs.
This combines pseudo UQs obtained by randomly pairing images and questions, with an
arXiv Detail & Related papers (2023-03-09T06:58:29Z)
- Logical Message Passing Networks with One-hop Inference on Atomic Formulas [57.47174363091452]
We propose a framework for complex query answering that decouples the Knowledge Graph embeddings from the neural set operators.
On top of the query graph, we propose the Logical Message Passing Neural Network (LMPNN) that connects the local one-hop inferences on atomic formulas to the global logical reasoning.
Our approach yields the new state-of-the-art neural CQA model.
arXiv Detail & Related papers (2023-01-21T02:34:06Z)
- Neural-Symbolic Entangled Framework for Complex Query Answering [22.663509971491138]
We propose a Neural-Symbolic Entangled framework (ENeSy) for complex query answering.
It enables neural and symbolic reasoning to enhance each other, alleviating cascading errors and KG incompleteness.
ENeSy achieves state-of-the-art performance on several benchmarks, especially in the setting where the model is trained only on the link prediction task.
arXiv Detail & Related papers (2022-09-19T06:07:10Z)
- Complex Query Answering with Neural Link Predictors [13.872400132315988]
We propose a framework for efficiently answering complex queries on incomplete Knowledge Graphs.
We translate each query into an end-to-end differentiable objective, where the truth value of each atom is computed by a pre-trained neural predictor.
In our experiments, the proposed approach produces more accurate results than state-of-the-art methods (a minimal sketch of this differentiable formulation appears after this list).
arXiv Detail & Related papers (2020-11-06T16:20:49Z)
- Less is More: Data-Efficient Complex Question Answering over Knowledge Bases [26.026065844896465]
We propose the Neural-Symbolic Complex Question Answering (NS-CQA) model, a data-efficient reinforcement learning framework for complex question answering.
Our framework consists of a neural generator that transforms a natural-language question into a sequence of primitive actions, and a symbolic executor.
Our model is evaluated on two datasets: CQA, a recent large-scale complex question answering dataset, and WebQuestionsSP, a multi-hop question answering dataset.
arXiv Detail & Related papers (2020-10-29T18:42:44Z)
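Referring back to the Complex Query Answering with Neural Link Predictors entry above, the following is a complementary sketch of what translating a query into an end-to-end differentiable objective can look like: the existential variable is relaxed to a free embedding and optimised by gradient ascent under a product t-norm of atom scores. The DistMult-style scorer, learning rate, and number of optimisation steps are illustrative assumptions rather than the original method's exact configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_entities, n_relations, dim = 100, 10, 32
ent = nn.Embedding(n_entities, dim)   # stands in for a pre-trained link predictor's
rel = nn.Embedding(n_relations, dim)  # entity and relation embeddings (kept fixed)
for p in list(ent.parameters()) + list(rel.parameters()):
    p.requires_grad_(False)

def atom_score(h_emb, r_emb, t_emb):
    """Truth value in [0, 1] of a single atom r(h, t) under a toy DistMult-style scorer."""
    return torch.sigmoid((h_emb * r_emb * t_emb).sum(-1))

# Query: ?x . r1(a, v) AND r2(v, x), where v is the existential variable.
a, r1, r2 = torch.tensor(3), torch.tensor(1), torch.tensor(4)
v_emb = ent.weight.mean(0).clone().requires_grad_(True)  # relaxed variable embedding
opt = torch.optim.Adam([v_emb], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    s1 = atom_score(ent(a), rel(r1), v_emb)       # scalar: r1(a, v)
    s2 = atom_score(v_emb, rel(r2), ent.weight)   # [n_entities]: r2(v, x) for every x
    objective = (s1 * s2).max()                   # product t-norm, best candidate answer
    (-objective).backward()                       # maximise by gradient ascent
    opt.step()

# Final truth value of the whole query for each candidate answer x.
answer_scores = (atom_score(ent(a), rel(r1), v_emb) *
                 atom_score(v_emb, rel(r2), ent.weight)).detach()
print(answer_scores.topk(5).indices)  # five highest-scoring candidate answers
```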