Conditional Logical Message Passing Transformer for Complex Query Answering
- URL: http://arxiv.org/abs/2402.12954v1
- Date: Tue, 20 Feb 2024 12:17:01 GMT
- Title: Conditional Logical Message Passing Transformer for Complex Query Answering
- Authors: Chongzhi Zhang, Zhiping Peng, Junhao Zheng, Qianli Ma
- Abstract summary: We propose a new state-of-the-art neural CQA model, the Conditional Logical Message Passing Transformer (CLMPT).
We empirically verified that this approach can reduce computational costs without affecting performance.
Experimental results show that CLMPT is a new state-of-the-art neural CQA model.
- Score: 24.563963177590434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complex Query Answering (CQA) over Knowledge Graphs (KGs) is a challenging
task. Given that KGs are usually incomplete, neural models are proposed to
solve CQA by performing multi-hop logical reasoning. However, most of them
cannot perform well on both one-hop and multi-hop queries simultaneously.
Recent work proposes a logical message passing mechanism based on the
pre-trained neural link predictors. While effective on both one-hop and
multi-hop queries, it ignores the difference between the constant and variable
nodes in a query graph. In addition, during the node embedding update stage,
this mechanism cannot dynamically measure the importance of different messages,
and whether it can capture the implicit logical dependencies related to a node
and received messages remains unclear. In this paper, we propose Conditional
Logical Message Passing Transformer (CLMPT), which considers the difference
between constants and variables in the case of using pre-trained neural link
predictors and performs message passing conditionally on the node type. We
empirically verified that this approach can reduce computational costs without
affecting performance. Furthermore, CLMPT uses the transformer to aggregate
received messages and update the corresponding node embedding. Through the
self-attention mechanism, CLMPT can assign adaptive weights to elements in an
input set consisting of received messages and the corresponding node and
explicitly model logical dependencies between various elements. Experimental
results show that CLMPT is a new state-of-the-art neural CQA model.
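The update step described in the abstract can be illustrated with a minimal sketch (not the authors' code): a node embedding and its received messages form an input set, a single-head self-attention layer assigns adaptive weights across the set, and the attended vector at the node's position becomes the updated embedding. The dimensions and random projection matrices below are illustrative assumptions.

```python
# Minimal single-head self-attention over {node embedding} ∪ {messages},
# mimicking the CLMPT-style node update described in the abstract.
# All sizes and weights here are toy assumptions, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                 # embedding dimension (assumed)
node = rng.normal(size=(1, d))        # current node embedding
messages = rng.normal(size=(3, d))    # messages from 3 incident edges

X = np.vstack([node, messages])       # input set: node + received messages
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)         # pairwise dependencies between elements
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
updated = (weights @ V)[0]            # attended vector at the node's slot

print(updated.shape)  # (8,)
```

In the actual model the update would also be conditioned on the node type (constants vs. variables receive different treatment), which this sketch omits for brevity.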
Related papers
- Pathformer: Recursive Path Query Encoding for Complex Logical Query Answering [20.521886749524814]
We propose a neural one-point embedding method called Pathformer based on the tree-like computation graph, i.e., query tree.
Specifically, Pathformer decomposes the query computation tree into path query sequences by branches.
This allows Pathformer to fully utilize future context information to explicitly model the complex interactions between various parts of a path query.
arXiv Detail & Related papers (2024-06-21T06:02:58Z)
- ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism to imitate the GNN for performing structured reasoning.
We also adopt an adaptation tuning strategy to adapt the model parameters with 20,000 subgraphs with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
arXiv Detail & Related papers (2023-12-30T07:18:54Z)
- Query2Triple: Unified Query Encoding for Answering Diverse Complex Queries over Knowledge Graphs [29.863085746761556]
We propose Query to Triple (Q2T), a novel approach that decouples the training for simple and complex queries.
Our proposed Q2T is not only efficient to train, but also modular, thus easily adaptable to various neural link predictors.
arXiv Detail & Related papers (2023-10-17T13:13:30Z)
- Tractable Bounding of Counterfactual Queries by Knowledge Compilation [51.47174989680976]
We discuss the problem of bounding partially identifiable queries, such as counterfactuals, in Pearlian structural causal models.
A recently proposed iterated EM scheme yields an inner approximation of those bounds by sampling the initialisation parameters.
We show how a single symbolic knowledge compilation allows us to obtain the circuit structure with symbolic parameters to be replaced by their actual values.
arXiv Detail & Related papers (2023-10-05T07:10:40Z)
- Single-Stage Visual Relationship Learning using Conditional Queries [60.90880759475021]
TraCQ is a new formulation for scene graph generation that avoids both the multi-task learning problem and the biased entity pair distribution.
We employ a DETR-based encoder-decoder design with conditional queries, which also significantly reduces the entity label space.
Experimental results show that TraCQ not only outperforms existing single-stage scene graph generation methods but also beats many state-of-the-art two-stage methods on the Visual Genome dataset.
arXiv Detail & Related papers (2023-06-09T06:02:01Z)
- Sequential Query Encoding For Complex Query Answering on Knowledge Graphs [31.40820604209387]
We propose sequential query encoding (SQE) as an alternative to encode queries for knowledge graph (KG) reasoning.
SQE first uses a search-based algorithm to linearize the computational graph to a sequence of tokens and then uses a sequence encoder to compute its vector representation.
Despite its simplicity, SQE demonstrates state-of-the-art neural query encoding performance on FB15k, FB15k-237, and NELL.
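The SQE recipe above (linearize the computation graph, then encode the token sequence) can be sketched as follows. This is an illustrative toy, not the paper's implementation: the query, vocabulary, depth-first linearization, and mean-pooling "encoder" are all assumptions standing in for the search-based algorithm and learned sequence encoder.

```python
# Toy sketch of the SQE idea: linearize a query computation tree into
# tokens, then map the token sequence to a single query vector.
import numpy as np

# toy computation tree for a 2-hop projection query: rel2(rel1(anchor))
query_tree = ("PROJECT", "rel2", ("PROJECT", "rel1", ("ENTITY", "anchor")))

def linearize(node):
    """Depth-first (prefix) traversal producing a flat token list."""
    if node[0] == "ENTITY":
        return ["(", "ENTITY", node[1], ")"]
    op, rel, child = node
    return ["(", op, rel] + linearize(child) + [")"]

tokens = linearize(query_tree)
print(tokens)
# ['(', 'PROJECT', 'rel2', '(', 'PROJECT', 'rel1',
#  '(', 'ENTITY', 'anchor', ')', ')', ')']

# a minimal stand-in "sequence encoder": embed tokens, then mean-pool
vocab = {t: i for i, t in enumerate(sorted(set(tokens)))}
rng = np.random.default_rng(0)
emb = rng.normal(size=(len(vocab), 16))
query_vec = emb[[vocab[t] for t in tokens]].mean(axis=0)
print(query_vec.shape)  # (16,)
```

A real SQE encoder would use a trained sequence model (e.g. a Transformer) in place of the mean-pooling step.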
arXiv Detail & Related papers (2023-02-25T16:33:53Z)
- Logical Message Passing Networks with One-hop Inference on Atomic Formulas [57.47174363091452]
We propose a framework for complex query answering that decouples the knowledge graph embeddings from the neural set operators.
On top of the query graph, we propose the Logical Message Passing Neural Network (LMPNN) that connects the local one-hop inferences on atomic formulas to the global logical reasoning.
Our approach yields the new state-of-the-art neural CQA model.
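A hedged sketch of the LMPNN idea described above (not the authors' code): each atomic formula r(u, v) yields a local one-hop inference, and the target node aggregates the resulting messages. Here a TransE-style translation stands in for the pretrained neural link predictor, and mean aggregation stands in for the learned global reasoning step; both are assumptions for illustration.

```python
# One-hop inference on atomic formulas, TransE-style (t ≈ h + r),
# with a variable node aggregating messages from its atoms.
import numpy as np

rng = np.random.default_rng(0)
d = 8
entity = {"anchor": rng.normal(size=d)}
relation = {"r1": rng.normal(size=d), "r2": rng.normal(size=d)}

def one_hop_message(src_emb, rel_emb, forward=True):
    """Local inference on one atomic formula under the TransE assumption."""
    return src_emb + rel_emb if forward else src_emb - rel_emb

# variable node V receives messages from two atoms: r1(anchor, V), r2(anchor, V)
msgs = [one_hop_message(entity["anchor"], relation[r]) for r in ("r1", "r2")]
v_embedding = np.mean(msgs, axis=0)   # simple mean aggregation (assumed)
print(v_embedding.shape)  # (8,)
```

In LMPNN proper, any pretrained link predictor can supply the per-atom messages, and a learned network replaces the mean when combining them.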
arXiv Detail & Related papers (2023-01-21T02:34:06Z)
- UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z)
- Neural-Symbolic Entangled Framework for Complex Query Answering [22.663509971491138]
We propose a Neural-Symbolic Entangled framework (ENeSy) for complex query answering.
It enables neural and symbolic reasoning to enhance each other, alleviating cascading errors and KG incompleteness.
ENeSy achieves state-of-the-art performance on several benchmarks, especially when the model is trained only with the link prediction task.
arXiv Detail & Related papers (2022-09-19T06:07:10Z)
- Conversational Question Reformulation via Sequence-to-Sequence Architectures and Pretrained Language Models [56.268862325167575]
This paper presents an empirical study of conversational question reformulation (CQR) with sequence-to-sequence architectures and pretrained language models (PLMs).
We leverage PLMs to address the strong token-to-token independence assumption made in the common objective, maximum likelihood estimation, for the CQR task.
We evaluate fine-tuned PLMs on the recently-introduced CANARD dataset as an in-domain task and validate the models using data from the TREC 2019 CAsT Track as an out-domain task.
arXiv Detail & Related papers (2020-04-04T11:07:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.