GNN is a Counter? Revisiting GNN for Question Answering
- URL: http://arxiv.org/abs/2110.03192v1
- Date: Thu, 7 Oct 2021 05:44:52 GMT
- Title: GNN is a Counter? Revisiting GNN for Question Answering
- Authors: Kuan Wang, Yuyu Zhang, Diyi Yang, Le Song and Tao Qin
- Abstract summary: State-of-the-art Question Answering (QA) systems commonly use pre-trained language models (LMs) to access the knowledge encoded in them, together with elaborately designed modules based on Graph Neural Networks (GNNs) that perform reasoning over knowledge graphs (KGs). Our work reveals that existing knowledge-aware GNN modules may only carry out simple reasoning such as counting.
- Score: 105.8253992750951
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Question Answering (QA) has been a long-standing research topic in AI and NLP
fields, and a wealth of studies have been conducted to attempt to equip QA
systems with human-level reasoning capability. To approximate the complicated
human reasoning process, state-of-the-art QA systems commonly use pre-trained
language models (LMs) to access knowledge encoded in LMs together with
elaborately designed modules based on Graph Neural Networks (GNNs) to perform
reasoning over knowledge graphs (KGs). However, many problems remain open
regarding the reasoning functionality of these GNN-based modules. Can these
GNN-based modules really perform a complex reasoning process? Are they under-
or over-complicated for QA? To open the black box of GNN and investigate these
problems, we dissect state-of-the-art GNN modules for QA and analyze their
reasoning capability. We discover that even a very simple graph neural counter
can outperform all the existing GNN modules on CommonsenseQA and OpenBookQA,
two popular QA benchmark datasets which heavily rely on knowledge-aware
reasoning. Our work reveals that existing knowledge-aware GNN modules may only
carry out some simple reasoning such as counting. It remains a challenging open
problem to build comprehensive reasoning modules for knowledge-powered QA.
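To make the paper's central claim concrete, here is a minimal sketch of what a "graph neural counter" can amount to: one round of sum-aggregation message passing where each message is a one-hot relation vector, so summing messages is literally counting. This is an illustrative toy, not the authors' actual implementation; the graph encoding and function names are assumptions.

```python
from collections import defaultdict

def graph_neural_counter(edges, num_relations):
    """One round of sum-aggregation message passing where each message is a
    one-hot relation vector. Summing one-hot messages is just counting, so
    each node's feature becomes a tally of its incident relation types."""
    counts = defaultdict(lambda: [0] * num_relations)
    for src, rel, dst in edges:
        counts[dst][rel] += 1  # sum aggregation of one-hot messages
    return dict(counts)

# toy KG fragment: (source node, relation id, destination node)
edges = [(0, 0, 2), (1, 0, 2), (3, 1, 2)]
features = graph_neural_counter(edges, num_relations=2)
# node 2 received two relation-0 edges and one relation-1 edge
```

A downstream scorer (e.g. a small MLP over these count vectors) is all that is then needed to answer questions whose evidence reduces to "how many supporting edges does this candidate have".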
Related papers
- From Graphs to Qubits: A Critical Review of Quantum Graph Neural Networks [56.51893966016221]
Quantum Graph Neural Networks (QGNNs) represent a novel fusion of quantum computing and Graph Neural Networks (GNNs)
This paper critically reviews the state-of-the-art in QGNNs, exploring various architectures.
We discuss their applications across diverse fields such as high-energy physics, molecular chemistry, finance and earth sciences, highlighting the potential for quantum advantage.
arXiv Detail & Related papers (2024-08-12T22:53:14Z)
- Systematic Reasoning About Relational Domains With Graph Neural Networks [17.49288661342947]
We focus on reasoning in relational domains, where the use of Graph Neural Networks (GNNs) seems like a natural choice.
Previous work on reasoning with GNNs has shown that such models tend to fail when presented with test examples that require longer inference chains than those seen during training.
This suggests that GNNs lack the ability to generalize from training examples in a systematic way.
arXiv Detail & Related papers (2024-07-24T16:17:15Z)
- GNN-RAG: Graph Neural Retrieval for Large Language Model Reasoning [21.057810495833063]
We introduce GNN-RAG, a novel method for combining language understanding abilities of LLMs with the reasoning abilities of GNNs in a retrieval-augmented generation (RAG) style.
In our GNN-RAG framework, the GNN acts as a dense subgraph reasoner to extract useful graph information.
Experiments show that GNN-RAG achieves state-of-the-art performance in two widely used KGQA benchmarks.
arXiv Detail & Related papers (2024-05-30T15:14:24Z)
- ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism to imitate the GNN for performing structured reasoning.
We also adopt an adaptation tuning strategy to adapt the model parameters with 20,000 subgraphs with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
arXiv Detail & Related papers (2023-12-30T07:18:54Z)
- Logical Message Passing Networks with One-hop Inference on Atomic Formulas [57.47174363091452]
We propose a framework for complex query answering that decouples Knowledge Graph embeddings from neural set operators.
On top of the query graph, we propose the Logical Message Passing Neural Network (LMPNN) that connects the local one-hop inferences on atomic formulas to the global logical reasoning.
Our approach yields the new state-of-the-art neural CQA model.
arXiv Detail & Related papers (2023-01-21T02:34:06Z)
- RELIANT: Fair Knowledge Distillation for Graph Neural Networks [39.22568244059485]
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks.
Knowledge Distillation (KD) is a common solution to compress GNNs.
We propose a principled framework named RELIANT to mitigate the bias exhibited by the student model.
arXiv Detail & Related papers (2023-01-03T15:21:24Z)
- Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs [71.93227401463199]
This paper pinpoints the major source of GNNs' performance gain to their intrinsic generalization capability, by introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training.
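The PMLP idea can be sketched in a few lines: train the weights as a plain MLP with no graph in the loop, then at inference interleave parameter-free neighbor aggregation before each layer. The propagation rule below (mean over self plus neighbors) and all names are illustrative assumptions, not the paper's exact formulation.

```python
def propagate(features, adj):
    """Parameter-free mean aggregation over each node and its neighbors."""
    out = {}
    for v, neigh in adj.items():
        group = [v] + neigh
        dim = len(features[v])
        out[v] = [sum(features[u][d] for u in group) / len(group)
                  for d in range(dim)]
    return out

def mlp_layer(features, weight):
    """A single linear layer, shared verbatim by the MLP and the PMLP."""
    return {v: [sum(x * w for x, w in zip(feat, col)) for col in weight]
            for v, feat in features.items()}

# weights are trained as a plain MLP (graph never seen during training)...
weight = [[1.0, 0.0], [0.0, 1.0]]  # identity layer, for illustration only
feats = {0: [1.0, 0.0], 1: [0.0, 1.0]}
adj = {0: [1], 1: [0]}

# ...but at inference, PMLP inserts propagation before the layer
pmlp_out = mlp_layer(propagate(feats, adj), weight)
```

Because the propagation step has no trainable parameters, the PMLP trains exactly as fast as an MLP while inheriting the graph-aware inference behavior of a GNN.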
arXiv Detail & Related papers (2022-12-18T08:17:32Z)
- Neural-Symbolic Models for Logical Queries on Knowledge Graphs [17.290758383645567]
We propose Graph Neural Network Query Executor (GNN-QE), a neural-symbolic model that enjoys the advantages of both worlds.
GNN-QE decomposes a complex FOL query into relation projections and logical operations over fuzzy sets.
Experiments on 3 datasets show that GNN-QE significantly improves over previous state-of-the-art models in answering FOL queries.
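The logical operations over fuzzy sets mentioned above can be sketched with standard fuzzy-logic connectives; GNN-QE's exact operator definitions may differ, so treat the product t-norm and the example scores below as illustrative assumptions.

```python
def fuzzy_and(a, b):
    """Product t-norm: conjunction of two fuzzy sets over the same entities."""
    return [x * y for x, y in zip(a, b)]

def fuzzy_or(a, b):
    """Probabilistic sum: disjunction."""
    return [x + y - x * y for x, y in zip(a, b)]

def fuzzy_not(a):
    """Complement: negation."""
    return [1.0 - x for x in a]

# membership scores of four candidate entities for two sub-queries,
# e.g. produced by relation projections of a GNN (made-up values)
won_award = [0.9, 0.5, 0.8, 0.0]
is_person = [1.0, 0.5, 0.0, 0.5]

# conjunctive query: entities that won an award AND are a person
answer = fuzzy_and(won_award, is_person)  # -> [0.9, 0.25, 0.0, 0.0]
```

Keeping intermediate results as fuzzy sets is what makes the model "symbolic enough" to compose arbitrary first-order-logic queries while the GNN handles each relation projection.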
arXiv Detail & Related papers (2022-05-16T18:39:04Z)
- Question Answering over Knowledge Bases by Leveraging Semantic Parsing and Neuro-Symbolic Reasoning [73.00049753292316]
We propose a semantic parsing and reasoning-based Neuro-Symbolic Question Answering(NSQA) system.
NSQA achieves state-of-the-art performance on QALD-9 and LC-QuAD 1.0.
arXiv Detail & Related papers (2020-12-03T05:17:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.