PipeNet: Question Answering with Semantic Pruning over Knowledge Graphs
- URL: http://arxiv.org/abs/2401.17536v2
- Date: Fri, 17 May 2024 01:06:46 GMT
- Title: PipeNet: Question Answering with Semantic Pruning over Knowledge Graphs
- Authors: Ying Su, Jipeng Zhang, Yangqiu Song, Tong Zhang
- Abstract summary: We propose a grounding-pruning-reasoning pipeline to prune noisy computation nodes.
We also propose a graph attention network (GAT) based module to reason with the subgraph data.
- Score: 56.5262495514563
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: It is well acknowledged that incorporating explicit knowledge graphs (KGs) can benefit question answering. Existing approaches typically follow a grounding-reasoning pipeline in which entity nodes are first grounded for the query (question and candidate answers), and then a reasoning module reasons over the matched multi-hop subgraph for answer prediction. Although the pipeline largely alleviates the issue of extracting essential information from giant KGs, efficiency is still an open challenge when scaling up hops in grounding the subgraphs. In this paper, we aim to find semantically related entity nodes in the subgraph to improve the efficiency of graph reasoning with the KG. We propose a grounding-pruning-reasoning pipeline to prune noisy nodes, remarkably reducing the computation cost and memory usage while still obtaining decent subgraph representations. In detail, the pruning module first scores concept nodes based on the dependency distance between matched spans and then prunes the nodes according to score ranks. To facilitate the evaluation of pruned subgraphs, we also propose a graph attention network (GAT) based module to reason with the subgraph data. Experimental results on CommonsenseQA and OpenBookQA demonstrate the effectiveness of our method.
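The pruning step can be illustrated with a minimal sketch: assuming the grounding step already returns concept nodes matched to token spans of the query, plus a dependency parse of the question, each concept is scored by the dependency distance between its matched span and an anchor span, and only the top-ranked concepts are kept for the GAT-based reasoning module. The data structures, function names, and keep ratio below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of dependency-distance scoring and
# rank-based pruning of grounded concept nodes, as described in the abstract.
from collections import deque

def dependency_distance(dep_edges, src_token, dst_token):
    """Shortest-path length between two token indices in a dependency tree,
    treated as an undirected graph (plain BFS)."""
    adj = {}
    for head, child in dep_edges:
        adj.setdefault(head, set()).add(child)
        adj.setdefault(child, set()).add(head)
    queue, seen = deque([(src_token, 0)]), {src_token}
    while queue:
        node, dist = queue.popleft()
        if node == dst_token:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return float("inf")  # disconnected tokens (should not occur within one parse)

def prune_concept_nodes(matched_concepts, dep_edges, anchor_token, keep_ratio=0.5):
    """Keep the concepts whose matched span is syntactically closest to the anchor.

    matched_concepts: list of (concept_id, span_token_index) pairs produced by
    the grounding step; keep_ratio (an assumed hyperparameter) controls how
    many concepts survive pruning before GNN reasoning.
    """
    scored = [
        (cid, dependency_distance(dep_edges, tok, anchor_token))
        for cid, tok in matched_concepts
    ]
    scored.sort(key=lambda x: x[1])           # smaller distance = more relevant
    n_keep = max(1, int(len(scored) * keep_ratio))
    return [cid for cid, _ in scored[:n_keep]]

# Toy usage: dependency edges as (head, child) token-index pairs for a short
# question, three grounded concepts, keeping roughly the closest two thirds.
edges = [(1, 0), (1, 3), (3, 2), (3, 4)]
concepts = [("river", 0), ("bank", 2), ("money", 4)]
print(prune_concept_nodes(concepts, edges, anchor_token=1, keep_ratio=0.67))
```

In this toy setting the concepts closest to the anchor in the parse tree survive first, mirroring the score-rank pruning of noisy nodes before the GAT-based reasoning module.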
Related papers
- EiG-Search: Generating Edge-Induced Subgraphs for GNN Explanation in Linear Time [30.44473492282072]
Most existing subgraph-level explainers face efficiency challenges in explaining Graph Neural Networks (GNNs) due to complex search processes.
In this paper, we reveal that inducing subgraph explanations by edges is more comprehensive than other subgraph inducing techniques.
We employ an efficient linear-time search algorithm over the edge-induced subgraphs, where the edges are ranked by an enhanced gradient-based importance.
arXiv Detail & Related papers (2024-05-02T21:55:12Z)
- Careful Selection and Thoughtful Discarding: Graph Explicit Pooling Utilizing Discarded Nodes [53.08068729187698]
We introduce a novel Graph Explicit Pooling (GrePool) method, which selects nodes by explicitly leveraging the relationships between the nodes and final representation vectors crucial for classification.
We conduct comprehensive experiments across 12 widely used datasets to validate our proposed method's effectiveness.
arXiv Detail & Related papers (2023-11-21T14:44:51Z)
- Is Rewiring Actually Helpful in Graph Neural Networks? [11.52174067809364]
We propose an evaluation setting based on message-passing models that do not require training to compute node and graph representations.
We perform a systematic experimental comparison on real-world node and graph classification tasks, showing that rewiring the underlying graph rarely confers a practical benefit for message-passing.
arXiv Detail & Related papers (2023-05-31T10:12:23Z)
- Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing a node-correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of MatchExplainer, which outperforms all state-of-the-art parametric baselines by significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z)
- Dynamic Relevance Graph Network for Knowledge-Aware Question Answering [22.06211725256875]
This work investigates the challenge of learning and reasoning for Commonsense Question Answering given an external source of knowledge.
We propose a novel graph neural network architecture called Dynamic Relevance Graph Network (DRGN).
DRGN operates on a given KG subgraph based on the question and answer entities and uses the relevance scores between the nodes to establish new edges.
arXiv Detail & Related papers (2022-09-20T18:52:05Z)
- Reasoning Graph Networks for Kinship Verification: from Star-shaped to Hierarchical [85.0376670244522]
We investigate the problem of facial kinship verification by learning hierarchical reasoning graph networks.
We develop a Star-shaped Reasoning Graph Network (S-RGN) and a Hierarchical Reasoning Graph Network (H-RGN) to exploit more powerful and flexible reasoning capacity.
arXiv Detail & Related papers (2021-09-06T03:16:56Z)
- On Explainability of Graph Neural Networks via Subgraph Explorations [48.56936527708657]
We propose a novel method, known as SubgraphX, to explain graph neural networks (GNNs).
Our work represents the first attempt to explain GNNs via identifying subgraphs explicitly.
arXiv Detail & Related papers (2021-02-09T22:12:26Z)
- Inverse Graph Identification: Can We Identify Node Labels Given Graph Labels? [89.13567439679709]
Graph Identification (GI) has long been researched in graph learning and is essential in certain applications.
This paper defines a novel problem dubbed Inverse Graph Identification (IGI).
We propose a simple yet effective method that performs node-level message passing with a Graph Attention Network (GAT) under the protocol of GI.
arXiv Detail & Related papers (2020-07-12T12:06:17Z)