Integrating Subgraph-aware Relation and Direction Reasoning for Question Answering
- URL: http://arxiv.org/abs/2104.00218v1
- Date: Thu, 1 Apr 2021 03:04:36 GMT
- Title: Integrating Subgraph-aware Relation and Direction Reasoning for Question Answering
- Authors: Xu Wang, Shuai Zhao, Bo Cheng, Jiale Han, Yingting Li, Hao Yang, Ivan
Sekulic, Guoshun Nan
- Abstract summary: Question Answering (QA) models over Knowledge Bases (KBs) are capable of providing more precise answers by utilizing relation information among entities.
We propose a novel neural model, Relation-updated Direction-guided Answer Selector (RDAS), which converts relations in each subgraph to additional nodes to learn structure information.
- Score: 26.254684189099496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Question Answering (QA) models over Knowledge Bases (KBs) are capable of
providing more precise answers by utilizing relation information among
entities. Although effective, most of these models solely rely on fixed
relation representations to obtain answers for different question-related KB
subgraphs. Hence, the rich structured information of these subgraphs may be
overlooked by the relation representation vectors. Meanwhile, the direction
information of reasoning, which has been proven effective for the answer
prediction on graphs, has not been fully explored in existing work. To address
these challenges, we propose a novel neural model, Relation-updated
Direction-guided Answer Selector (RDAS), which converts relations in each
subgraph to additional nodes to learn structure information. Additionally, we
utilize direction information to enhance the reasoning ability. Experimental
results show that our model yields substantial improvements on two widely used
datasets.
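The abstract names two mechanisms: converting the relations of each question-related subgraph into additional nodes, and using direction information during reasoning. The Python sketch below illustrates only the first idea as a generic Levi-graph-style transformation; the function name, node labels, and use of networkx are illustrative assumptions, not the RDAS implementation.

```python
# Illustrative only: turn each (head, relation, tail) triple of a KB subgraph into
# head -> relation-node -> tail, so a graph neural network could update relation
# representations per subgraph. Names are hypothetical, not the paper's code.
import networkx as nx

def relations_to_nodes(triples):
    """Convert relation edges into additional relation nodes, keeping edge direction."""
    g = nx.DiGraph()
    for i, (head, rel, tail) in enumerate(triples):
        rel_node = f"rel:{rel}:{i}"          # one extra node per relation occurrence
        g.add_node(head, kind="entity")
        g.add_node(tail, kind="entity")
        g.add_node(rel_node, kind="relation", label=rel)
        g.add_edge(head, rel_node)           # direction of reasoning is preserved
        g.add_edge(rel_node, tail)
    return g

subgraph = [("Paris", "capital_of", "France"),
            ("France", "located_in", "Europe")]
print(relations_to_nodes(subgraph).nodes(data=True))
```

Keeping the original edge direction in the transformed graph is what makes it possible to exploit direction information later, for example by propagating messages only along (or only against) the edges.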
Related papers
- G-SAP: Graph-based Structure-Aware Prompt Learning over Heterogeneous Knowledge for Commonsense Reasoning [8.02547453169677]
We propose a novel Graph-based Structure-Aware Prompt Learning Model for commonsense reasoning, named G-SAP.
In particular, an evidence graph is constructed by integrating multiple knowledge sources, i.e. ConceptNet, Wikipedia, and Cambridge Dictionary.
The results reveal a significant advancement over existing models, notably a 6.12% improvement over the SoTA LM+GNNs model on the OpenbookQA dataset.
arXiv Detail & Related papers (2024-05-09T08:28:12Z)
- Exploiting Abstract Meaning Representation for Open-Domain Question Answering [18.027908933572203]
We utilize Abstract Meaning Representation (AMR) graphs to assist the model in understanding complex semantic information.
Results from Natural Questions (NQ) and TriviaQA (TQ) demonstrate that our GST method can significantly improve performance.
arXiv Detail & Related papers (2023-05-26T16:00:16Z)
- Methods for Recovering Conditional Independence Graphs: A Survey [2.2721854258621064]
Conditional Independence (CI) graphs are used to gain insights about feature relationships.
We list out different methods and study the advances in techniques developed to recover CI graphs; one classical recovery method is sketched after this entry.
arXiv Detail & Related papers (2022-11-13T06:11:38Z)
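As a concrete illustration of the kind of technique such a survey covers, the sketch below recovers a conditional independence graph for Gaussian data with scikit-learn's graphical lasso; the synthetic data, regularization strength, and threshold are assumptions made for the example.

```python
# Minimal sketch, assuming Gaussian data: recover a CI graph with the graphical
# lasso. Nonzero off-diagonal entries of the estimated precision matrix indicate
# conditional dependence between the corresponding features.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # 200 samples, 5 features (synthetic, independent)

model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_

# Keep an edge (i, j) only if features i and j appear conditionally dependent.
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)
         if abs(precision[i, j]) > 1e-4]
print(edges)                          # expected to be (near) empty for independent data
```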
- On Neural Architecture Inductive Biases for Relational Tasks [76.18938462270503]
We introduce a simple architecture based on similarity-distribution scores, which we name Compositional Relational Network (CoRelNet).
We find that simple architectural choices can outperform existing models in out-of-distribution generalization.
arXiv Detail & Related papers (2022-06-09T16:24:01Z)
- Generative Relation Linking for Question Answering over Knowledge Bases [12.778133758613773]
We propose a novel approach for relation linking, framing it as a generative problem.
We extend such sequence-to-sequence models with the idea of infusing structured data from the target knowledge base.
We train the model with the aim to generate a structured output consisting of a list of argument-relation pairs, enabling a knowledge validation step.
arXiv Detail & Related papers (2021-08-16T20:33:43Z)
- Graph Information Bottleneck [77.21967740646784]
Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features.
Inheriting from the general Information Bottleneck (IB), GIB aims to learn the minimal sufficient representation for a given task.
We show that our proposed models are more robust than state-of-the-art graph defense models.
arXiv Detail & Related papers (2020-10-24T07:13:00Z)
- Learning from Context or Names? An Empirical Study on Neural Relation Extraction [112.06614505580501]
We study the effect of two main information sources in text: textual context and entity mentions (names).
We propose an entity-masked contrastive pre-training framework for relation extraction (RE); a toy sketch of entity masking follows this entry.
Our framework can improve the effectiveness and robustness of neural models in different RE scenarios.
arXiv Detail & Related papers (2020-10-05T11:21:59Z)
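To make the idea of entity masking concrete, the toy sketch below blanks out entity mentions so that a relation-extraction encoder has to rely on textual context rather than entity names; the mask token and helper function are assumptions, not the paper's exact pre-training procedure.

```python
# Toy sketch of entity masking (not the paper's exact procedure): replace entity
# mention spans with a placeholder token before encoding, forcing the model to
# rely on context rather than names. The "[BLANK]" token is an assumption.
def mask_entities(sentence, entity_spans, mask_token="[BLANK]"):
    """Replace each (start, end) character span with the mask token,
    working right-to-left so earlier offsets stay valid."""
    for start, end in sorted(entity_spans, reverse=True):
        sentence = sentence[:start] + mask_token + sentence[end:]
    return sentence

text = "Steve Jobs co-founded Apple in 1976."
spans = [(0, 10), (22, 27)]   # character spans of "Steve Jobs" and "Apple"
print(mask_entities(text, spans))
# -> [BLANK] co-founded [BLANK] in 1976.
```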
- Type-augmented Relation Prediction in Knowledge Graphs [65.88395564516115]
We propose a type-augmented relation prediction (TaRP) method, where we apply both the type information and instance-level information for relation prediction.
Our proposed TaRP method achieves significantly better performance than state-of-the-art methods on four benchmark datasets.
arXiv Detail & Related papers (2020-09-16T21:14:18Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
- Neural Relation Prediction for Simple Question Answering over Knowledge Graph [0.0]
We propose an instance-based method to capture the underlying relation of a question; to this end, we detect matching paraphrases of a new question.
Our experiments on the SimpleQuestions dataset show that the proposed model achieves better accuracy compared to the state-of-the-art relation extraction models.
arXiv Detail & Related papers (2020-02-18T16:41:24Z)
- Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs [96.73259297063619]
We consider a novel formulation, zero-shot learning, to free models from this cumbersome curation.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains.
arXiv Detail & Related papers (2020-01-08T01:19:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.