Conversational Question Answering over Knowledge Graphs with Transformer
and Graph Attention Networks
- URL: http://arxiv.org/abs/2104.01569v1
- Date: Sun, 4 Apr 2021 09:21:50 GMT
- Title: Conversational Question Answering over Knowledge Graphs with Transformer
and Graph Attention Networks
- Authors: Endri Kacupaj, Joan Plepi, Kuldeep Singh, Harsh Thakkar, Jens Lehmann,
Maria Maleshkova
- Abstract summary: This paper addresses the task of (complex) conversational question answering over a knowledge graph.
We propose LASAGNE (muLti-task semAntic parSing with trAnsformer and Graph atteNtion nEtworks).
We show that LASAGNE improves the F1-score on eight out of ten question types.
- Score: 4.550927535779247
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper addresses the task of (complex) conversational question answering
over a knowledge graph. For this task, we propose LASAGNE (muLti-task semAntic
parSing with trAnsformer and Graph atteNtion nEtworks). It is the first
approach that employs a transformer architecture extended with Graph
Attention Networks for multi-task neural semantic parsing. LASAGNE uses a
transformer model for generating the base logical forms, while the Graph
Attention model is used to exploit correlations between (entity) types and
predicates to produce node representations. LASAGNE also includes a novel
entity recognition module which detects, links, and ranks all relevant entities
in the question context. We evaluate LASAGNE on a standard dataset for complex
sequential question answering, on which it outperforms existing baseline
averages on all question types. Specifically, we show that LASAGNE improves the
F1-score on eight out of ten question types; in some cases, the increase in
F1-score is more than 20% compared to the state of the art.
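The abstract describes a two-part architecture: a Graph Attention Network produces node representations over the knowledge graph's entity types and predicates, and a transformer generates the logical forms. As a rough illustration of the graph-attention half only, here is a minimal single-head GAT layer in PyTorch; this is a hedged sketch, not the authors' implementation, and all class and parameter names are invented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """Minimal single-head graph attention layer (Velickovic et al., 2018)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared node projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # scores a concatenated node pair

    def forward(self, h, adj):
        # h: (N, in_dim) node features; adj: (N, N) 0/1 mask, self-loops included
        z = self.W(h)                                    # (N, out_dim)
        n = z.size(0)
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1))      # (N, N) raw attention logits
        e = e.masked_fill(adj == 0, float('-inf'))       # attend only along graph edges
        return torch.softmax(e, dim=-1) @ z              # attention-weighted neighbours
```

In LASAGNE's setting the nodes would correspond to (entity) types and predicates, and the resulting embeddings would be consumed by the transformer decoder; that wiring is omitted here.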
Related papers
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
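GSPT's summary mentions sampling node contexts through random walks. A minimal sketch of uniform random-walk sampling over an adjacency-list graph follows; the function name, walk length, and number of walks are illustrative assumptions, not details from the paper.

```python
import random

def sample_context(adj, start, walk_length=8, num_walks=4):
    """Sample node contexts via uniform random walks.

    adj: dict mapping each node to a list of its neighbours.
    Each returned walk is a node sequence that could be fed to a
    transformer as the 'sentence' for the start node.
    """
    walks = []
    for _ in range(num_walks):
        walk, node = [start], start
        for _ in range(walk_length - 1):
            neighbours = adj.get(node)
            if not neighbours:  # dead end: stop this walk early
                break
            node = random.choice(neighbours)
            walk.append(node)
        walks.append(walk)
    return walks

# e.g. sample_context({0: [1, 2], 1: [0], 2: [0, 1]}, start=0)
```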
- Joint Entity and Relation Extraction with Span Pruning and Hypergraph Neural Networks [58.43972540643903]
We propose a HyperGraph neural network for ERE (HGNN), which is built upon PL-marker (a state-of-the-art marker-based pipeline model).
To alleviate error propagation, we use a high-recall pruner mechanism to transfer the burden of entity identification and labeling from the NER module to the joint module of our model.
Experiments on three widely used benchmarks for the ERE task show significant improvements over the previous state-of-the-art PL-marker.
arXiv Detail & Related papers (2023-10-26T08:36:39Z)
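The "high-recall pruner" above presumably keeps a generous set of candidate spans so that nearly all true entities survive into the joint module, which then does the final labeling. A hedged sketch of such a pruner, assuming candidate scores come from an upstream model:

```python
def prune_spans(spans, scores, keep_ratio=0.5):
    """Keep the top-scoring candidate entity spans.

    spans:  list of (start, end) token offsets.
    scores: one score per span (higher = more entity-like).
    keep_ratio is deliberately generous: the goal is high recall,
    since the downstream joint module, not this pruner, assigns
    the final entity and relation labels.
    """
    k = max(1, int(len(spans) * keep_ratio))
    ranked = sorted(zip(spans, scores), key=lambda p: p[1], reverse=True)
    return [span for span, _ in ranked[:k]]
```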
- Graph Attention with Hierarchies for Multi-hop Question Answering [19.398300844233837]
We present two extensions to the SOTA Graph Neural Network (GNN) based model for HotpotQA.
Experiments on HotpotQA demonstrate the efficiency of the proposed modifications.
arXiv Detail & Related papers (2023-01-27T15:49:50Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
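For intuition on the complexity claim: fully connected message passing over N nodes costs O(N^2) per round, while aggregating from k sampled message sources per node costs O(Nk). A toy numpy sketch with uniform sampling (the paper's sampling is learned and content-dependent, so this is illustrative only):

```python
import numpy as np

def sparse_message_passing(x, k=8, seed=0):
    """One round of message passing with k sampled sources per node.

    x: (N, d) node features. Cost is O(N * k * d) instead of the
    O(N^2 * d) of a fully connected graph.
    """
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    out = np.empty_like(x)
    for i in range(n):
        idx = rng.choice(n, size=min(k, n), replace=False)  # sampled message sources
        w = x[idx] @ x[i]                                   # dot-product affinities
        w = np.exp(w - w.max()); w /= w.sum()               # softmax weights
        out[i] = w @ x[idx]                                 # weighted aggregation
    return out
```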
- Question-Answer Sentence Graph for Joint Modeling Answer Selection [122.29142965960138]
We train and integrate state-of-the-art (SOTA) models for computing scores between question-question, question-answer, and answer-answer pairs.
Online inference is then performed to solve the AS2 task on unseen queries.
arXiv Detail & Related papers (2022-02-16T05:59:53Z)
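One plausible reading of the construction above: every question and answer sentence becomes a graph node, and edges carry pairwise scores. A hedged sketch with a trivial stand-in scorer; a real system would call the trained SOTA pair models instead.

```python
import itertools

def pair_score(a, b):
    # Stand-in for a trained sentence-pair model (word overlap only).
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)

def build_qa_graph(questions, answers):
    """Edge dict {(i, j): score} over all question/answer sentences."""
    nodes = questions + answers
    return {(i, j): pair_score(nodes[i], nodes[j])
            for i, j in itertools.combinations(range(len(nodes)), 2)}
```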
- Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering [71.6781118080461]
We propose a Graph Matching Attention (GMA) network for Visual Question Answering (VQA) task.
It not only builds a graph for the image, but also constructs a graph for the question in terms of both syntactic and embedding information.
Next, we explore the intra-modality relationships by a dual-stage graph encoder and then present a bilateral cross-modality graph matching attention to infer the relationships between the image and the question.
Experiments demonstrate that our network achieves state-of-the-art performance on the GQA dataset and the VQA 2.0 dataset.
arXiv Detail & Related papers (2021-12-14T10:01:26Z)
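A minimal sketch of the "bilateral" part of GMA: question-graph nodes attend over image-graph nodes and vice versa, here via stock PyTorch multi-head attention. The graph construction and matching steps of the actual network are omitted, and all names are invented.

```python
import torch
import torch.nn as nn

class BilateralCrossAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.q_to_v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.v_to_q = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, q_nodes, v_nodes):
        # q_nodes: (B, Nq, dim) question-graph nodes
        # v_nodes: (B, Nv, dim) image-graph nodes
        q_ctx, _ = self.q_to_v(q_nodes, v_nodes, v_nodes)  # question attends to image
        v_ctx, _ = self.v_to_q(v_nodes, q_nodes, q_nodes)  # image attends to question
        return q_ctx, v_ctx
```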
- Context Transformer with Stacked Pointer Networks for Conversational Question Answering over Knowledge Graphs [4.6574525260746285]
We propose a novel framework named CARTON, which performs multi-task semantic parsing for handling the problem of conversational question answering over a large-scale knowledge graph.
Our framework consists of a stack of pointer networks as an extension of a context transformer model for parsing the input question and the dialog history.
We evaluate CARTON on a standard dataset for complex sequential question answering on which CARTON outperforms all baselines.
arXiv Detail & Related papers (2021-03-13T18:16:43Z)
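CARTON's stacked pointer networks output distributions over input positions rather than over a fixed vocabulary, which is how entities, predicates, and types mentioned in the question and dialog history can be copied into logical forms. A bare-bones additive-attention pointer, with hypothetical names and dimensions:

```python
import torch
import torch.nn as nn

class Pointer(nn.Module):
    """Scores each encoder position against a decoder state; the
    'output' is a softmax over input positions."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)
        self.v = nn.Linear(dim, 1, bias=False)

    def forward(self, dec_state, enc_states):
        # dec_state: (B, dim); enc_states: (B, T, dim)
        t = enc_states.size(1)
        query = dec_state.unsqueeze(1).expand(-1, t, -1)           # (B, T, dim)
        scores = self.v(torch.tanh(self.proj(
            torch.cat([query, enc_states], dim=-1)))).squeeze(-1)  # (B, T)
        return torch.softmax(scores, dim=-1)                       # pointer distribution
```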
- Iterative Context-Aware Graph Inference for Visual Dialog [126.016187323249]
We propose a novel Context-Aware Graph (CAG) neural network.
Each node in the graph corresponds to a joint semantic feature, including both object-based (visual) and history-related (textual) context representations.
arXiv Detail & Related papers (2020-04-05T13:09:37Z)
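The node construction described above, fusing each object's visual feature with a history-derived textual feature into a joint representation, might look like the following hedged sketch; the iterative graph inference itself is omitted and all names are invented.

```python
import torch
import torch.nn as nn

class JointNode(nn.Module):
    """Fuse per-object visual features with a dialog-history feature
    into joint semantic node representations."""

    def __init__(self, vis_dim, txt_dim, node_dim):
        super().__init__()
        self.fuse = nn.Linear(vis_dim + txt_dim, node_dim)

    def forward(self, vis, hist):
        # vis: (N, vis_dim) object features; hist: (txt_dim,) history context
        hist = hist.unsqueeze(0).expand(vis.size(0), -1)  # share context across objects
        return torch.relu(self.fuse(torch.cat([vis, hist], dim=-1)))
```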
This list is automatically generated from the titles and abstracts of the papers on this site.