Fine-tuning Multi-hop Question Answering with Hierarchical Graph Network
- URL: http://arxiv.org/abs/2004.13821v3
- Date: Sat, 27 Aug 2022 10:58:13 GMT
- Title: Fine-tuning Multi-hop Question Answering with Hierarchical Graph Network
- Authors: Guanming Xiong
- Abstract summary: We present a two-stage model for multi-hop question answering.
The first stage is a hierarchical graph network, which is used to reason over multi-hop questions.
The second stage is a language model fine-tuning task.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a two-stage model for multi-hop question answering.
The first stage is a hierarchical graph network, which is used to reason over
multi-hop questions and is capable of capturing different levels of granularity
using the natural structure of documents (i.e., paragraphs, questions, sentences,
and entities). The reasoning process is cast as a node classification task (i.e.,
over paragraph nodes and sentence nodes). The second stage is a language model
fine-tuning task. In short, stage one uses a graph neural network to select and
concatenate supporting sentences into one paragraph, and stage two finds the answer
span in the language model fine-tuning paradigm.
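Below is a minimal, hypothetical sketch of the two-stage pipeline described in the abstract, written in PyTorch. The module names, dimensions, graph construction, and selection threshold are illustrative assumptions rather than the paper's actual implementation: stage one scores sentence nodes of a hierarchical graph, the selected supporting sentences are concatenated into a single paragraph, and stage two would hand that paragraph to a fine-tuned reader for answer-span extraction.

```python
# Illustrative sketch only; all names, dimensions, and thresholds are assumptions.
import torch
import torch.nn as nn


class SupportSelector(nn.Module):
    """Stage one: score sentence nodes of a hierarchical graph (binary node classification)."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.message = nn.Linear(dim, dim)   # one simplified message-passing step
        self.classifier = nn.Linear(dim, 1)  # supporting-sentence probability

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, dim); adj: (num_nodes, num_nodes) adjacency over
        # question / paragraph / sentence / entity nodes.
        h = torch.relu(self.message(adj @ node_feats)) + node_feats
        return torch.sigmoid(self.classifier(h)).squeeze(-1)  # per-node score


def build_stage_two_input(question: str, sentences: list[str], scores: torch.Tensor,
                          threshold: float = 0.5) -> str:
    """Concatenate selected supporting sentences into one paragraph for span extraction."""
    selected = [s for s, p in zip(sentences, scores.tolist()) if p >= threshold]
    # Stage two would feed "question [SEP] context" to a fine-tuned reader
    # (e.g. a BERT-style model with start/end span heads).
    return question + " [SEP] " + " ".join(selected)


if __name__ == "__main__":
    sents = ["Paris is the capital of France.", "The Seine flows through Paris."]
    feats, adj = torch.randn(3, 256), torch.eye(3)  # toy graph: 1 question + 2 sentence nodes
    scores = SupportSelector()(feats, adj)[1:]      # scores for the two sentence nodes
    print(build_stage_two_input("Which river flows through France's capital?", sents, scores))
```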
Related papers
- Single Sequence Prediction over Reasoning Graphs for Multi-hop QA [8.442412179333205]
We propose a single-sequence prediction method over a local reasoning graph.
We use a graph neural network to encode this graph structure and fuse the resulting representations into the entity representations of the model.
Our experiments show significant improvements in answer exact-match/F1 scores and faithfulness of grounding in the reasoning path.
arXiv Detail & Related papers (2023-07-01T13:15:09Z) - Conversational Semantic Parsing using Dynamic Context Graphs [68.72121830563906]
We consider the task of conversational semantic parsing over general purpose knowledge graphs (KGs) with millions of entities, and thousands of relation-types.
We focus on models which are capable of interactively mapping user utterances into executable logical forms.
arXiv Detail & Related papers (2023-05-04T16:04:41Z) - Analyzing Vietnamese Legal Questions Using Deep Neural Networks with Biaffine Classifiers [3.116035935327534]
We propose using deep neural networks to extract important information from Vietnamese legal questions.
Given a legal question in natural language, the goal is to extract all the segments that contain the needed information to answer the question.
arXiv Detail & Related papers (2023-04-27T18:19:24Z) - Graph Attention with Hierarchies for Multi-hop Question Answering [19.398300844233837]
We present two extensions to the SOTA Graph Neural Network (GNN) based model for HotpotQA.
Experiments on HotpotQA demonstrate the efficiency of the proposed modifications.
arXiv Detail & Related papers (2023-01-27T15:49:50Z) - UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z) - Coarse-grained decomposition and fine-grained interaction for multi-hop question answering [5.88731657602706]
Many complex queries require multi-hop reasoning.
Bi-DAF generally captures only the surface semantics of words in complex questions.
We propose a new model architecture for multi-hop question answering.
arXiv Detail & Related papers (2021-01-15T06:56:34Z) - Graph-based Multi-hop Reasoning for Long Text Generation [66.64743847850666]
MRG consists of two parts, a graph-based multi-hop reasoning module and a path-aware sentence realization module.
Unlike previous black-box models, MRG explicitly infers the skeleton path, which provides explanatory views to understand how the proposed model works.
arXiv Detail & Related papers (2020-09-28T12:47:59Z) - Tag and Correct: Question aware Open Information Extraction with Two-stage Decoding [73.24783466100686]
Question-aware Open IE takes a question and a passage as inputs and outputs an answer which contains a subject, a predicate, and one or more arguments.
Compared to a span answer, the semi-structured answer has two advantages: it is more readable and more falsifiable.
One approach is an extractive method, which extracts candidate answers from the passage with an Open IE model and ranks them by matching against the question.
The other is a generative method, which uses a sequence-to-sequence model to generate answers directly.
arXiv Detail & Related papers (2020-09-16T00:58:13Z) - Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension [127.3341842928421]
Natural Questions is a new challenging machine reading comprehension benchmark.
It has two-grained answers: a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer).
Existing methods treat these two sub-tasks individually during training while ignoring their dependencies.
We present a novel multi-grained machine reading comprehension framework that focuses on modeling documents according to their hierarchical nature.
arXiv Detail & Related papers (2020-05-12T14:20:09Z) - Multi-Modal Graph Neural Network for Joint Reasoning on Vision and Scene Text [93.08109196909763]
We propose a novel VQA approach, Multi-Modal Graph Neural Network (MM-GNN).
It first represents an image as a graph consisting of three sub-graphs, depicting visual, semantic, and numeric modalities respectively.
It then introduces three aggregators which guide the message passing from one graph to another to utilize the contexts in various modalities.
arXiv Detail & Related papers (2020-03-31T05:56:59Z)
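As a purely illustrative companion to the MM-GNN summary above, here is a small PyTorch sketch of one question-guided aggregator that passes messages from one modality sub-graph (e.g., semantic nodes) to another (e.g., visual nodes); the attention form, dimensions, and names are assumptions, not the paper's actual design.

```python
# Illustrative sketch of cross-graph aggregation; not the paper's implementation.
import torch
import torch.nn as nn


class CrossGraphAggregator(nn.Module):
    """Passes messages from a source sub-graph to a target sub-graph
    via question-conditioned attention."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)

    def forward(self, target: torch.Tensor, source: torch.Tensor,
                question: torch.Tensor) -> torch.Tensor:
        # target: (n_t, dim), source: (n_s, dim), question: (dim,)
        q = self.query(target + question)                     # question-conditioned queries
        attn = torch.softmax(q @ self.key(source).T, dim=-1)  # (n_t, n_s) attention weights
        return target + attn @ source                         # updated target-node features


if __name__ == "__main__":
    visual, semantic = torch.randn(5, 128), torch.randn(7, 128)  # toy node features
    question = torch.randn(128)
    visual = CrossGraphAggregator()(visual, semantic, question)  # semantic -> visual messages
    print(visual.shape)  # torch.Size([5, 128])
```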