Semantic Sentence Composition Reasoning for Multi-Hop Question Answering
- URL: http://arxiv.org/abs/2203.00160v1
- Date: Tue, 1 Mar 2022 00:35:51 GMT
- Title: Semantic Sentence Composition Reasoning for Multi-Hop Question Answering
- Authors: Qianglong Chen
- Abstract summary: We present a semantic sentence composition reasoning approach for a multi-hop question answering task.
With the combination of factual sentences and multi-stage semantic retrieval, our approach can provide more comprehensive contextual information for model training and reasoning.
Experimental results demonstrate that our model is able to incorporate existing pre-trained language models and outperforms the existing SOTA method on the QASC task by about 9%.
- Score: 1.773120658816994
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Due to insufficient data, existing multi-hop open-domain question answering
systems need to effectively retrieve the relevant supporting facts for each question.
To alleviate the challenges of semantic factual sentence retrieval and multi-hop context
expansion, we present a semantic sentence composition reasoning approach for multi-hop
question answering, which consists of two key modules: a multi-stage semantic matching
module (MSSM) and a factual sentence composition module (FSC). By combining factual
sentences with multi-stage semantic retrieval, our approach provides more comprehensive
contextual information for model training and reasoning. Experimental results demonstrate
that our model can incorporate existing pre-trained language models and outperforms the
existing SOTA method on the QASC task by about 9%.
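The abstract describes the pipeline only at a high level, so the following Python sketch merely illustrates the general idea of multi-stage semantic retrieval with query expansion followed by factual sentence composition; the toy corpus, the lexical overlap scorer, and all function names are illustrative assumptions, not the paper's actual MSSM and FSC modules.

```python
# Minimal sketch of two-stage retrieval + sentence composition for QASC-style
# multi-hop QA. Everything here (corpus, scorer, function names) is an
# illustrative assumption, not the paper's MSSM/FSC implementation.
from collections import Counter
from typing import List, Tuple


def score(query: str, sentence: str) -> float:
    """Toy lexical relevance: token overlap normalized by query length."""
    q = Counter(query.lower().split())
    s = Counter(sentence.lower().split())
    return sum((q & s).values()) / max(1, sum(q.values()))


def retrieve(query: str, corpus: List[str], k: int = 2) -> List[str]:
    """One matching stage: return the top-k sentences for the query."""
    return sorted(corpus, key=lambda sent: score(query, sent), reverse=True)[:k]


def compose_context(question: str, corpus: List[str]) -> Tuple[str, List[str]]:
    """Two-stage retrieval, then compose the retrieved facts into one context."""
    first_hop = retrieve(question, corpus)
    second_hop = []
    for fact in first_hop:
        # Query expansion: the question plus a first-hop fact can retrieve
        # bridging facts that the question alone would miss.
        second_hop.extend(retrieve(question + " " + fact, corpus))
    facts = list(dict.fromkeys(first_hop + second_hop))  # dedupe, keep order
    return " ".join(facts), facts


if __name__ == "__main__":
    corpus = [
        "Differential heating of air produces wind.",
        "Wind is used for producing electricity by wind turbines.",
        "Rain falls when clouds become saturated.",
    ]
    question = "What can differential heating of air be harnessed for?"
    context, facts = compose_context(question, corpus)
    print("Composed context:", context)
```

In the paper's setting, the lexical scorer would presumably be replaced by a learned semantic matcher, and the composed context would be fed to a pre-trained language model for training and answer reasoning, as the abstract indicates.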
Related papers
- Asking Multimodal Clarifying Questions in Mixed-Initiative Conversational Search [89.1772985740272]
In mixed-initiative conversational search systems, clarifying questions are used to help users who struggle to express their intentions in a single query.
We hypothesize that in scenarios where multimodal information is pertinent, the clarification process can be improved by using non-textual information.
We collect a dataset named Melon that contains over 4k multimodal clarifying questions, enriched with over 14k images.
Several analyses are conducted to understand the importance of multimodal contents during the query clarification phase.
arXiv Detail & Related papers (2024-02-12T16:04:01Z)
- Teaching Smaller Language Models To Generalise To Unseen Compositional Questions [6.9076450524134145]
We propose multitask pretraining on a combination of up to 93 tasks designed to instill diverse reasoning abilities.
We show that performance can be significantly improved by adding retrieval-augmented training datasets.
arXiv Detail & Related papers (2023-08-02T05:00:12Z)
- Dual Semantic Knowledge Composed Multimodal Dialog Systems [114.52730430047589]
We propose a novel multimodal task-oriented dialog system named MDS-S2.
It acquires the context-related attribute and relation knowledge from the knowledge base.
We also devise a set of latent query variables to distill the semantic information from the composed response representation.
arXiv Detail & Related papers (2023-05-17T06:33:26Z)
- UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for the multi-hop KGQA task that unifies retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z)
- MoCA: Incorporating Multi-stage Domain Pretraining and Cross-guided Multimodal Attention for Textbook Question Answering [7.367945534481411]
We propose a novel model named MoCA, which incorporates multi-stage domain pretraining and multimodal cross attention for the Textbook Question Answering task.
The experimental results show the superiority of our model, which outperforms the state-of-the-art methods by 2.21% and 2.43% on the validation and test splits, respectively.
arXiv Detail & Related papers (2021-12-06T07:58:53Z)
- Multi-hop Inference for Question-driven Summarization [39.08269647808958]
We propose a novel question-driven abstractive summarization method, Multi-hop Selective Generator (MSG).
MSG incorporates multi-hop reasoning into question-driven summarization while providing justifications for the generated summaries.
Experimental results show that the proposed method consistently outperforms state-of-the-art methods on two non-factoid QA datasets.
arXiv Detail & Related papers (2020-10-08T02:36:39Z)
- SPLAT: Speech-Language Joint Pre-Training for Spoken Language Understanding [61.02342238771685]
Spoken language understanding requires a model to analyze an input acoustic signal to understand its linguistic content and make predictions.
Various pre-training methods have been proposed to learn rich representations from large-scale unannotated speech and text.
We propose a novel semi-supervised learning framework, SPLAT, to jointly pre-train the speech and language modules.
arXiv Detail & Related papers (2020-10-05T19:29:49Z)
- Asking Complex Questions with Multi-hop Answer-focused Reasoning [16.01240703148773]
We propose a new task called multi-hop question generation that asks complex and semantically relevant questions.
To solve the problem, we propose multi-hop answer-focused reasoning on the grounded answer-centric entity graph.
arXiv Detail & Related papers (2020-09-16T00:30:49Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- Text Modular Networks: Learning to Decompose Tasks in the Language of Existing Models [61.480085460269514]
We propose a framework for building interpretable systems that learn to solve complex tasks by decomposing them into simpler ones solvable by existing models.
We use this framework to build ModularQA, a system that can answer multi-hop reasoning questions by decomposing them into sub-questions answerable by a neural factoid single-span QA model and a symbolic calculator.
arXiv Detail & Related papers (2020-09-01T23:45:42Z)
- MLR: A Two-stage Conversational Query Rewriting Model with Multi-task Learning [16.88648782206587]
We propose MLR, a conversational query rewriting model built as a Multi-task model on sequence Labeling and query Rewriting.
MLR reformulates the multi-turn conversational queries into a single turn query, which conveys the true intention of users concisely.
To train our model, we construct a new Chinese query rewriting dataset and conduct experiments on it.
arXiv Detail & Related papers (2020-04-13T08:04:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.