Keeping the Questions Conversational: Using Structured Representations
to Resolve Dependency in Conversational Question Answering
- URL: http://arxiv.org/abs/2304.07125v1
- Date: Fri, 14 Apr 2023 13:42:32 GMT
- Title: Keeping the Questions Conversational: Using Structured Representations
to Resolve Dependency in Conversational Question Answering
- Authors: Munazza Zaib and Quan Z. Sheng and Wei Emma Zhang and Adnan Mahmood
- Abstract summary: We propose a novel framework, CONVSR (CONVQA using Structured Representations) for capturing and generating intermediate representations as conversational cues.
We test our model on the QuAC and CANARD datasets and show through experimental results that the proposed framework achieves a higher F1 score than the standard question rewriting model.
- Score: 26.997542897342164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Having an intelligent dialogue agent that can engage in conversational
question answering (ConvQA) is no longer limited to science fiction and has, in
fact, become a reality. These intelligent agents are required to understand and
correctly interpret the sequential turns provided as the context of the given
question. However, these sequential questions are sometimes left implicit and
thus require the resolution of natural language phenomena such as anaphora and
ellipsis. The task of question rewriting has the potential to address the
challenge of resolving dependencies among the contextual turns by transforming
them into intent-explicit questions. Nonetheless, rewriting implicit questions
comes with its own drawbacks, such as producing verbose questions and taking
the conversational aspect out of the scenario by generating self-contained
questions. In this paper, we propose a novel framework, CONVSR (CONVQA using
Structured Representations), for capturing and generating intermediate
representations as conversational cues that enhance the capability of the QA
model to better interpret incomplete questions. We also discuss how the
strengths of this task could be leveraged to design more engaging and eloquent
conversational agents. We test our model on the QuAC and CANARD datasets and
show through experimental results that the proposed framework achieves a higher
F1 score than the standard question rewriting model.
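The abstract describes CONVSR only at a high level: intermediate structured representations are captured from earlier turns and supplied as conversational cues so the QA model can interpret an incomplete question without first rewriting it into a verbose, self-contained one. The sketch below is a toy illustration of that general idea under assumed details, not the paper's actual method; the frame fields (topic, focus entities, previous answer), the cue serialization format, and every name in the code are invented for this example.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DialogueFrame:
    """Hypothetical structured representation of the conversation so far."""
    topic: Optional[str] = None                               # what the dialogue is about
    focus_entities: List[str] = field(default_factory=list)   # salient entities
    last_answer: Optional[str] = None                         # answer to the previous turn

    def update(self, answer: str, entities: List[str]) -> None:
        # Naive cue extraction; a real system would rely on NER / coreference.
        if entities:
            self.focus_entities = entities
            self.topic = self.topic or entities[0]
        self.last_answer = answer

    def as_cues(self) -> str:
        # Serialize the frame so it can accompany the original question.
        return (f"[topic: {self.topic}] "
                f"[entities: {', '.join(self.focus_entities)}] "
                f"[previous answer: {self.last_answer}]")


def augment_question(question: str, frame: DialogueFrame) -> str:
    """Keep the question conversational; pass the cues as a structured prefix."""
    return f"{frame.as_cues()} {question}"


if __name__ == "__main__":
    frame = DialogueFrame()
    # Turn 1: "Who wrote Pride and Prejudice?" -> "Jane Austen"
    frame.update(answer="Jane Austen",
                 entities=["Pride and Prejudice", "Jane Austen"])
    # Turn 2 leaves its subject implicit (anaphora: "she").
    print(augment_question("When was she born?", frame))
```

In a real system the frame would be produced by a learned model and consumed jointly with the question by the QA reader; the point of the sketch is only that passing cues alongside the original turn keeps the question conversational and avoids the verbosity that the abstract attributes to full question rewriting.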
Related papers
- Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: FA-model, which simultaneously selects key phrases and generates full answers, and Q-model which takes the generated full answer as an additional input to generate questions.
arXiv Detail & Related papers (2023-10-20T13:57:01Z)
- HPE: Answering Complex Questions over Text by Hybrid Question Parsing and Execution [92.69684305578957]
We propose a framework of question parsing and execution on textual QA.
The proposed framework can be viewed as a top-down question parsing followed by a bottom-up answer backtracking.
Our experiments on MuSiQue, 2WikiQA, HotpotQA, and NQ show that the proposed parsing and hybrid execution framework outperforms existing approaches in supervised, few-shot, and zero-shot settings.
arXiv Detail & Related papers (2023-05-12T22:37:06Z)
- Modeling What-to-ask and How-to-ask for Answer-unaware Conversational Question Generation [30.086071993793823]
What-to-ask and how-to-ask are the two main challenges in the answer-unaware setting.
We present SG-CQG, a two-stage CQG framework.
arXiv Detail & Related papers (2023-05-04T18:06:48Z)
- Discourse Analysis via Questions and Answers: Parsing Dependency Structures of Questions Under Discussion [57.43781399856913]
This work adopts the linguistic framework of Questions Under Discussion (QUD) for discourse analysis.
We characterize relationships between sentences as free-form questions, in contrast to exhaustive fine-grained questions.
We develop a first-of-its-kind QUD parser that derives a dependency structure of questions over full documents.
arXiv Detail & Related papers (2022-10-12T03:53:12Z)
- Conversational QA Dataset Generation with Answer Revision [2.5838973036257458]
We introduce a novel framework that extracts question-worthy phrases from a passage and then generates corresponding questions considering previous conversations.
Our framework revises the extracted answers after generating questions so that answers exactly match paired questions.
arXiv Detail & Related papers (2022-09-23T04:05:38Z)
- Multifaceted Improvements for Conversational Open-Domain Question Answering [54.913313912927045]
We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA).
First, the proposed KL-divergence based regularization leads to better question understanding for retrieval and answer reading.
Second, the added post-ranker module pushes more relevant passages to the top placements to be selected for the reader under a two-aspect constraint.
Third, the well-designed curriculum learning strategy effectively narrows the gap between the golden-passage settings of training and inference, and encourages the reader to find the true answer without golden-passage assistance.
arXiv Detail & Related papers (2022-04-01T07:54:27Z)
- Discourse Comprehension: A Question Answering Framework to Represent Sentence Connections [35.005593397252746]
A key challenge in building and evaluating models for discourse comprehension is the lack of annotated data.
This paper presents a novel paradigm that enables scalable data collection targeting the comprehension of news documents.
The resulting corpus, DCQA, consists of 22,430 question-answer pairs across 607 English documents.
arXiv Detail & Related papers (2021-11-01T04:50:26Z)
- Asking It All: Generating Contextualized Questions for any Semantic Role [56.724302729493594]
We introduce the task of role question generation, in which the input is a predicate mention and a passage.
We develop a two-stage model for this task, which first produces a context-independent question prototype for each role.
Our evaluation demonstrates that we generate diverse and well-formed questions with broad coverage of predicates and roles.
arXiv Detail & Related papers (2021-09-10T12:31:14Z)
- Unified Questioner Transformer for Descriptive Question Generation in Goal-Oriented Visual Dialogue [0.0]
Building an interactive artificial intelligence that can ask questions about the real world is one of the biggest challenges for vision and language problems.
We propose a novel Questioner architecture, called Unified Questioner Transformer (UniQer).
We build a goal-oriented visual dialogue task called CLEVR Ask. It synthesizes complex scenes that require the Questioner to generate descriptive questions.
arXiv Detail & Related papers (2021-06-29T16:36:34Z)
- Learn to Resolve Conversational Dependency: A Consistency Training Framework for Conversational Question Answering [14.382513103948897]
We propose ExCorD (Explicit guidance on how to resolve Conversational Dependency) to enhance the abilities of QA models in comprehending conversational context.
In our experiments, we demonstrate that ExCorD significantly improves the QA models' performance by up to 1.2 F1 on QuAC and 5.2 F1 on CANARD (the word-level F1 metric used by these benchmarks is sketched after this list).
arXiv Detail & Related papers (2021-06-22T07:16:45Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
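Several entries above, like the CONVSR abstract itself and the ExCorD paper, report gains in F1 on QuAC and CANARD. For reference, the following is a minimal re-implementation of the standard word-overlap F1 used by SQuAD-style and conversational QA evaluations; the official QuAC scorer additionally takes a maximum over multiple reference answers and handles unanswerable questions, which this sketch omits.

```python
import re
import string
from collections import Counter
from typing import List


def normalize(text: str) -> List[str]:
    """Lowercase, strip punctuation and articles, and split on whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return text.split()


def f1_score(prediction: str, gold: str) -> float:
    """Word-overlap F1 between a predicted answer span and a gold answer."""
    pred_tokens, gold_tokens = normalize(prediction), normalize(gold)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    print(f1_score("Jane Austen wrote it", "Jane Austen"))  # ~0.67
```

In the example call, precision is 0.5 over the four predicted tokens and recall is 1.0 over the two gold tokens, giving an F1 of roughly 0.67.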