Modeling What-to-ask and How-to-ask for Answer-unaware Conversational
Question Generation
- URL: http://arxiv.org/abs/2305.03088v1
- Date: Thu, 4 May 2023 18:06:48 GMT
- Title: Modeling What-to-ask and How-to-ask for Answer-unaware Conversational
Question Generation
- Authors: Xuan Long Do, Bowei Zou, Shafiq Joty, Anh Tai Tran, Liangming Pan,
Nancy F. Chen, Ai Ti Aw
- Abstract summary: What-to-ask and how-to-ask are the two main challenges in the answer-unaware setting.
We present SG-CQG, a two-stage CQG framework.
- Score: 30.086071993793823
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conversational Question Generation (CQG) is a critical task for machines to
assist humans in fulfilling their information needs through conversations. The
task is generally cast into two different settings: answer-aware and
answer-unaware. While the former facilitates the models by exposing the
expected answer, the latter is more realistic and has received growing attention
recently. What-to-ask and how-to-ask are the two main challenges in the
answer-unaware setting. To address the first challenge, existing methods mainly
select sequential sentences in the context as rationales. We argue that
conversations generated by such naive heuristics may not be natural enough,
since in reality interlocutors often discuss relevant content that is not
necessarily sequential in the context. Additionally, previous methods decide
the type of question to be generated (boolean/span-based) implicitly. Modeling
the question type explicitly is crucial because the answer, which would hint
the model to generate a boolean or span-based question, is unavailable. To this
end, we present SG-CQG, a two-stage CQG framework. For the what-to-ask stage, a
sentence is selected as the rationale from a semantic graph that we construct,
and the answer span is extracted from it. For the how-to-ask stage, a classifier
determines the target answer type of the question via two explicit control
signals before generating and filtering. In addition, we propose Conv-Distinct,
a novel evaluation metric for CQG, to evaluate the diversity of the generated
conversation from a context. Compared with the existing answer-unaware CQG
models, the proposed SG-CQG achieves state-of-the-art performance.
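The two-stage pipeline described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the function names, the yes/no heuristic, and the control-signal format are all assumptions made for exposition.

```python
# Hypothetical sketch of SG-CQG's two stages (what-to-ask, then how-to-ask).

def select_rationale(semantic_graph, visited):
    """What-to-ask: pick the next unvisited sentence node from the semantic
    graph built over the context (graph construction itself is omitted)."""
    for sentence in semantic_graph:
        if sentence not in visited:
            return sentence
    return None

def classify_answer_type(answer_span):
    """How-to-ask: a stand-in classifier deciding boolean vs. span-based;
    the paper uses a learned classifier, not this heuristic."""
    return "boolean" if answer_span.strip().lower() in {"yes", "no"} else "span"

def generate_question(rationale, answer_span):
    """Prepend an explicit control signal for the target answer type, then
    generate (a placeholder string here stands in for a trained generator)."""
    control = "<boolean>" if classify_answer_type(answer_span) == "boolean" else "<span>"
    return f"{control} question about: {rationale}"
```

The explicit control signal is the key difference from prior answer-unaware methods, which leave the question type implicit in the generator.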
Related papers
- Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation [64.64849950642619]
We develop an evaluation framework inspired by formal semantics for evaluating text-to-image models.
We show that Davidsonian Scene Graph (DSG) produces atomic and unique questions organized in dependency graphs.
We also present DSG-1k, an open-sourced evaluation benchmark that includes 1,060 prompts.
arXiv Detail & Related papers (2023-10-27T16:20:10Z)
- Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: FA-model, which simultaneously selects key phrases and generates full answers, and Q-model which takes the generated full answer as an additional input to generate questions.
arXiv Detail & Related papers (2023-10-20T13:57:01Z)
- Open-Set Knowledge-Based Visual Question Answering with Inference Paths [79.55742631375063]
The purpose of Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct answer to the question with the aid of external knowledge bases.
We propose a new retriever-ranker paradigm of KB-VQA, Graph pATH rankER (GATHER for brevity).
Specifically, it contains graph constructing, pruning, and path-level ranking, which not only retrieves accurate answers but also provides inference paths that explain the reasoning process.
arXiv Detail & Related papers (2023-10-12T09:12:50Z)
- Event Extraction as Question Generation and Answering [72.04433206754489]
Recent work on Event Extraction has reframed the task as Question Answering (QA).
We propose QGA-EE, which enables a Question Generation (QG) model to generate questions that incorporate rich contextual information instead of using fixed templates.
Experiments show that QGA-EE outperforms all prior single-task-based models on the ACE05 English dataset.
arXiv Detail & Related papers (2023-07-10T01:46:15Z)
- HPE: Answering Complex Questions over Text by Hybrid Question Parsing and Execution [92.69684305578957]
We propose a framework of question parsing and execution on textual QA.
The proposed framework can be viewed as a top-down question parsing followed by a bottom-up answer backtracking.
Our experiments on MuSiQue, 2WikiQA, HotpotQA, and NQ show that the proposed parsing and hybrid execution framework outperforms existing approaches in supervised, few-shot, and zero-shot settings.
arXiv Detail & Related papers (2023-05-12T22:37:06Z)
- Keeping the Questions Conversational: Using Structured Representations to Resolve Dependency in Conversational Question Answering [26.997542897342164]
We propose a novel framework, CONVSR (CONVQA using Structured Representations) for capturing and generating intermediate representations as conversational cues.
We test our model on the QuAC and CANARD datasets and illustrate by experimental results that our proposed framework achieves a better F1 score than the standard question rewriting model.
arXiv Detail & Related papers (2023-04-14T13:42:32Z)
- CoHS-CQG: Context and History Selection for Conversational Question Generation [31.87967788600221]
We propose a two-stage CQG framework, which adopts a CoHS module to shorten the context and history of the input.
Our model achieves state-of-the-art performances on CoQA in both the answer-aware and answer-unaware settings.
arXiv Detail & Related papers (2022-09-14T13:58:52Z)
- Co-VQA: Answering by Interactive Sub Question Sequence [18.476819557695087]
This paper proposes a conversation-based VQA framework, which consists of three components: Questioner, Oracle, and Answerer.
To perform supervised learning for each model, we introduce a well-designed method to build a SQS for each question on VQA 2.0 and VQA-CP v2 datasets.
arXiv Detail & Related papers (2022-04-02T15:09:16Z)
- Improving Unsupervised Question Answering via Summarization-Informed Question Generation [47.96911338198302]
Question Generation (QG) is the task of generating a plausible question for a <passage, answer> pair.
We make use of freely available news summary data, transforming declarative sentences into appropriate questions using dependency parsing, named entity recognition and semantic role labeling.
The resulting questions are then combined with the original news articles to train an end-to-end neural QG model.
arXiv Detail & Related papers (2021-09-16T13:08:43Z)
- Fluent Response Generation for Conversational Question Answering [15.826109118064716]
We propose a method for situating responses within a SEQ2SEQ NLG approach to generate fluent grammatical answer responses.
We use data augmentation to generate training data for an end-to-end system.
arXiv Detail & Related papers (2020-05-21T04:57:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed summaries (including all information) and is not responsible for any consequences arising from their use.