ConfNet2Seq: Full Length Answer Generation from Spoken Questions
- URL: http://arxiv.org/abs/2006.05163v2
- Date: Thu, 11 Jun 2020 08:39:41 GMT
- Title: ConfNet2Seq: Full Length Answer Generation from Spoken Questions
- Authors: Vaishali Pal, Manish Shrivastava and Laurent Besacier
- Abstract summary: We propose a novel system to generate full length natural language answers from spoken questions and factoid answers.
The spoken sequence is compactly represented as a confusion network extracted from a pre-trained Automatic Speech Recognizer.
We release a large-scale dataset of 259,788 samples of spoken questions, their factoid answers and corresponding full-length textual answers.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conversational and task-oriented dialogue systems aim to interact with the
user using natural responses through multi-modal interfaces, such as text or
speech. These desired responses are in the form of full-length natural answers
generated over facts retrieved from a knowledge source. While the task of
generating natural answers to questions from an answer span has been widely
studied, there has been little research on natural sentence generation over
spoken content. We propose a novel system to generate full length natural
language answers from spoken questions and factoid answers. The spoken sequence
is compactly represented as a confusion network extracted from a pre-trained
Automatic Speech Recognizer. To the best of our knowledge, this is the first
attempt at generating full-length natural answers from a graph input (a
confusion network). We release a large-scale dataset of 259,788 samples of spoken
questions, their factoid answers and corresponding full-length textual answers.
With our proposed approach, we achieve performance comparable to using the best
ASR hypothesis.
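The abstract's central representation, a confusion network (sometimes called a "sausage"), can be pictured as a linear sequence of slots, each holding competing ASR word hypotheses with posterior probabilities. The following is a minimal illustrative sketch, not the authors' code; the example data and helper name are hypothetical:

```python
# A confusion network compactly encodes ASR alternatives as a sequence of
# "slots"; each slot maps candidate words (or "<eps>" for a skipped word)
# to posterior probabilities. All words and probabilities are made up.
confusion_network = [
    {"what": 0.7, "watt": 0.3},
    {"is": 0.9, "<eps>": 0.1},
    {"the": 1.0},
    {"capital": 0.6, "capitol": 0.4},
    {"of": 1.0},
    {"france": 0.8, "frants": 0.2},
]

def best_hypothesis(cn):
    """Pick the highest-posterior word in each slot (the 1-best ASR path)."""
    words = []
    for slot in cn:
        word = max(slot, key=slot.get)
        if word != "<eps>":  # epsilon arcs mean "no word in this slot"
            words.append(word)
    return " ".join(words)

print(best_hypothesis(confusion_network))
```

Unlike the single 1-best transcript shown above, the paper's encoder consumes the whole network, so lower-probability alternatives (e.g. "capitol") remain available when the top hypothesis is an ASR error.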
Related papers
- Question Answering in Natural Language: the Special Case of Temporal
Expressions [0.0]
Our work aims to leverage answer extraction, a popular approach in general question answering, to find answers to temporal questions within a paragraph.
To train our model, we propose a new dataset, inspired by SQuAD, specifically tailored to provide rich temporal information.
Our evaluation shows that a deep learning model trained to perform pattern matching, often used in general question answering, can be adapted to temporal question answering.
arXiv Detail & Related papers (2023-11-23T16:26:24Z) - Prompt Guided Copy Mechanism for Conversational Question Answering [30.247806772658635]
We propose a novel prompt-guided copy mechanism to improve the fluency and appropriateness of the extracted answers.
Our approach uses prompts to link questions to answers and employs attention to guide the copy mechanism to verify the naturalness of extracted answers.
arXiv Detail & Related papers (2023-08-07T09:15:03Z) - Concise Answers to Complex Questions: Summarization of Long-form Answers [27.190319030219285]
We conduct a user study on summarized answers generated from state-of-the-art models and our newly proposed extract-and-decontextualize approach.
We find a large proportion of long-form answers can be adequately summarized by at least one system, while complex and implicit answers are challenging to compress.
We observe that decontextualization improves the quality of the extractive summary, exemplifying its potential in the summarization task.
arXiv Detail & Related papers (2023-05-30T17:59:33Z) - Conversational QA Dataset Generation with Answer Revision [2.5838973036257458]
We introduce a novel framework that extracts question-worthy phrases from a passage and then generates corresponding questions considering previous conversations.
Our framework revises the extracted answers after generating questions so that answers exactly match paired questions.
arXiv Detail & Related papers (2022-09-23T04:05:38Z) - End-to-end Spoken Conversational Question Answering: Task, Dataset and
Model [92.18621726802726]
In spoken question answering, the systems are designed to answer questions from contiguous text spans within the related speech transcripts.
We propose a new Spoken Conversational Question Answering task (SCQA), aiming at enabling the systems to model complex dialogue flows.
Our main objective is to build a system that handles conversational questions based on audio recordings, and to explore the feasibility of providing additional cues from different modalities during information gathering.
arXiv Detail & Related papers (2022-04-29T17:56:59Z) - How Do We Answer Complex Questions: Discourse Structure of Long-form
Answers [51.973363804064704]
We study the functional structure of long-form answers collected from three datasets.
Our main goal is to understand how humans organize information to craft complex answers.
Our work can inspire future research on discourse-level modeling and evaluation of long-form QA systems.
arXiv Detail & Related papers (2022-03-21T15:14:10Z) - A Graph-guided Multi-round Retrieval Method for Conversational
Open-domain Question Answering [52.041815783025186]
We propose a novel graph-guided retrieval method to model the relations among answers across conversation turns.
We also propose to incorporate the multi-round relevance feedback technique to explore the impact of the retrieval context on current question understanding.
arXiv Detail & Related papers (2021-04-17T04:39:41Z) - Towards Data Distillation for End-to-end Spoken Conversational Question
Answering [65.124088336738]
We propose a new Spoken Conversational Question Answering task (SCQA).
SCQA aims at enabling QA systems to model complex dialogue flows given speech utterances and text corpora.
Our main objective is to build a QA system to deal with conversational questions both in spoken and text forms.
arXiv Detail & Related papers (2020-10-18T05:53:39Z) - Generating Dialogue Responses from a Semantic Latent Space [75.18449428414736]
We propose an alternative to the end-to-end classification on vocabulary.
We learn the pair relationship between the prompts and responses as a regression task on a latent space.
Human evaluation showed that learning the task on a continuous space can generate responses that are both relevant and informative.
arXiv Detail & Related papers (2020-10-04T19:06:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.