HeySQuAD: A Spoken Question Answering Dataset
- URL: http://arxiv.org/abs/2304.13689v2
- Date: Tue, 27 Feb 2024 13:57:08 GMT
- Title: HeySQuAD: A Spoken Question Answering Dataset
- Authors: Yijing Wu, SaiKrishna Rallabandi, Ravisutha Srinivasamurthy, Parag
Pravin Dakle, Alolika Gon, Preethi Raghavan
- Abstract summary: This study presents a new large-scale community-shared SQA dataset called HeySQuAD.
Our goal is to measure the ability of machines to accurately understand noisy spoken questions and provide reliable answers.
- Score: 2.3881849082514153
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spoken question answering (SQA) systems are critical for digital assistants
and other real-world use cases, but evaluating their performance is challenging
because doing so requires human-spoken questions. This study presents a new
large-scale community-shared SQA dataset called HeySQuAD, which includes 76k
human-spoken questions, 97k machine-generated questions, and their
corresponding textual answers from the SQuAD QA dataset. Our goal is to measure
the ability of machines to accurately understand noisy spoken questions and
provide reliable answers. Through extensive testing, we demonstrate that
training with transcribed human-spoken and original SQuAD questions leads to a
significant improvement (12.51%) in answering human-spoken questions compared
to training with only the original SQuAD textual questions. Moreover,
evaluating with a higher-quality transcription can lead to a further
improvement of 2.03%. This research has significant implications for the
development of SQA systems and their ability to meet the needs of users in
real-world scenarios.
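The reported gains come from a simple data recipe: mix the transcribed human-spoken questions with the original textual SQuAD questions and fine-tune a standard extractive QA model on the combined set. The sketch below illustrates that recipe under stated assumptions; it is not the authors' released code, and the file name heysquad_human_transcribed.jsonl, the roberta-base checkpoint, and the hyperparameters are placeholders rather than the paper's exact setup.

```python
# Minimal sketch (assumptions noted in comments): fine-tune an extractive QA model on
# original SQuAD questions mixed with transcribed human-spoken questions.
from datasets import load_dataset, concatenate_datasets
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer, Trainer,
                          TrainingArguments, default_data_collator)

# Original textual SQuAD questions from the Hugging Face hub.
original = load_dataset("squad", split="train")
# Hypothetical JSON Lines file holding ASR transcriptions of the human-spoken questions,
# assumed to carry the same flat fields as the squad dataset (question, context, answers, ...).
spoken = load_dataset("json", data_files="heysquad_human_transcribed.jsonl", split="train")
spoken = spoken.cast(original.features)  # align schemas so the two sources can be concatenated
# Mixing the two sources is the step the abstract credits with the 12.51% gain.
train_data = concatenate_datasets([original, spoken]).shuffle(seed=42)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def preprocess(examples):
    """Tokenize (question, context) pairs and map character answer spans to token positions."""
    enc = tokenizer(examples["question"], examples["context"], truncation="only_second",
                    max_length=384, padding="max_length", return_offsets_mapping=True)
    starts, ends = [], []
    for i, offsets in enumerate(enc["offset_mapping"]):
        answer = examples["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        sequence_ids = enc.sequence_ids(i)
        tok_start = tok_end = 0  # falls back to position 0 if the answer was truncated away
        for idx, (span, sid) in enumerate(zip(offsets, sequence_ids)):
            if sid != 1:          # only consider context tokens
                continue
            if span[0] <= start_char < span[1]:
                tok_start = idx
            if span[0] < end_char <= span[1]:
                tok_end = idx
        starts.append(tok_start)
        ends.append(tok_end)
    enc["start_positions"] = starts
    enc["end_positions"] = ends
    enc.pop("offset_mapping")
    return enc

tokenized = train_data.map(preprocess, batched=True, remove_columns=train_data.column_names)

model = AutoModelForQuestionAnswering.from_pretrained("roberta-base")
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="qa-mixed", per_device_train_batch_size=16,
                           num_train_epochs=2, learning_rate=3e-5),
    train_dataset=tokenized,
    data_collator=default_data_collator,
)
trainer.train()
```

The further 2.03% figure in the abstract concerns evaluation rather than training: scoring the same model against higher-quality transcriptions of the test questions, which in a setup like the one above only means swapping the evaluation file.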
Related papers
- Diversity Enhanced Narrative Question Generation for Storybooks [4.043005183192124]
We introduce a multi-question generation model (mQG) capable of generating multiple, diverse, and answerable questions.
To validate the answerability of the generated questions, we employ a SQuAD2.0 fine-tuned question answering model.
mQG shows promising results across various evaluation metrics, among strong baselines.
arXiv Detail & Related papers (2023-10-25T08:10:04Z)
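The answerability check mentioned in the entry above can be illustrated with a publicly available SQuAD2.0-fine-tuned checkpoint; the sketch below is only an illustration of the idea, and deepset/roberta-base-squad2 together with the 0.5 score threshold are stand-ins, not mQG's actual configuration.

```python
# Illustrative answerability filter (not mQG's code): a SQuAD2.0-style QA model can
# return an empty answer for unanswerable questions, which we use to keep only
# generated questions the passage can actually answer.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")  # stand-in checkpoint

def is_answerable(question: str, passage: str, threshold: float = 0.5) -> bool:
    # handle_impossible_answer=True allows the model to predict "no answer"
    pred = qa(question=question, context=passage, handle_impossible_answer=True)
    return bool(pred["answer"].strip()) and pred["score"] >= threshold

passage = "The fox slept under the old oak tree after a long day of exploring."
print(is_answerable("Where did the fox sleep?", passage))        # expected: True
print(is_answerable("What colour was the fox's hat?", passage))  # expected: False
```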
- Answering Unanswered Questions through Semantic Reformulations in Spoken QA [20.216161323866867]
Spoken Question Answering (QA) is a key feature of voice assistants, usually backed by multiple QA systems.
We analyze failed QA requests to identify core challenges: lexical gaps, proposition types, complex syntactic structure, and high specificity.
We propose a Semantic Question Reformulation (SURF) model offering three linguistically-grounded operations (repair, syntactic reshaping, generalization) to rewrite questions to facilitate answering.
arXiv Detail & Related papers (2023-05-27T07:19:27Z)
- Modern Question Answering Datasets and Benchmarks: A Survey [5.026863544662493]
Question Answering (QA) is one of the most important natural language processing (NLP) tasks.
It aims to use NLP technologies to generate a corresponding answer to a given question based on a massive unstructured corpus.
In this paper, we investigate influential QA datasets that have been released in the era of deep learning.
arXiv Detail & Related papers (2022-06-30T05:53:56Z)
- Improving the Question Answering Quality using Answer Candidate Filtering based on Natural-Language Features [117.44028458220427]
We address the problem of how the Question Answering (QA) quality of a given system can be improved.
Our main contribution is an approach capable of identifying wrong answers provided by a QA system.
In particular, our approach has shown its potential by removing, in many cases, the majority of incorrect answers.
arXiv Detail & Related papers (2021-12-10T11:09:44Z)
- QAConv: Question Answering on Informative Conversations [85.2923607672282]
We focus on informative conversations including business emails, panel discussions, and work channels.
In total, we collect 34,204 QA pairs, including span-based, free-form, and unanswerable questions.
arXiv Detail & Related papers (2021-05-14T15:53:05Z)
- A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers [66.11048565324468]
We present a dataset of 5,049 questions over 1,585 Natural Language Processing papers.
Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text.
We find that existing models that do well on other QA tasks do not perform well on answering these questions, underperforming humans by at least 27 F1 points when answering them from entire papers.
arXiv Detail & Related papers (2021-05-07T00:12:34Z)
- NoiseQA: Challenge Set Evaluation for User-Centric Question Answering [68.67783808426292]
We show that components in the pipeline that precede an answering engine can introduce varied and considerable sources of error.
We conclude that there is substantial room for progress before QA systems can be effectively deployed.
arXiv Detail & Related papers (2021-02-16T18:35:29Z)
- Summary-Oriented Question Generation for Informational Queries [23.72999724312676]
We aim to produce self-explanatory questions that focus on main document topics and are answerable with variable length passages as appropriate.
Our model shows SOTA performance of SQ generation on the NQ dataset (20.1 BLEU-4).
We further apply our model to out-of-domain news articles, evaluating with a QA system due to the lack of gold questions, and demonstrate that it produces better SQs for news articles, with further confirmation via a human evaluation.
arXiv Detail & Related papers (2020-10-19T17:30:08Z)
- Towards Data Distillation for End-to-end Spoken Conversational Question Answering [65.124088336738]
We propose a new Spoken Conversational Question Answering task (SCQA).
SCQA aims at enabling QA systems to model complex dialogue flows given the speech utterances and text corpora.
Our main objective is to build a QA system to deal with conversational questions both in spoken and text forms.
arXiv Detail & Related papers (2020-10-18T05:53:39Z) - Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.