Contextualized Attention-based Knowledge Transfer for Spoken
Conversational Question Answering
- URL: http://arxiv.org/abs/2010.11066v4
- Date: Thu, 24 Jun 2021 16:32:18 GMT
- Title: Contextualized Attention-based Knowledge Transfer for Spoken
Conversational Question Answering
- Authors: Chenyu You, Nuo Chen, Yuexian Zou
- Abstract summary: Spoken conversational question answering (SCQA) requires machines to model complex dialogue flow.
We propose CADNet, a novel contextualized attention-based distillation approach.
We conduct extensive experiments on the Spoken-CoQA dataset and demonstrate that our approach achieves remarkable performance.
- Score: 63.72278693825945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spoken conversational question answering (SCQA) requires machines to model
complex dialogue flow given the speech utterances and text corpora. Different
from traditional text question answering (QA) tasks, SCQA involves audio signal
processing, passage comprehension, and contextual understanding. However, ASR
systems introduce unexpected noise into the transcriptions, which degrades
SCQA performance. To overcome this problem, we propose CADNet,
a novel contextualized attention-based distillation approach, which applies
both cross-attention and self-attention to obtain ASR-robust contextualized
embedding representations of the passage and dialogue history for performance
improvements. We also introduce a spoken conversational knowledge distillation
framework to distill the ASR-robust knowledge from the estimated probabilities
of the teacher model to the student. We conduct extensive experiments on the
Spoken-CoQA dataset and demonstrate that our approach achieves remarkable
performance in this task.
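The abstract names two ingredients: cross- and self-attention to build ASR-robust contextualized representations of the passage and dialogue history, and a teacher-student distillation loss over the estimated answer probabilities. Below is a minimal PyTorch sketch of how such a combination could look; the module names, dimensions, temperature, and weighting are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: a cross-attention block plus a teacher-student
# distillation loss over answer probabilities, loosely following the abstract.
# Names, dimensions, and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttentionBlock(nn.Module):
    """Fuse passage tokens with dialogue-history tokens, then refine with self-attention."""

    def __init__(self, hidden_size: int = 768, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(hidden_size)
        self.norm2 = nn.LayerNorm(hidden_size)

    def forward(self, passage: torch.Tensor, history: torch.Tensor) -> torch.Tensor:
        # Cross-attention: passage queries attend over the dialogue history.
        attended, _ = self.cross_attn(passage, history, history)
        x = self.norm1(passage + attended)
        # Self-attention refines the fused passage representation.
        refined, _ = self.self_attn(x, x, x)
        return self.norm2(x + refined)


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend a soft KL term (teacher probabilities -> student) with the hard CE term."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In a setup like the one described, the teacher would be trained on clean written transcripts and the student on noisy ASR output, with the KL term transferring the teacher's estimated answer distribution to the student.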
Related papers
- SpeechDPR: End-to-End Spoken Passage Retrieval for Open-Domain Spoken Question Answering [76.4510005602893]
Spoken Question Answering (SQA) is essential for machines to reply to a user's question by finding the answer span within a given spoken passage.
This paper proposes the first known end-to-end framework, Speech Passage Retriever (SpeechDPR).
SpeechDPR learns a sentence-level semantic representation by distilling knowledge from the cascading model of unsupervised ASR (UASR) and text dense retriever (TDR).
arXiv Detail & Related papers (2024-01-24T14:08:38Z) - On the Impact of Speech Recognition Errors in Passage Retrieval for
Spoken Question Answering [13.013751306590303]
We study the robustness of lexical and dense retrievers against questions with synthetic ASR noise.
We create a new dataset with questions voiced by human users and use their transcriptions to show that the retrieval performance can further degrade when dealing with natural ASR noise instead of synthetic ASR noise.
arXiv Detail & Related papers (2022-09-26T18:29:36Z) - End-to-end Spoken Conversational Question Answering: Task, Dataset and
Model [92.18621726802726]
In spoken question answering, the systems are designed to answer questions from contiguous text spans within the related speech transcripts.
We propose a new Spoken Conversational Question Answering task (SCQA), aiming at enabling the systems to model complex dialogue flows.
Our main objective is to build a system that handles conversational questions over audio recordings, and to explore the feasibility of providing systems with additional cues from different modalities for information gathering.
arXiv Detail & Related papers (2022-04-29T17:56:59Z) - An Initial Investigation of Non-Native Spoken Question-Answering [36.89541375786233]
We show that a simple text-based ELECTRA MC model trained on SQuAD2.0 transfers well for spoken question answering tests.
One significant challenge is the lack of appropriately annotated speech corpora to train systems for this task.
Mismatches must be considered between text documents and spoken responses, and between non-native spoken grammar and written grammar.
arXiv Detail & Related papers (2021-07-09T21:59:16Z) - Self-supervised Dialogue Learning for Spoken Conversational Question
Answering [29.545937716796082]
In spoken conversational question answering (SCQA), the answer to the corresponding question is generated by retrieving and then analyzing a fixed spoken document, including multi-part conversations.
We introduce a self-supervised learning approach, including incoherence discrimination, insertion detection, and question prediction, to explicitly capture the coreference resolution and dialogue coherence.
Our proposed method provides more coherent, meaningful, and appropriate responses, yielding superior performance gains compared to the original pre-trained language models.
arXiv Detail & Related papers (2021-06-04T00:09:38Z) - Knowledge Distillation for Improved Accuracy in Spoken Question
Answering [63.72278693825945]
We devise a training strategy to perform knowledge distillation from spoken documents and their written counterparts.
Our work takes a step towards distilling knowledge from the language model as a supervision signal.
Experiments demonstrate that our approach outperforms several state-of-the-art language models on the Spoken-SQuAD dataset.
arXiv Detail & Related papers (2020-10-21T15:18:01Z) - Towards Data Distillation for End-to-end Spoken Conversational Question
Answering [65.124088336738]
We propose a new Spoken Conversational Question Answering task (SCQA).
SCQA aims at enabling QA systems to model complex dialogue flow given the speech utterances and text corpora.
Our main objective is to build a QA system to deal with conversational questions both in spoken and text forms.
arXiv Detail & Related papers (2020-10-18T05:53:39Z) - Improving Readability for Automatic Speech Recognition Transcription [50.86019112545596]
We propose a novel NLP task called ASR post-processing for readability (APR).
APR aims to transform the noisy ASR output into a readable text for humans and downstream tasks while maintaining the semantic meaning of the speaker.
We compare fine-tuned models based on several open-sourced and adapted pre-trained models with the traditional pipeline method.
arXiv Detail & Related papers (2020-04-09T09:26:42Z)