A Survey of QUD Models for Discourse Processing
- URL: http://arxiv.org/abs/2502.15573v1
- Date: Thu, 13 Feb 2025 07:04:08 GMT
- Title: A Survey of QUD Models for Discourse Processing
- Authors: Yingxue Fu
- Abstract summary: Question Under Discussion (QUD) is a linguistic analytic framework. Various models have been proposed for implementing QUD for discourse processing. Some questions that may require further study are suggested.
- Score: 5.439020425819001
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Question Under Discussion (QUD), originally a linguistic analytic framework, has gained increasing attention in the natural language processing community over the years. Various models have been proposed for implementing QUD for discourse processing. This survey summarizes these models, with a focus on their application to written texts, and examines studies that explore the relationship between QUD and mainstream discourse frameworks, including RST, PDTB and SDRT. Some questions that may require further study are suggested.
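To make the QUD view concrete before the paper summaries below, here is a minimal sketch in Python; the questions and sentences are invented for illustration and do not come from the survey:

```python
# A toy illustration of the QUD framework: each sentence is read as the
# answer to an implicit question raised by the preceding discourse.
# Questions and sentences are hand-written placeholders.
discourse = [
    ("What happened?", "The company cut its sales forecast."),
    ("Why did it cut the forecast?", "Overseas demand weakened."),
    ("What was the consequence?", "Its shares fell five percent."),
]

for question, sentence in discourse:
    print(f"QUD: {question}\n  -> {sentence}")
```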
Related papers
- On The Landscape of Spoken Language Models: A Comprehensive Survey [144.11278973534203]
Spoken language models (SLMs) act as universal speech processing systems.
Work in this area is very diverse, with a range of terminology and evaluation settings.
arXiv Detail & Related papers (2025-04-11T13:40:53Z)
- Automated Speaking Assessment of Conversation Tests with Novel Graph-based Modeling on Spoken Response Coherence [11.217656140423207]
ASAC aims to evaluate the overall speaking proficiency of an L2 speaker in a setting where an interlocutor interacts with one or more candidates.
We propose a hierarchical graph model that aptly incorporates both broad inter-response interactions and nuanced semantic information.
Extensive experimental results on the NICT-JLE benchmark dataset suggest that our proposed modeling approach can yield considerable improvements in prediction accuracy.
arXiv Detail & Related papers (2024-09-11T07:24:07Z)
- QUDEVAL: The Evaluation of Questions Under Discussion Discourse Parsing [87.20804165014387]
Questions Under Discussion (QUD) is a versatile linguistic framework in which discourse progresses by continuously raising questions and answering them.
This work introduces the first framework for the automatic evaluation of QUD parsing.
We present QUDeval, a dataset of fine-grained evaluation of 2,190 QUD questions generated from both fine-tuned systems and LLMs.
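As a rough sketch of what fine-grained evaluation of generated QUD questions might look like, consider the rubric below; the criterion names are illustrative placeholders, not QUDeval's actual annotation scheme:

```python
from dataclasses import dataclass

@dataclass
class QUDJudgment:
    """Hypothetical per-question rubric for a generated QUD.
    Criteria are placeholders, not QUDeval's exact categories."""
    is_answered: bool          # does the target sentence answer the question?
    anchored_in_context: bool  # does the question arise from prior sentences?
    fluent: bool               # is the question well-formed?

    def score(self) -> float:
        # Fraction of criteria satisfied, in [0, 1].
        return sum([self.is_answered, self.anchored_in_context, self.fluent]) / 3

judgments = [
    QUDJudgment(True, True, True),
    QUDJudgment(True, False, True),
]
print(sum(j.score() for j in judgments) / len(judgments))  # mean quality
```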
arXiv Detail & Related papers (2023-10-23T03:03:58Z)
- Neural Conversation Models and How to Rein Them in: A Survey of Failures and Fixes [17.489075240435348]
Recent conditional language models are able to continue any kind of text source in an often seemingly fluent way.
From a linguistic perspective, however, the demands of contributing to a conversation are high.
Recent approaches try to tame the underlying language models at various intervention points.
arXiv Detail & Related papers (2023-08-11T12:07:45Z)
- Reasoning with Language Model Prompting: A Survey [86.96133788869092]
Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications.
This paper provides a comprehensive survey of cutting-edge research on reasoning with language model prompting.
arXiv Detail & Related papers (2022-12-19T16:32:42Z)
- Discourse Analysis via Questions and Answers: Parsing Dependency Structures of Questions Under Discussion [57.43781399856913]
This work adopts the linguistic framework of Questions Under Discussion (QUD) for discourse analysis.
We characterize relationships between sentences as free-form questions, in contrast to exhaustive fine-grained questions.
We develop a first-of-its-kind QUD parser that derives a dependency structure of questions over full documents.
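A schematic of such a dependency structure, assuming each sentence's question anchors to exactly one earlier sentence; the data and code are toy illustrations, not the paper's parser:

```python
# parent[i] is the index of the sentence whose content raises the
# question answered by sentence i; sentence 0 is the root.
sentences = [
    "The city approved a new transit plan.",  # 0 (root)
    "It adds two light-rail lines.",          # 1, anchored to 0
    "Construction starts next spring.",       # 2, anchored to 1
    "Critics question the budget.",           # 3, anchored to 0
]
parent = [None, 0, 1, 0]

def print_tree(node: int, depth: int = 0) -> None:
    # Depth-first traversal that renders the question dependencies
    # as an indented outline.
    print("  " * depth + sentences[node])
    for child, p in enumerate(parent):
        if p == node:
            print_tree(child, depth + 1)

print_tree(0)
```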
arXiv Detail & Related papers (2022-10-12T03:53:12Z)
- Probing Task-Oriented Dialogue Representation from Language Models [106.02947285212132]
This paper investigates pre-trained language models to find out which model intrinsically carries the most informative representation for task-oriented dialogue tasks.
We fine-tune a feed-forward layer as the classifier probe on top of a fixed pre-trained language model with annotated labels in a supervised way.
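A minimal sketch of this probing recipe, assuming a Hugging Face encoder; the model name and label count are placeholders, not the paper's exact setup:

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

# Frozen pre-trained encoder plus a trainable feed-forward probe.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
for p in encoder.parameters():
    p.requires_grad = False  # only the probe is trained

num_labels = 4  # e.g., dialogue-act classes; placeholder value
probe = nn.Linear(encoder.config.hidden_size, num_labels)

batch = tokenizer(["could you book a table for two?"],
                  return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state[:, 0]  # [CLS] vector
logits = probe(hidden)  # train with cross-entropy on annotated labels
```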
arXiv Detail & Related papers (2020-10-26T21:34:39Z)
- Modeling Topical Relevance for Multi-Turn Dialogue Generation [61.87165077442267]
We propose a new model, named STAR-BTM, to tackle the problem of topic drift in multi-turn dialogue.
The Biterm Topic Model is pre-trained on the whole training dataset.
Topic-level attention weights are then computed from the topic representation of each context.
Experimental results on both Chinese customer services data and English Ubuntu dialogue data show that STAR-BTM significantly outperforms several state-of-the-art methods.
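The topic-level attention step can be sketched as a softmax over context topic vectors; this is a schematic stand-in, not the exact STAR-BTM formulation:

```python
import numpy as np

def topic_attention(context_topics: np.ndarray,
                    current_topic: np.ndarray) -> np.ndarray:
    """Softmax attention over context utterances, scored by the dot
    product of each utterance's topic vector with the current topic
    representation. Illustrative only; not STAR-BTM's exact equations."""
    scores = context_topics @ current_topic
    exp = np.exp(scores - scores.max())  # subtract max for stability
    return exp / exp.sum()

# Three context utterances with 5-dimensional topic vectors (toy data).
rng = np.random.default_rng(0)
ctx = rng.random((3, 5))
cur = rng.random(5)
print(topic_attention(ctx, cur))  # weights over the context, summing to 1
```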
arXiv Detail & Related papers (2020-09-27T03:33:22Z)