Deciding Whether to Ask Clarifying Questions in Large-Scale Spoken
Language Understanding
- URL: http://arxiv.org/abs/2109.12451v1
- Date: Sat, 25 Sep 2021 22:32:10 GMT
- Title: Deciding Whether to Ask Clarifying Questions in Large-Scale Spoken
Language Understanding
- Authors: Joo-Kyung Kim, Guoyin Wang, Sungjin Lee, Young-Bum Kim
- Abstract summary: A large-scale conversational agent can struggle to understand user utterances that carry various ambiguities.
We propose a neural self-attentive model that leverages the ambiguous hypotheses and contextual signals.
- Score: 28.195853603190447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A large-scale conversational agent can struggle to understand user
utterances that carry various ambiguities, such as ASR ambiguity, intent
ambiguity, and hypothesis ambiguity. When ambiguities are detected, the agent
should engage in a clarifying dialog to resolve them before committing to
actions. However, asking clarifying questions for every ambiguity occurrence
could lead to asking too many questions, ultimately hampering the user
experience. To trigger clarifying questions only when necessary for user
satisfaction, we propose a neural self-attentive model that leverages the
ambiguous hypotheses together with contextual signals. We conduct extensive
experiments on five common ambiguity types using real data from a large-scale
commercial conversational agent and demonstrate significant improvement over a
set of baseline approaches.
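As a rough illustration of this setup, the sketch below scores a set of ambiguous interpretation hypotheses with self-attention and combines them with contextual signals to decide whether to ask a clarifying question. This is a minimal sketch, not the paper's architecture; all dimensions, feature names, and the pooling choice are assumptions.

```python
# Minimal sketch of a self-attentive clarification trigger (illustrative
# only; the paper's actual architecture, features, and sizes differ).
import torch
import torch.nn as nn

class ClarificationTrigger(nn.Module):
    def __init__(self, hyp_dim=128, ctx_dim=16, n_heads=4):
        super().__init__()
        # Self-attention over the N-best hypotheses lets each hypothesis be
        # scored relative to its competitors.
        self.attn = nn.MultiheadAttention(hyp_dim, n_heads, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hyp_dim + ctx_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # logit for "ask a clarifying question"
        )

    def forward(self, hyps, ctx):
        # hyps: (batch, n_hyps, hyp_dim) embeddings of ambiguous hypotheses
        # ctx:  (batch, ctx_dim) contextual signals (assumed features such
        #       as ASR confidence or dialog-history statistics)
        attended, _ = self.attn(hyps, hyps, hyps)
        pooled = attended.mean(dim=1)  # summarize the hypothesis set
        return torch.sigmoid(self.classifier(torch.cat([pooled, ctx], -1)))

model = ClarificationTrigger()
p_ask = model(torch.randn(2, 5, 128), torch.randn(2, 16))
# Trigger a clarifying dialog only when p_ask exceeds a tuned threshold.
print(p_ask.squeeze(-1))
```

Thresholding this probability, rather than clarifying on every detected ambiguity, is what keeps the question rate low enough to preserve the user experience.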
Related papers
- Asking Multimodal Clarifying Questions in Mixed-Initiative Conversational Search [89.1772985740272]
In mixed-initiative conversational search systems, clarifying questions are used to help users who struggle to express their intentions in a single query.
We hypothesize that in scenarios where multimodal information is pertinent, the clarification process can be improved by using non-textual information.
We collect a dataset named Melon that contains over 4k multimodal clarifying questions, enriched with over 14k images.
Several analyses are conducted to understand the importance of multimodal contents during the query clarification phase.
arXiv Detail & Related papers (2024-02-12T16:04:01Z)
- Asking the Right Question at the Right Time: Human and Model Uncertainty Guidance to Ask Clarification Questions [2.3838507844983248]
We show that model uncertainty does not mirror human clarification-seeking behavior.
We propose an approach to generating clarification questions based on model uncertainty estimation; a toy illustration follows this entry.
arXiv Detail & Related papers (2024-02-09T16:15:30Z)
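One way to make the uncertainty-guidance idea above concrete is an entropy threshold over the model's distribution of interpretations. Note that the paper's finding is that raw model uncertainty alone does not match human clarification-seeking behavior, so treat this purely as the baseline mechanism; the threshold and probabilities below are made-up values.

```python
# Toy entropy-threshold trigger (illustrative values only).
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def should_clarify(interpretation_probs, threshold=1.0):
    # High entropy means probability mass is spread across interpretations,
    # so asking a clarification question is more likely to be informative.
    return entropy(interpretation_probs) > threshold

print(should_clarify([0.9, 0.05, 0.05]))     # False: model is confident
print(should_clarify([0.3, 0.3, 0.2, 0.2]))  # True: near-uniform, so ask
```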
- Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs [58.620269228776294]
We propose a task-agnostic framework for resolving ambiguity by asking users clarifying questions.
We evaluate systems across three NLP applications: question answering, machine translation and natural language inference.
We find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs; a simplified sketch of the intent-sampling idea follows this entry.
arXiv Detail & Related papers (2023-11-16T00:18:50Z)
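The decision of when to clarify can be approximated by sampling plausible intents and measuring their agreement. The sketch below is a loose paraphrase of that idea, not the paper's intent-sim implementation, and the sampled intents are hard-coded stand-ins for LM outputs.

```python
# Loose sketch: clarify when sampled intent interpretations disagree.
from collections import Counter

def should_clarify(sampled_intents, agreement_threshold=0.7):
    # sampled_intents would come from an LM sampled at temperature > 0;
    # here they are hard-coded stand-ins. If no single intent dominates,
    # the query is ambiguous enough to be worth a clarifying question.
    top = Counter(sampled_intents).most_common(1)[0][1]
    return top / len(sampled_intents) < agreement_threshold

samples = ["book a flight", "book a flight", "book a flight",
           "book a hotel", "book a hotel"]
print(should_clarify(samples))  # True: only 60% agreement on the top intent
```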
- Open-ended Commonsense Reasoning with Unrestricted Answer Scope [47.14397700770702]
Open-ended Commonsense Reasoning is defined as solving a commonsense question without providing either 1) a short list of answer candidates or 2) a pre-defined answer scope.
In this work, we leverage pre-trained language models to iteratively retrieve reasoning paths over an external knowledge base.
The retrieved reasoning paths help identify the most precise answer to the commonsense question; a toy retrieval loop is sketched after this entry.
arXiv Detail & Related papers (2023-10-18T02:45:54Z)
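The iterative retrieval of reasoning paths can be pictured as hop-by-hop expansion over a knowledge graph, as in the toy loop below; the graph, the relations, and the absence of LM scoring are all simplifications.

```python
# Toy hop-by-hop expansion of reasoning paths over a hypothetical KB.
from collections import deque

KB = {  # entity -> list of (relation, neighbor) edges; made-up facts
    "electric car": [("needs", "charging"), ("is_a", "vehicle")],
    "charging": [("requires", "charging station")],
    "vehicle": [("used_for", "transport")],
}

def retrieve_paths(start, max_hops=2):
    """Extend every frontier path by one KB edge per iteration."""
    paths, frontier = [], deque([[start]])
    for _ in range(max_hops):
        next_frontier = deque()
        while frontier:
            path = frontier.popleft()
            for rel, nbr in KB.get(path[-1], []):
                new_path = path + [rel, nbr]
                paths.append(new_path)  # candidate reasoning path
                next_frontier.append(new_path)
        frontier = next_frontier
    return paths

for p in retrieve_paths("electric car"):
    print(" -> ".join(p))
# In the paper's setting, a pre-trained LM would score these paths and the
# best-supported endpoint would be returned as the answer.
```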
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- We're Afraid Language Models Aren't Modeling Ambiguity [136.8068419824318]
Managing ambiguity is a key part of human language understanding.
We characterize ambiguity in a sentence by its effect on entailment relations with another sentence.
We show that a multilabel NLI model can flag political claims in the wild that are misleading due to ambiguity; a minimal version of the multilabel flagging rule follows this entry.
arXiv Detail & Related papers (2023-04-27T17:57:58Z)
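The multilabel NLI idea reduces to a simple rule: if more than one entailment label is independently probable for the same premise-hypothesis pair (one per plausible reading), flag the premise as ambiguous. The scores below are invented; a trained multilabel NLI model would supply them.

```python
# Toy multilabel ambiguity flag; scores are invented, not model outputs.
def flag_ambiguity(label_scores, threshold=0.5):
    # label_scores: independent per-label probabilities (not a softmax),
    # so several labels can be "on" at once, one per plausible reading.
    active = [lbl for lbl, s in label_scores.items() if s > threshold]
    return len(active) > 1, active

scores = {"entailment": 0.81, "neutral": 0.12, "contradiction": 0.77}
print(flag_ambiguity(scores))  # (True, ['entailment', 'contradiction'])
```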
- Is the Elephant Flying? Resolving Ambiguities in Text-to-Image Generative Models [64.58271886337826]
We study ambiguities that arise in text-to-image generative models.
We propose a framework to mitigate ambiguities in the prompts given to the systems by soliciting clarifications from the user.
arXiv Detail & Related papers (2022-11-17T17:12:43Z)
- Decision-Theoretic Question Generation for Situated Reference Resolution: An Empirical Study and Computational Model [11.543386846947554]
We analyzed dialogue data from an interactive study in which participants controlled a virtual robot tasked with organizing a set of tools while engaging in dialogue with a live, remote experimenter.
We discovered a number of novel results, including the distribution of question types used to resolve ambiguity and the influence of dialogue-level factors on the reference resolution process.
arXiv Detail & Related papers (2021-10-12T19:23:25Z)
- Interactive Question Clarification in Dialogue via Reinforcement Learning [36.746578601398866]
We propose a reinforcement learning approach that clarifies ambiguous questions by suggesting refinements of the original query.
The model is trained with a deep policy network.
We evaluate our model on real-world user clicks and demonstrate significant improvements; a minimal policy-gradient sketch follows this entry.
arXiv Detail & Related papers (2020-12-17T06:38:04Z)
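A compressed sketch of the policy-gradient (REINFORCE) update such a model might use is below; the reward definition (a click on the suggested refinement), the sizes, and the single-step update are assumptions rather than the paper's setup.

```python
# Minimal REINFORCE update for picking a query refinement (illustrative).
import torch
import torch.nn as nn

class RefinementPolicy(nn.Module):
    def __init__(self, query_dim=64, n_refinements=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(query_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_refinements))

    def forward(self, query_emb):
        # Distribution over candidate refinements of the ambiguous query.
        return torch.distributions.Categorical(logits=self.net(query_emb))

policy = RefinementPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

query_emb = torch.randn(64)             # embedding of the ambiguous query
dist = policy(query_emb)
action = dist.sample()                  # refinement shown to the user
reward = 1.0                            # assumed: 1.0 if the user clicked it
loss = -dist.log_prob(action) * reward  # REINFORCE gradient estimator
opt.zero_grad()
loss.backward()
opt.step()
print(f"suggested refinement #{action.item()}, loss {loss.item():.3f}")
```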
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.