Reasoning About Intent for Ambiguous Requests
- URL: http://arxiv.org/abs/2511.10453v1
- Date: Fri, 14 Nov 2025 01:52:18 GMT
- Title: Reasoning About Intent for Ambiguous Requests
- Authors: Irina Saparina, Mirella Lapata
- Abstract summary: We propose generating multiple interpretation-answer pairs in a single structured response to ambiguous requests. Our models are trained with reinforcement learning and customized reward functions using multiple valid answers as supervision.
- Score: 47.979705857002415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models often respond to ambiguous requests by implicitly committing to one interpretation. Intent misunderstandings can frustrate users and create safety risks. To address this, we propose generating multiple interpretation-answer pairs in a single structured response to ambiguous requests. Our models are trained with reinforcement learning and customized reward functions using multiple valid answers as supervision. Experiments on conversational question answering and semantic parsing demonstrate that our method achieves higher coverage of valid answers than baseline approaches. Human evaluation confirms that predicted interpretations are highly aligned with their answers. Our approach promotes transparency with explicit interpretations, achieves efficiency by requiring only one generation step, and supports downstream applications through its structured output format.
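The abstract describes, but does not specify, the structured output format and the coverage-oriented reward. A minimal sketch in Python, where the JSON schema, the example request, and the `coverage` helper are all illustrative assumptions rather than the paper's actual implementation:

```python
import json

# Hypothetical structured response to the ambiguous request
# "When did he land on the moon?". The JSON schema here is an
# assumed illustration, not the paper's actual output format.
response_text = """
[
  {"interpretation": "When did Neil Armstrong land on the Moon?",
   "answer": "July 20, 1969"},
  {"interpretation": "When did Pete Conrad land on the Moon?",
   "answer": "November 19, 1969"}
]
"""
pairs = json.loads(response_text)

# A coverage-style score in the spirit of the abstract's reward:
# the fraction of valid reference answers that appear among the
# predicted answers (exact string match, for simplicity).
def coverage(predicted_answers, valid_answers):
    hits = sum(1 for a in valid_answers if a in predicted_answers)
    return hits / len(valid_answers)

predicted = [p["answer"] for p in pairs]
valid = ["July 20, 1969", "November 19, 1969"]
print(coverage(predicted, valid))  # → 1.0
```

A structured list like this is what makes the approach amenable to downstream use: each interpretation is explicitly paired with its answer, and a reward can score the whole set against multiple gold answers in one generation step.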
Related papers
- Gaming the Answer Matcher: Examining the Impact of Text Manipulation on Automated Judgment [6.104512852467398]
Automated answer matching shows substantial promise as a scalable and aligned alternative to human evaluation. We investigate whether such tactics deceive answer matching models by prompting examinee models to generate verbose responses. Our results show that these manipulations do not increase scores and often reduce them.
arXiv Detail & Related papers (2025-12-22T17:39:13Z)
- Learning to Extract Context for Context-Aware LLM Inference [60.376872353918394]
User prompts to large language models (LLMs) are often ambiguous or under-specified. Contextual cues shaped by user intentions, prior knowledge, and risk factors influence what constitutes an appropriate response. We propose a framework that extracts and leverages such contextual information from the user prompt itself.
arXiv Detail & Related papers (2025-12-12T19:10:08Z)
- Modeling Future Conversation Turns to Teach LLMs to Ask Clarifying Questions [45.04582353648683]
Large language models (LLMs) must often respond to highly ambiguous user requests. Existing LLMs often respond by presupposing a single interpretation of such ambiguous requests, frustrating users who intended a different interpretation. We propose assigning preference labels by simulating the expected outcomes of responses in future turns. This allows LLMs to learn to ask clarifying questions when they can generate responses tailored to each user interpretation in future turns.
arXiv Detail & Related papers (2024-10-17T17:29:04Z)
- Answer is All You Need: Instruction-following Text Embedding via Answering the Question [41.727700155498546]
This paper offers a new viewpoint, which treats the instruction as a question about the input text and encodes the expected answers to obtain the representation accordingly.
Specifically, we propose InBedder that instantiates this embed-via-answering idea by only fine-tuning language models on abstractive question answering tasks.
arXiv Detail & Related papers (2024-02-15T01:02:41Z)
- Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs [58.620269228776294]
We propose a task-agnostic framework for resolving ambiguity by asking users clarifying questions.
We evaluate systems across three NLP applications: question answering, machine translation and natural language inference.
We find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs.
arXiv Detail & Related papers (2023-11-16T00:18:50Z)
- A Search for Prompts: Generating Structured Answers from Contracts [40.99057706243682]
We present a form of legal question answering that seeks to return one (or more) fixed answers for a question about a contract clause.
We discuss our exploration methodology for legal question answering prompts using OpenAI's GPT-3.5-Turbo and provide a summary of insights.
arXiv Detail & Related papers (2023-10-16T07:29:38Z)
- Answering Ambiguous Questions via Iterative Prompting [84.3426020642704]
In open-domain question answering, due to the ambiguity of questions, multiple plausible answers may exist.
One approach is to directly predict all valid answers, but this can struggle with balancing relevance and diversity.
We present AmbigPrompt to address the imperfections of existing approaches to answering ambiguous questions.
arXiv Detail & Related papers (2023-07-08T04:32:17Z)
- TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack [93.50174324435321]
We present Twin Answer Sentences Attack (TASA), an adversarial attack method for question answering (QA) models.
TASA produces fluent and grammatical adversarial contexts while maintaining gold answers.
arXiv Detail & Related papers (2022-10-27T07:16:30Z)
- Discrete Reasoning Templates for Natural Language Understanding [79.07883990966077]
We present an approach that reasons about complex questions by decomposing them to simpler subquestions.
We derive the final answer according to instructions in a predefined reasoning template.
We show that our approach is competitive with the state of the art while being interpretable and requiring little supervision.
arXiv Detail & Related papers (2021-04-05T18:56:56Z)
- Predict and Use Latent Patterns for Short-Text Conversation [5.757975605648179]
We propose to use more detailed semantic forms, including latent responses and part-of-speech sequences, as the controllable semantics to guide the generation.
Our results show that the richer semantics not only provide informative and diverse responses but also improve overall response quality.
arXiv Detail & Related papers (2020-10-27T01:31:42Z)
- Generating Dialogue Responses from a Semantic Latent Space [75.18449428414736]
We propose an alternative to the end-to-end classification on vocabulary.
We learn the pair relationship between the prompts and responses as a regression task on a latent space.
Human evaluation showed that learning the task on a continuous space can generate responses that are both relevant and informative.
arXiv Detail & Related papers (2020-10-04T19:06:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.