Iterative Resolution of Prompt Ambiguities Using a Progressive Cutting-Search Approach
- URL: http://arxiv.org/abs/2505.02952v2
- Date: Tue, 01 Jul 2025 11:44:33 GMT
- Title: Iterative Resolution of Prompt Ambiguities Using a Progressive Cutting-Search Approach
- Authors: Fabrizio Marozzo
- Abstract summary: Generative AI systems have revolutionized human interaction by enabling natural language-based coding and problem solving. However, the inherent ambiguity of natural language often leads to imprecise instructions, forcing users to iteratively test, correct, and resubmit their prompts. We propose an iterative approach that systematically narrows down these ambiguities through a structured series of clarification questions and alternative solution proposals.
- Score: 1.3053649021965603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative AI systems have revolutionized human interaction by enabling natural language-based coding and problem solving. However, the inherent ambiguity of natural language often leads to imprecise instructions, forcing users to iteratively test, correct, and resubmit their prompts. We propose an iterative approach that systematically narrows down these ambiguities through a structured series of clarification questions and alternative solution proposals, illustrated with input/output examples as well. Once every uncertainty is resolved, a final, precise solution is generated. Evaluated on a diverse dataset spanning coding, data analysis, and creative writing, our method demonstrates superior accuracy, competitive resolution times, and higher user satisfaction compared to conventional one-shot solutions, which typically require multiple manual iterations to achieve a correct output.
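The clarification loop the abstract describes can be sketched in a few lines. This is a hypothetical toy, not the authors' implementation: the rule-based `detect_ambiguities` and the function names are invented stand-ins for the LLM-driven questioning the paper actually uses.

```python
def detect_ambiguities(prompt):
    """Toy stand-in: flag vague terms an LLM would ask about,
    each paired with a clarification question offering alternatives."""
    vague_terms = {"sort": "Ascending or descending?",
                   "clean": "Drop rows or fill missing values?"}
    return {t: q for t, q in vague_terms.items() if t in prompt.lower()}

def resolve_prompt(prompt, answer_fn):
    """Iteratively narrow ambiguities one question per round,
    then emit a refined prompt once every uncertainty is resolved."""
    clarifications = {}
    pending = detect_ambiguities(prompt)
    while pending:  # progressive narrowing: ask, record, repeat
        term, question = pending.popitem()
        clarifications[term] = answer_fn(question)
    return prompt + " (" + "; ".join(
        f"{t}: {a}" for t, a in sorted(clarifications.items())) + ")"

# Simulated user who always picks the first offered alternative.
refined = resolve_prompt("Sort and clean this dataset",
                         lambda q: q.split(" or ")[0].lower())
print(refined)
```

In the paper's setting the questions and alternatives come from the model and the answers from the user; the loop structure (collect ambiguities, resolve each, only then generate the final solution) is the part being illustrated.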
Related papers
- Image Generation from Contextually-Contradictory Prompts [50.999420029656214]
We propose a stage-aware prompt decomposition framework that guides the denoising process using a sequence of proxy prompts. Our method enables fine-grained semantic control and accurate image generation in the presence of contextual contradictions.
arXiv Detail & Related papers (2025-06-02T17:48:12Z)
- QA-prompting: Improving Summarization with Large Language Models using Question-Answering [0.0]
Language Models (LMs) have revolutionized natural language processing, enabling high-quality text generation through prompting and in-context learning. We propose QA-prompting - a simple prompting method for summarization that utilizes question-answering as an intermediate step prior to summary generation. Our method extracts key information and enriches the context of text to mitigate positional biases and improve summarization in a single LM call per task without requiring fine-tuning or pipelining.
arXiv Detail & Related papers (2025-05-20T13:29:36Z)
- Text-guided Explorable Image Super-resolution [14.83045604603449]
We propose two approaches for zero-shot text-guided super-resolution.
We show that the proposed approaches result in diverse solutions that match the semantic meaning provided by the text prompt.
arXiv Detail & Related papers (2024-03-02T08:10:54Z)
- V-STaR: Training Verifiers for Self-Taught Reasoners [71.53113558733227]
V-STaR trains a verifier using DPO that judges correctness of model-generated solutions.
Running V-STaR for multiple iterations results in progressively better reasoners and verifiers.
arXiv Detail & Related papers (2024-02-09T15:02:56Z)
- Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
arXiv Detail & Related papers (2023-11-18T00:24:26Z)
- Just Ask One More Time! Self-Agreement Improves Reasoning of Language Models in (Almost) All Scenarios [20.097990701501523]
Self-Agreement is a generalizable ensemble-optimization method applicable to almost all scenarios.
It simultaneously achieves remarkable performance on six public reasoning benchmarks and superior generalization capabilities.
arXiv Detail & Related papers (2023-11-14T13:30:54Z)
- GRACE: Discriminator-Guided Chain-of-Thought Reasoning [75.35436025709049]
We propose Guiding chain-of-thought ReAsoning with a CorrectnEss Discriminator (GRACE) to steer the decoding process towards producing correct reasoning steps.
GRACE employs a discriminator trained with a contrastive loss over correct and incorrect steps, which is used during decoding to score next-step candidates.
arXiv Detail & Related papers (2023-05-24T09:16:51Z)
- A Contrastive Framework for Neural Text Generation [46.845997620234265]
We show that an underlying reason for model degeneration is the anisotropic distribution of token representations.
We present a contrastive solution: (i) SimCTG, a contrastive training objective to calibrate the model's representation space, and (ii) a decoding method -- contrastive search -- to encourage diversity while maintaining coherence in the generated text.
arXiv Detail & Related papers (2022-02-13T21:46:14Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- A Mutual Information Maximization Approach for the Spurious Solution Problem in Weakly Supervised Question Answering [60.768146126094955]
Weakly supervised question answering usually has only the final answers as supervision signals.
There may exist many spurious solutions that coincidentally derive the correct answer, but training on such solutions can hurt model performance.
We propose to explicitly exploit the semantic correlations among questions, answers, and solutions by maximizing the mutual information between question-answer pairs and predicted solutions.
arXiv Detail & Related papers (2021-06-14T05:47:41Z)
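The last entry's objective centers on mutual information between question-answer pairs and predicted solutions. As a self-contained illustration of the quantity being maximized (not the paper's estimator, which works over neural model distributions), here is a toy plug-in estimate of I(X;Y) from discrete samples; the helper name is invented:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in nats from observed (x, y) samples:
    sum over the joint of p(x,y) * log(p(x,y) / (p(x) * p(y)))."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

# Perfectly correlated pairs carry maximal information (log 2 nats here) ...
dependent = mutual_information([(0, 0), (1, 1)] * 50)
# ... while independent pairs carry none.
independent = mutual_information([(0, 0), (0, 1), (1, 0), (1, 1)] * 25)
print(dependent, independent)
```

Intuitively, pushing this quantity up forces predicted solutions to stay statistically tied to their question-answer pairs, which is what penalizes spurious solutions that only coincidentally reach the right answer.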
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.