Ambiguity Resolution with Human Feedback for Code Writing Tasks
- URL: http://arxiv.org/abs/2508.14114v1
- Date: Mon, 18 Aug 2025 09:46:26 GMT
- Title: Ambiguity Resolution with Human Feedback for Code Writing Tasks
- Authors: Aditey Nandan, Viraj Kumar
- Abstract summary: We present and evaluate a prototype system based on a novel technique (ARHF: Ambiguity Resolution with Human Feedback). We discuss the implications of such assistive systems on Computer Science education.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Specifications for code writing tasks are usually expressed in natural language and may be ambiguous. Programmers must therefore develop the ability to recognize ambiguities in task specifications and resolve them by asking clarifying questions. We present and evaluate a prototype system, based on a novel technique (ARHF: Ambiguity Resolution with Human Feedback), that (1) suggests specific inputs on which a given task specification may be ambiguous, (2) seeks limited human feedback about the code's desired behavior on those inputs, and (3) uses this feedback to generate code that resolves these ambiguities. We evaluate the efficacy of our prototype, and we discuss the implications of such assistive systems on Computer Science education.
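The three steps of ARHF described in the abstract can be sketched as a small workflow. The code below is an illustrative mock-up, not the authors' implementation: the LLM calls are stubbed with hard-coded behavior for one example specification ("return the middle element of a list"), and all function names are assumptions.

```python
# Hypothetical sketch of the three-step ARHF workflow. LLM calls are
# stubbed; every name here is illustrative, not from the paper.

def suggest_ambiguous_inputs(spec):
    """Step 1: propose inputs on which the spec may be ambiguous.
    Stub: for 'middle element of a list', even-length lists are ambiguous."""
    return [[10, 20, 30, 40]]

def collect_feedback(inputs, oracle):
    """Step 2: seek limited human feedback on the desired behavior.
    The oracle stands in for a human answering 'what should this return?'."""
    return {tuple(x): oracle(x) for x in inputs}

def generate_code(spec, feedback):
    """Step 3: generate code consistent with the feedback.
    Stub: choose between two plausible 'middle' conventions."""
    (x, want), = feedback.items()
    if want == x[len(x) // 2]:
        return lambda xs: xs[len(xs) // 2]        # upper-middle convention
    return lambda xs: xs[(len(xs) - 1) // 2]      # lower-middle convention

spec = "Return the middle element of a list."
# The simulated human prefers the lower middle on even-length lists.
feedback = collect_feedback(suggest_ambiguous_inputs(spec),
                            oracle=lambda x: x[(len(x) - 1) // 2])
middle = generate_code(spec, feedback)
print(middle([1, 2, 3, 4, 5]))  # 3 (odd length: unambiguous)
print(middle([1, 2, 3, 4]))     # 2 (ambiguity resolved per feedback)
```

The point of the sketch is the division of labor: the system volunteers the ambiguity-revealing input, and a single piece of human feedback selects among the otherwise equally plausible programs.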
Related papers
- Position: Intelligent Coding Systems Should Write Programs with Justifications [9.304020701255093]
We argue that these systems should not only generate code but also produce clear, consistent justifications that bridge model reasoning and user understanding. We advocate exploring neuro-symbolic approaches for justification generation, where symbolic constraints guide behavior during training and program semantics are enriched through neural representations.
arXiv Detail & Related papers (2025-08-08T05:04:47Z)
- Hints Help Finding and Fixing Bugs Differently in Python and Text-based Program Representations [28.829745991874816]
We find that the program representation has a significant influence on the users' accuracy at finding and fixing bugs. Different hints help differently depending on the program representation and the user's understanding of the algorithmic task. These findings have implications for designing next-generation programming tools that provide personalized support to users.
arXiv Detail & Related papers (2024-12-17T02:11:53Z)
- NoviCode: Generating Programs from Natural Language Utterances by Novices [59.71218039095155]
We present NoviCode, a novel NL Programming task which takes as input an API and a natural language description by a novice non-programmer.
We show that NoviCode is indeed a challenging task in the code synthesis domain, and that generating complex code from non-technical instructions goes beyond the current Text-to-Code paradigm.
arXiv Detail & Related papers (2024-07-15T11:26:03Z)
- Creating a Trajectory for Code Writing: Algorithmic Reasoning Tasks [0.923607423080658]
This paper describes instruments and the machine learning models used for validating them.
We have used the data collected in an introductory programming course in the penultimate week of the semester.
Preliminary research suggests ART type instruments can be combined with specific machine learning models to act as an effective learning trajectory.
arXiv Detail & Related papers (2024-04-03T05:07:01Z)
- DECIDER: A Dual-System Rule-Controllable Decoding Framework for Language Generation [57.07295906718989]
Constrained decoding approaches aim to control the meaning or style of text generated by pre-trained language models (PLMs) for various tasks at inference time. These methods often guide plausible continuations by greedily and explicitly selecting targets. Inspired by cognitive dual-process theory, we propose a novel decoding framework, DECIDER.
arXiv Detail & Related papers (2024-03-04T11:49:08Z)
- GuardRails: Automated Suggestions for Clarifying Ambiguous Purpose Statements [0.0]
Before writing a function, programmers are encouraged to write a purpose statement, i.e., a short, natural-language explanation of what the function computes.
A purpose statement may be ambiguous, i.e., it may fail to specify the intended behaviour when two or more inequivalent computations are plausible on certain inputs.
We propose a novel technique that suggests such inputs using Large Language Models (LLMs).
We create an open-source implementation of our technique as an extension to Visual Studio Code for the Python programming language.
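To make the notion of an ambiguous purpose statement concrete, here is a constructed example (not from the paper): one purpose statement, two plausible but inequivalent implementations, and an input that exposes the difference, which is the kind of input a tool like GuardRails aims to suggest.

```python
# Purpose statement: "Return the index of x in the list xs."
# Two plausible readings, inequivalent when x occurs more than once.

def find_first(xs, x):
    """Reading 1: index of the first occurrence."""
    return xs.index(x)

def find_last(xs, x):
    """Reading 2: index of the last occurrence."""
    return len(xs) - 1 - xs[::-1].index(x)

# On inputs with a unique occurrence, the readings agree ...
assert find_first([5, 7, 9], 7) == find_last([5, 7, 9], 7) == 1

# ... but an input with duplicates reveals the ambiguity:
xs = [3, 1, 3]
print(find_first(xs, 3), find_last(xs, 3))  # 0 2
```

Presented with the input `[3, 1, 3]`, the author of the purpose statement can immediately see which behaviour they intended and refine the statement accordingly.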
arXiv Detail & Related papers (2023-12-13T14:56:42Z)
- Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs [58.620269228776294]
We propose a task-agnostic framework for resolving ambiguity by asking users clarifying questions.
We evaluate systems across three NLP applications: question answering, machine translation and natural language inference.
We find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs.
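One simple way to decide *when* clarification is worth asking for is to sample several candidate answers and ask only when they disagree. The sketch below illustrates that idea with an answer-agreement entropy score; this is an illustrative stand-in for uncertainty estimation over user intents, not the paper's intent-sim algorithm.

```python
# Hedged sketch: ask a clarifying question only when sampled candidate
# answers disagree. The entropy threshold is an illustrative assumption.

from collections import Counter
import math

def should_clarify(candidate_answers, threshold=0.5):
    """Return True when the empirical entropy of sampled answers is high."""
    counts = Counter(candidate_answers)
    n = len(candidate_answers)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy > threshold

# Ambiguous query: samples disagree, so ask the user to clarify.
print(should_clarify(["Paris", "Paris, Texas", "Paris"]))  # True
# Unambiguous query: samples agree, so answer directly.
print(should_clarify(["42", "42", "42"]))                  # False
```

The design choice here is to spend the user's attention only where the model's own outputs show genuine disagreement, which matches the task-agnostic spirit of the framework.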
arXiv Detail & Related papers (2023-11-16T00:18:50Z)
- We're Afraid Language Models Aren't Modeling Ambiguity [136.8068419824318]
Managing ambiguity is a key part of human language understanding.
We characterize ambiguity in a sentence by its effect on entailment relations with another sentence.
We show that a multilabel NLI model can flag political claims in the wild that are misleading due to ambiguity.
arXiv Detail & Related papers (2023-04-27T17:57:58Z)
- Language Models as Inductive Reasoners [125.99461874008703]
We propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts.
We create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language.
We provide the first comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts.
arXiv Detail & Related papers (2022-12-21T11:12:14Z)
- Is the Elephant Flying? Resolving Ambiguities in Text-to-Image Generative Models [64.58271886337826]
We study ambiguities that arise in text-to-image generative models.
We propose a framework to mitigate ambiguities in the prompts given to the systems by soliciting clarifications from the user.
arXiv Detail & Related papers (2022-11-17T17:12:43Z)
- Textual Explanations and Critiques in Recommendation Systems [8.406549970145846]
This dissertation focuses on two fundamental challenges of addressing this need.
The first involves explanation generation in a scalable and data-driven manner.
The second challenge consists in making explanations actionable, and we refer to it as critiquing.
arXiv Detail & Related papers (2022-05-15T11:59:23Z)
- AR-LSAT: Investigating Analytical Reasoning of Text [57.1542673852013]
We study the challenge of analytical reasoning of text and introduce a new dataset consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge, understanding, and reasoning abilities are required to do well on this task.
arXiv Detail & Related papers (2021-04-14T02:53:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.