Intent Preserving Generation of Diverse and Idiomatic (Code-)Artifacts
- URL: http://arxiv.org/abs/2508.03642v1
- Date: Tue, 05 Aug 2025 16:54:15 GMT
- Title: Intent Preserving Generation of Diverse and Idiomatic (Code-)Artifacts
- Authors: Oliver Westphal
- Abstract summary: We present an approach where, instead of writing monolithic generators for multiple connected artifacts, one specifies a small set of abstract building blocks. The intended structure of the resulting artifacts is specified as a composition of these small abstract building blocks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When automatically generating programming exercise tasks, one often also needs to automatically generate programs, at the very least when providing sample solutions is part of automated feedback. But programs can also be used as part of the exercise task description to communicate a task's requirements. Writing good program generators that produce varied yet idiomatic code while being easily adaptable to new tasks is challenging. The challenges are intensified if task generation requires additional artifacts, such as a more general behavior specification for testing or additional textual descriptions. Manually writing generators for multiple different but strongly related artifacts quickly becomes complicated. We present an approach where, instead of writing monolithic generators for multiple connected artifacts, one specifies a small set of abstract building blocks and, for each such building block, defines sets of concrete realizations for various kinds of artifacts. The intended structure of the resulting artifacts is then specified as a composition of these small abstract building blocks. This abstract description serves as the common source from which the related artifacts can be derived automatically. The approach is generic in the kind of artifacts it can produce and is therefore adaptable to a wide range of contexts.
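To make the idea concrete, here is a minimal, hypothetical Haskell sketch of the kind of setup the abstract describes: two abstract building blocks, one concrete realization of each block per artifact kind, and a single composition from which both a sample solution and a textual task description are derived. All names (Block, Artifact, render, deriveArtifact) are invented for illustration and do not come from the paper; randomized choice among alternative realizations, which would provide the diversity the paper aims for, is omitted here.

```haskell
module BuildingBlocks where

import Data.List (intercalate)

-- Abstract building blocks for a tiny "read two numbers, print their sum" task.
data Block
  = ReadInt String          -- read an integer into a named variable
  | PrintSum String String  -- print the sum of two named variables

-- The kinds of connected artifacts derived from one common description.
data Artifact = HaskellCode | TaskText

-- One concrete realization of every building block per artifact kind.
render :: Artifact -> Block -> String
render HaskellCode (ReadInt v)    = v ++ " <- readLn :: IO Int"
render HaskellCode (PrintSum a b) = "print (" ++ a ++ " + " ++ b ++ ")"
render TaskText    (ReadInt v)    = "Read an integer and call it " ++ v ++ "."
render TaskText    (PrintSum a b) = "Print the sum of " ++ a ++ " and " ++ b ++ "."

-- The intended structure of the artifacts, given as a composition of blocks.
task :: [Block]
task = [ReadInt "x", ReadInt "y", PrintSum "x" "y"]

-- Derive one artifact of the requested kind from the abstract description.
deriveArtifact :: Artifact -> String
deriveArtifact kind = intercalate "\n" (map (render kind) task)

main :: IO ()
main = do
  putStrLn (deriveArtifact HaskellCode)  -- body of a sample solution
  putStrLn ""
  putStrLn (deriveArtifact TaskText)     -- matching task description
```

In the approach as described in the abstract, each building block would come with a whole set of concrete realizations per artifact kind; picking among them per generated task would then yield varied yet mutually consistent artifacts from the same composition.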
Related papers
- CodeDiffuser: Attention-Enhanced Diffusion Policy via VLM-Generated Code for Instruction Ambiguity [23.77040677368575]
We introduce a novel robotic manipulation framework that can accomplish tasks specified by potentially ambiguous natural language. This framework employs a Vision-Language Model (VLM) to interpret abstract concepts in natural language instructions. We show that our approach excels across challenging manipulation tasks involving language ambiguity, contact-rich manipulation, and multi-object interactions.
arXiv Detail & Related papers (2025-06-19T23:42:03Z)
- DOLOMITES: Domain-Specific Long-Form Methodical Tasks [81.63464319950664]
We develop a typology of methodical tasks structured in the form of a task objective, procedure, input, and output.
We introduce DoLoMiTes, a novel benchmark with specifications for 519 such tasks elicited from hundreds of experts from across 25 fields.
Our benchmark further contains specific instantiations of methodical tasks with concrete input and output examples.
arXiv Detail & Related papers (2024-05-09T17:25:31Z)
- How You Prompt Matters! Even Task-Oriented Constraints in Instructions Affect LLM-Generated Text Detection [39.254432080406346]
Even task-oriented constraints -- constraints that would naturally be included in an instruction and are not related to detection-evasion -- cause existing powerful detectors to have a large variance in detection performance.
Our experiments show that the standard deviation (SD) of current detector performance on texts generated by an instruction with such a constraint is significantly larger (up to an SD of 14.4 in F1-score) than the variation obtained by generating texts multiple times or by paraphrasing the instruction.
arXiv Detail & Related papers (2023-11-14T18:32:52Z)
- LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts [60.54912319612113]
Diffusion-based generative models have significantly advanced text-to-image generation but encounter challenges when processing lengthy and intricate text prompts.
We present a novel approach leveraging Large Language Models (LLMs) to extract critical components from text prompts.
Our evaluation on complex prompts featuring multiple objects demonstrates a substantial improvement in recall compared to baseline diffusion models.
arXiv Detail & Related papers (2023-10-16T17:57:37Z)
- FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios [87.12753459582116]
A wider range of tasks now faces an increasing risk of containing factual errors when handled by generative models.
We propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models.
arXiv Detail & Related papers (2023-07-25T14:20:51Z)
- ART: Automatic multi-step reasoning and tool-use for large language models [105.57550426609396]
Large language models (LLMs) can perform complex reasoning in few- and zero-shot settings.
Each reasoning step can rely on external tools to support computation beyond the core LLM capabilities.
We introduce Automatic Reasoning and Tool-use (ART), a framework that uses frozen LLMs to automatically generate intermediate reasoning steps as a program.
arXiv Detail & Related papers (2023-03-16T01:04:45Z)
- InstructionNER: A Multi-Task Instruction-Based Generative Framework for Few-shot NER [31.32381919473188]
We propose a multi-task instruction-based generative framework, named InstructionNER, for low-resource named entity recognition.
Specifically, we reformulate the NER task as a generation problem, enriching source sentences with task-specific instructions and answer options, and then infer the entities and their types in natural language.
Experimental results show that our method consistently outperforms other baselines on five datasets in few-shot settings.
arXiv Detail & Related papers (2022-03-08T07:56:36Z)
- XtremeDistilTransformers: Task Transfer for Task-agnostic Distillation [80.18830380517753]
We develop a new task-agnostic distillation framework XtremeDistilTransformers.
We study the transferability of several source tasks, augmentation resources and model architecture for distillation.
arXiv Detail & Related papers (2021-06-08T17:49:33Z)
- Generating Instructions at Different Levels of Abstraction [61.70390291746106]
We show how to generate building instructions at different levels of abstraction in Minecraft.
A crowdsourcing evaluation shows that the choice of abstraction level matters to users.
arXiv Detail & Related papers (2020-10-08T13:56:09Z)
- A Framework for Generating Diverse Haskell-IO Exercise Tasks [0.0]
We present the design of a framework to automatically generate a range of different exercise tasks on Haskell-I/O programming.
Together with an automated assessment system, automatic task generation allows students to practice with as many exercise tasks as needed.
Our task generation is centered around a specification language for I/O behavior that we developed in an earlier work.
arXiv Detail & Related papers (2020-08-28T17:16:32Z)
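The last entry above mentions a specification language for I/O behavior as the core of that task generation framework. Purely as an illustration, and not the actual specification language from that paper, the following toy Haskell datatype (Spec, ReadLine, WriteLine and (:>:) are all invented names) shows how a single abstract I/O specification could serve as the common source for both a runnable reference program and a step-by-step textual description:

```haskell
module IOSpec where

-- Toy specification of interactive I/O behavior (hypothetical, for illustration).
data Spec
  = ReadLine          -- the program reads one line of input
  | WriteLine String  -- the program writes exactly this line
  | Spec :>: Spec     -- sequential composition
infixr 5 :>:

-- Example specification: read two lines, then acknowledge.
example :: Spec
example = ReadLine :>: ReadLine :>: WriteLine "ok"

-- Artifact 1: a runnable reference program derived from the specification.
runSpec :: Spec -> IO ()
runSpec ReadLine      = do { _ <- getLine; return () }
runSpec (WriteLine s) = putStrLn s
runSpec (l :>: r)     = runSpec l >> runSpec r

-- Artifact 2: a textual description derived from the same specification.
describe :: Spec -> [String]
describe ReadLine      = ["The program reads a line of input."]
describe (WriteLine s) = ["The program prints \"" ++ s ++ "\"."]
describe (l :>: r)     = describe l ++ describe r
```

Here, runSpec example would play the role of a sample solution, while unlines (describe example) would give the corresponding task text, mirroring the idea of deriving multiple related artifacts from one abstract description.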