A Framework for Generating Diverse Haskell-IO Exercise Tasks
- URL: http://arxiv.org/abs/2008.12751v2
- Date: Mon, 24 Jul 2023 09:26:10 GMT
- Title: A Framework for Generating Diverse Haskell-IO Exercise Tasks
- Authors: Oliver Westphal
- Abstract summary: We present the design of a framework to automatically generate a range of different exercise tasks on Haskell-I/O programming.
Together with an automated assessment system, automatic task generation allows students to practice with as many exercise tasks as needed.
Our task generation is centered around a specification language for I/O behavior that we developed in an earlier work.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the design of a framework to automatically generate a large range
of different exercise tasks on Haskell-I/O programming. Automatic task
generation is useful in many different ways. Manual task creation is a
time-consuming process, so automating it saves valuable time for the educator.
Together with an automated assessment system, automatic task generation allows
students to practice with as many exercise tasks as needed. Additionally, each
student can be given a slightly different version of a task, reducing issues
regarding plagiarism that arise naturally in an e-learning environment. Our
task generation is centered around a specification language for I/O behavior
that we developed in an earlier work. The task generation framework, an EDSL in
Haskell, provides powerful primitives for the creation of various artifacts,
including program code, from specifications. We will not go into detail on the
technical realization of these primitives. This article instead showcases how
such artifacts and the framework as a whole can be used to build exercise task
templates that can then be (randomly) instantiated.
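The abstract's core idea, a specification EDSL from which task templates are randomly instantiated, can be sketched in Haskell. The names below (`Spec`, `describe`, `sumTask`, the `pick` helper) are hypothetical illustrations and not the paper's actual API; randomness is simulated with a simple seeded helper so the sketch depends only on base.

```haskell
-- A minimal sketch of the idea, with entirely hypothetical names:
-- specifications describe the I/O trace a student's program must produce,
-- and a task template instantiates a specification from a seed.
data Spec
  = ReadInt String     -- read an integer into a named variable
  | WriteSum [String]  -- print the sum of previously read variables
  | Spec :> Spec       -- sequential composition
infixr 5 :>

-- Render a specification as a task description shown to students.
describe :: Spec -> String
describe (ReadInt v)   = "read an integer " ++ v
describe (WriteSum vs) = "print the sum of " ++ unwords vs
describe (s :> t)      = describe s ++ ", then " ++ describe t

-- Deterministic stand-in for a random draw, so the sketch needs no
-- packages beyond base (a real generator would use System.Random).
pick :: Int -> (Int, Int) -> Int
pick seed (lo, hi) = lo + (seed * 1103515245 + 12345) `mod` (hi - lo + 1)

-- A task template: read a seed-dependent number of integers, then
-- require their sum to be printed.
sumTask :: Int -> Spec
sumTask seed =
  let n    = pick seed (2, 4)
      vars = ["x" ++ show i | i <- [1 .. n]]
  in foldr1 (:>) (map ReadInt vars) :> WriteSum vars

main :: IO ()
main = putStrLn (describe (sumTask 42))
```

Different seeds yield slightly different task instances, which is what makes per-student variation (and thus plagiarism reduction) cheap once the template exists.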
Related papers
- Intent Preserving Generation of Diverse and Idiomatic (Code-)Artifacts [0.0]
We present an approach where, instead of writing monolithic generators for multiple connected artifacts, one specifies a small set of abstract building blocks. The intended structure of the resulting artifacts is specified as a composition of these small abstract building blocks.
arXiv Detail & Related papers (2025-08-05T16:54:15Z)
- Is Visual in-Context Learning for Compositional Medical Tasks within Reach? [68.56630652862293]
In this paper, we explore the potential of visual in-context learning to enable a single model to handle multiple tasks. We introduce a novel method for training in-context learners using a synthetic compositional task generation engine.
arXiv Detail & Related papers (2025-07-01T15:32:23Z)
- Synthesizing High-Quality Programming Tasks with LLM-based Expert and Student Agents [26.884829816265174]
PyTaskSyn is a novel synthesis technique that first generates a programming task and then decides whether it meets certain quality criteria to be given to students.
We show that PyTaskSyn significantly improves task quality compared to baseline techniques and showcases the importance of each specialized agent type in our validation pipeline.
arXiv Detail & Related papers (2025-04-10T11:08:39Z)
- TaskBench: Benchmarking Large Language Models for Task Automation [82.2932794189585]
We introduce TaskBench, a framework to evaluate the capability of large language models (LLMs) in task automation.
Specifically, task decomposition, tool selection, and parameter prediction are assessed.
Our approach combines automated construction with rigorous human verification, ensuring high consistency with human evaluation.
arXiv Detail & Related papers (2023-11-30T18:02:44Z)
- Fully Automated Task Management for Generation, Execution, and Evaluation: A Framework for Fetch-and-Carry Tasks with Natural Language Instructions in Continuous Space [1.2691047660244337]
This paper aims to develop a framework that enables a robot to execute tasks based on visual information.
We propose a framework for the full automation of the generation, execution, and evaluation of FCOG tasks.
In addition, we introduce an approach to solving the FCOG tasks by dividing them into four distinct subtasks.
arXiv Detail & Related papers (2023-11-07T15:38:09Z)
- Generalizable Long-Horizon Manipulations with Large Language Models [91.740084601715]
This work introduces a framework harnessing the capabilities of Large Language Models (LLMs) to generate primitive task conditions for generalizable long-horizon manipulations.
We create a challenging robotic manipulation task suite based on Pybullet for long-horizon task evaluation.
arXiv Detail & Related papers (2023-10-03T17:59:46Z)
- LARG, Language-based Automatic Reward and Goal Generation [8.404316955848602]
We develop an approach that converts a text-based task description into its corresponding reward and goal-generation functions.
We evaluate our approach for robotic manipulation and demonstrate its ability to train and execute policies in a scalable manner.
arXiv Detail & Related papers (2023-06-19T14:52:39Z)
- ART: Automatic multi-step reasoning and tool-use for large language models [105.57550426609396]
Large language models (LLMs) can perform complex reasoning in few- and zero-shot settings.
Each reasoning step can rely on external tools to support computation beyond the core LLM capabilities.
We introduce Automatic Reasoning and Tool-use (ART), a framework that uses frozen LLMs to automatically generate intermediate reasoning steps as a program.
arXiv Detail & Related papers (2023-03-16T01:04:45Z)
- Analysis and Prediction of NLP Models Via Task Embeddings [25.311690222754454]
We propose MetaEval, a collection of 101 NLP tasks.
We fit a single transformer to all MetaEval tasks jointly while conditioning it on learned embeddings.
The resulting task embeddings enable a novel analysis of the space of tasks.
arXiv Detail & Related papers (2021-12-10T16:23:24Z)
- Unified Multimodal Pre-training and Prompt-based Tuning for Vision-Language Understanding and Generation [86.26522210882699]
We propose Unified multimodal pre-training for both Vision-Language understanding and generation.
The proposed UniVL is capable of handling both understanding tasks and generative tasks.
Our experiments show that there is a trade-off between understanding tasks and generation tasks while using the same model.
arXiv Detail & Related papers (2021-12-10T14:59:06Z)
- XtremeDistilTransformers: Task Transfer for Task-agnostic Distillation [80.18830380517753]
We develop a new task-agnostic distillation framework XtremeDistilTransformers.
We study the transferability of several source tasks, augmentation resources and model architecture for distillation.
arXiv Detail & Related papers (2021-06-08T17:49:33Z)
- SOLOIST: Building Task Bots at Scale with Transfer Learning and Machine Teaching [81.45928589522032]
We parameterize modular task-oriented dialog systems using a Transformer-based auto-regressive language model.
We pre-train, on heterogeneous dialog corpora, a task-grounded response generation model.
Experiments show that SOLOIST creates new state-of-the-art on well-studied task-oriented dialog benchmarks.
arXiv Detail & Related papers (2020-05-11T17:58:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.