TaskCraft: Automated Generation of Agentic Tasks
- URL: http://arxiv.org/abs/2506.10055v2
- Date: Tue, 17 Jun 2025 15:19:26 GMT
- Title: TaskCraft: Automated Generation of Agentic Tasks
- Authors: Dingfeng Shi, Jingyi Cao, Qianben Chen, Weichen Sun, Weizhen Li, Hongxuan Lu, Fangchen Dong, Tianrui Qin, King Zhu, Minghao Liu, Jian Yang, Ge Zhang, Jiaheng Liu, Changwang Zhang, Jun Wang, Yuchen Eleanor Jiang, Wangchunshu Zhou
- Abstract summary: Agentic tasks require multi-step problem solving with autonomy, tool use, and adaptive reasoning. We introduce TaskCraft, an automated workflow for generating difficulty-scalable, multi-tool, and verifiable agentic tasks. We present a large-scale synthetic dataset of approximately 36,000 tasks with varying difficulty to support future research on agent tuning and evaluation.
- Score: 39.33785092294476
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Agentic tasks, which require multi-step problem solving with autonomy, tool use, and adaptive reasoning, are becoming increasingly central to the advancement of NLP and AI. However, existing instruction data lacks tool interaction, and current agentic benchmarks rely on costly human annotation, limiting their scalability. We introduce TaskCraft, an automated workflow for generating difficulty-scalable, multi-tool, and verifiable agentic tasks with execution trajectories. TaskCraft expands atomic tasks using depth-based and width-based extensions to create structurally and hierarchically complex challenges. Empirical results show that these tasks improve prompt optimization in the generation workflow and enhance supervised fine-tuning of agentic foundation models. We present a large-scale synthetic dataset of approximately 36,000 tasks with varying difficulty to support future research on agent tuning and evaluation.
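For intuition, the sketch below shows one plausible shape of the depth-based and width-based extensions described above: an atomic question-answer task is either chained behind another task (depth) or merged with a sibling task (width). The Task fields, string templates, and example questions are illustrative assumptions, not the paper's implementation; the actual pipeline rewrites questions with an LLM and verifies each expanded task against tool-based execution trajectories.

```python
"""Illustrative sketch of TaskCraft-style task expansion.

Caveat: the Task fields, string templates, and example questions are assumptions
for illustration; the real pipeline uses an LLM to rewrite questions and verifies
every expanded task by executing it with tools.
"""
from dataclasses import dataclass

@dataclass
class Task:
    question: str   # what the agent is asked
    answer: str     # verifiable gold answer
    hops: int = 1   # rough difficulty proxy: number of reasoning/tool steps

def depth_extend(outer: Task, inner: Task) -> Task:
    """Depth-based extension: the inner task's answer becomes an intermediate
    fact the agent must recover before it can solve the outer task."""
    question = (f"{outer.question.rstrip('?')} "
                f"(where the referenced item is the answer to: '{inner.question}')?")
    return Task(question, outer.answer, outer.hops + inner.hops)

def width_extend(a: Task, b: Task) -> Task:
    """Width-based extension: merge two independent atomic tasks into one
    question whose answer requires solving both."""
    return Task(question=f"Answer both: (1) {a.question} (2) {b.question}",
                answer=f"(1) {a.answer} (2) {b.answer}",
                hops=a.hops + b.hops)

if __name__ == "__main__":
    inner = Task("Which paper introduced the transformer architecture?",
                 "Attention Is All You Need")
    outer = Task("In which year was this paper published?", "2017")
    print(depth_extend(outer, inner).question)
    print(width_extend(inner, outer).question)
```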
Related papers
- The Cost of Dynamic Reasoning: Demystifying AI Agents and Test-Time Scaling from an AI Infrastructure Perspective [3.0868637098088403]
Large-language-model (LLM)-based AI agents have recently showcased impressive versatility by employing dynamic reasoning. This paper presents the first comprehensive system-level analysis of AI agents, quantifying their resource usage, latency behavior, energy consumption, and test-time scaling strategies. Our findings reveal that while agents improve accuracy with increased compute, they suffer from rapidly diminishing returns, widening latency variance, and unsustainable infrastructure costs.
arXiv Detail & Related papers (2025-06-04T14:37:54Z)
- LAM SIMULATOR: Advancing Data Generation for Large Action Model Training via Online Exploration and Trajectory Feedback [121.78866929908871]
Large Action Models (LAMs) for AI Agents offer incredible potential but face challenges due to the need for high-quality training data. We present LAM SIMULATOR, a comprehensive framework designed for online exploration of agentic tasks with high-quality feedback. Our framework features a dynamic task query generator, an extensive collection of tools, and an interactive environment where Large Language Model (LLM) Agents can call tools and receive real-time feedback.
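A minimal, hypothetical version of the interactive loop this abstract describes (agent proposes a tool call, the environment executes it and returns real-time feedback, and the trajectory is recorded) might look like the following; the tool registry, action and feedback dictionaries, and the stub propose_action policy are assumptions, not the framework's actual interfaces.

```python
"""Hypothetical agent/environment loop in the spirit of LAM SIMULATOR.
The tool registry, feedback dicts, and propose_action stub are illustrative only."""
from typing import Callable, Dict, List

# Tool registry: name -> callable taking a single string argument.
TOOLS: Dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy arithmetic only
    "echo": lambda text: text,
}

def propose_action(task: str, history: List[dict]) -> dict:
    """Stand-in for the LLM policy: a real agent would be prompted with the task
    plus the feedback history and its tool call parsed from the model output."""
    if not history:
        return {"tool": "calculator", "arg": "2 + 3 * 4"}
    return {"tool": "finish", "arg": history[-1]["observation"]}

def run_episode(task: str, max_steps: int = 5) -> List[dict]:
    """Roll out one task, recording a trajectory of actions and real-time feedback."""
    trajectory: List[dict] = []
    for _ in range(max_steps):
        action = propose_action(task, trajectory)
        if action["tool"] == "finish":
            trajectory.append({"action": action, "observation": action["arg"], "ok": True})
            break
        try:
            observation, ok = TOOLS[action["tool"]](action["arg"]), True
        except Exception as err:            # errors are surfaced to the agent as feedback
            observation, ok = str(err), False
        trajectory.append({"action": action, "observation": observation, "ok": ok})
    return trajectory

if __name__ == "__main__":
    for step in run_episode("Compute 2 + 3 * 4"):
        print(step)
```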
arXiv Detail & Related papers (2025-06-02T22:36:02Z)
- SEW: Self-Evolving Agentic Workflows for Automated Code Generation [24.16770109875788]
We propose Self-Evolving Workflows (SEW), a novel framework that automatically generates and optimises multi-agent workflows. SEW can automatically design agentic workflows and optimise them through self-evolution, bringing up to a 33% improvement on LiveCodeBench.
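The abstract does not spell out the self-evolution mechanism, but a generic evolutionary loop over workflow candidates gives a rough picture; the stage vocabulary, mutation operators, and placeholder fitness function below are invented for illustration, whereas SEW itself would score candidates by running them on code benchmarks.

```python
"""Toy self-evolution loop over workflow candidates, loosely inspired by SEW.
The stage vocabulary, mutation operators, and fitness function are made up."""
import random

random.seed(0)
STAGES = ["plan", "generate", "critique", "test", "repair"]

def mutate(workflow):
    """Insert, drop, or swap one stage: a crude stand-in for LLM-driven rewriting."""
    w = list(workflow)
    op = random.choice(["insert", "drop", "swap"])
    if op == "insert":
        w.insert(random.randrange(len(w) + 1), random.choice(STAGES))
    elif op == "drop" and len(w) > 1:
        w.pop(random.randrange(len(w)))
    elif len(w) > 1:
        i, j = random.sample(range(len(w)), 2)
        w[i], w[j] = w[j], w[i]
    return w

def score(workflow):
    """Placeholder fitness; SEW would instead run the workflow on coding tasks
    (e.g. LiveCodeBench) and use the resulting pass rate."""
    return sum(1.0 for s in ("plan", "test", "repair") if s in workflow) - 0.1 * len(workflow)

def evolve(generations: int = 30):
    best = ["generate"]
    for _ in range(generations):
        candidate = mutate(best)
        if score(candidate) > score(best):   # greedy (1+1)-style selection
            best = candidate
    return best

if __name__ == "__main__":
    print(evolve())
```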
arXiv Detail & Related papers (2025-05-24T11:12:14Z)
- Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions. Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes. We study the ability of LLM agents to handle ambiguous instructions in interactive code generation settings by evaluating the performance of proprietary and open-weight models.
arXiv Detail & Related papers (2025-02-18T17:12:26Z)
- AgentPS: Agentic Process Supervision for Multi-modal Content Quality Assurance through Multi-round QA [9.450927573476822]
AgentPS is a novel framework that integrates agentic process supervision into MLLMs via multi-round question answering during fine-tuning. AgentPS demonstrates significant performance improvements over baseline MLLMs on proprietary TikTok datasets.
arXiv Detail & Related papers (2024-12-15T04:58:00Z)
- TDAG: A Multi-Agent Framework based on Dynamic Task Decomposition and Agent Generation [41.21899915378596]
We propose a multi-agent framework based on dynamic Task Decomposition and Agent Generation (TDAG). This framework dynamically decomposes complex tasks into smaller subtasks and assigns each to a specifically generated subagent. ItineraryBench is designed to assess agents' abilities in memory, planning, and tool usage across tasks of varying complexity.
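A toy rendering of that decompose-then-generate pattern follows: each subtask gets its own purpose-built agent. The decompose and make_subagent stubs are placeholders for what TDAG does with LLM prompting, not the framework's real components.

```python
"""Sketch of dynamic task decomposition with per-subtask agent generation,
in the spirit of TDAG. decompose() and make_subagent() are illustrative stubs;
the real framework drives both steps with an LLM and can re-decompose on failure."""
from typing import Callable, List

def decompose(task: str) -> List[str]:
    """Stand-in decomposer: a real system would ask an LLM to split the task."""
    return [f"{task} :: step {i}" for i in range(1, 4)]

def make_subagent(subtask: str) -> Callable[[], str]:
    """Generate a specialised agent (here just a closure); TDAG would synthesise
    a dedicated prompt and tool configuration instead."""
    def run() -> str:
        return f"result for '{subtask}'"
    return run

def solve(task: str) -> List[str]:
    results = []
    for subtask in decompose(task):
        agent = make_subagent(subtask)   # one purpose-built agent per subtask
        results.append(agent())
    return results

if __name__ == "__main__":
    for line in solve("Plan a 3-day itinerary within a fixed budget"):
        print(line)
```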
arXiv Detail & Related papers (2024-02-15T18:27:37Z)
- TaskBench: Benchmarking Large Language Models for Task Automation [82.2932794189585]
We introduce TaskBench, a framework to evaluate the capability of large language models (LLMs) in task automation.
Specifically, task decomposition, tool selection, and parameter prediction are assessed.
Our approach combines automated construction with rigorous human verification, ensuring high consistency with human evaluation.
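As a rough illustration of scoring along those three axes, the sketch below compares a predicted tool plan against a gold plan with set-overlap F1; the plan format and metrics are simplifying assumptions rather than TaskBench's exact protocol.

```python
"""Toy scoring of a predicted tool plan against a gold plan along the three axes above.
The plan format and the set-overlap F1 metrics are simplifying assumptions."""
from typing import Dict, List

Step = Dict[str, object]   # {"tool": str, "params": dict}

def f1(pred: set, gold: set) -> float:
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def evaluate(pred: List[Step], gold: List[Step]) -> Dict[str, float]:
    pred_params = {(s["tool"], k, str(v)) for s in pred for k, v in s["params"].items()}
    gold_params = {(s["tool"], k, str(v)) for s in gold for k, v in s["params"].items()}
    return {
        "decomposition_length_match": float(len(pred) == len(gold)),
        "tool_selection_f1": f1({s["tool"] for s in pred}, {s["tool"] for s in gold}),
        "parameter_f1": f1(pred_params, gold_params),
    }

if __name__ == "__main__":
    gold = [{"tool": "search", "params": {"query": "weather Berlin"}},
            {"tool": "summarize", "params": {"length": 50}}]
    pred = [{"tool": "search", "params": {"query": "weather Berlin"}},
            {"tool": "translate", "params": {"lang": "de"}}]
    print(evaluate(pred, gold))
```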
arXiv Detail & Related papers (2023-11-30T18:02:44Z)
- Evaluating the Capabilities of Multi-modal Reasoning Models with Synthetic Task Data [0.0]
We leverage advances in high resolution text-to-image generation to develop a framework for generating evaluation data for multi-modal reasoning tasks.
We apply this framework to generate context-dependent anomaly data, creating a synthetic dataset for a challenging task.
We demonstrate that while the task is tractable, the model performs significantly worse on the context-dependent anomaly detection task than on standard VQA tasks.
arXiv Detail & Related papers (2023-06-01T20:56:34Z)
- Exposing and Addressing Cross-Task Inconsistency in Unified Vision-Language Models [80.23791222509644]
Inconsistent AI models are considered brittle and untrustworthy by human users.
We find that state-of-the-art vision-language models suffer from a surprisingly high degree of inconsistent behavior across tasks.
We propose a rank correlation-based auxiliary training objective, computed over large automatically created cross-task contrast sets.
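Concretely, a rank-correlation consistency signal can be computed by scoring the same contrast set with two task heads and measuring how well the rankings agree. The score lists below are fabricated, and plain Spearman correlation is used here only for illustration; a trainable objective would need a differentiable surrogate.

```python
"""Cross-task rank consistency on a shared contrast set, illustrated with Spearman
correlation. The two score lists are fabricated, and a real training objective
would need a differentiable surrogate rather than scipy's spearmanr."""
from scipy.stats import spearmanr

# Scores assigned to the same five contrast candidates by two different task heads.
vqa_scores     = [0.91, 0.40, 0.75, 0.10, 0.55]
caption_scores = [0.88, 0.35, 0.20, 0.60, 0.50]

rho, _ = spearmanr(vqa_scores, caption_scores)
consistency_penalty = 1.0 - rho    # auxiliary term encouraging agreement across tasks
print(f"Spearman rho = {rho:.2f}, auxiliary penalty = {consistency_penalty:.2f}")
```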
arXiv Detail & Related papers (2023-03-28T16:57:12Z)
- ART: Automatic multi-step reasoning and tool-use for large language models [105.57550426609396]
Large language models (LLMs) can perform complex reasoning in few- and zero-shot settings.
Each reasoning step can rely on external tools to support computation beyond the core LLM capabilities.
We introduce Automatic Reasoning and Tool-use (ART), a framework that uses frozen LLMs to automatically generate intermediate reasoning steps as a program.
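To make the "reasoning steps as a program" idea concrete, here is a tiny interpreter for a made-up step grammar in which some steps call external tools; the program text, the grammar, and both tools are invented for illustration and are not ART's actual format.

```python
"""Minimal interpreter for an ART-style step program. A frozen LLM would emit the
program text; the step grammar and both tools here are invented for illustration."""

def tool_search(query: str) -> str:
    return {"capital of France": "Paris"}.get(query, "unknown")   # canned lookup

def tool_calc(expr: str) -> str:
    return str(eval(expr, {"__builtins__": {}}))                  # toy arithmetic tool

TOOLS = {"search": tool_search, "calc": tool_calc}

# A program as a model might emit it: reasoning lines interleaved with tool-call lines.
PROGRAM = """\
think: I need the capital first, then a small calculation.
search: capital of France
calc: 40 + 2
answer: {search} has about {calc} reasons to visit (toy example).
"""

def run(program: str) -> str:
    results = {}
    for line in program.splitlines():
        op, arg = line.split(":", 1)
        op, arg = op.strip(), arg.strip()
        if op in TOOLS:
            results[op] = TOOLS[op](arg)     # execute a tool-using step
        elif op == "answer":
            return arg.format(**results)     # substitute earlier tool outputs
    return "no answer produced"

if __name__ == "__main__":
    print(run(PROGRAM))
```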
arXiv Detail & Related papers (2023-03-16T01:04:45Z)