Automatically Generating Web Applications from Requirements Via Multi-Agent Test-Driven Development
- URL: http://arxiv.org/abs/2509.25297v2
- Date: Wed, 01 Oct 2025 17:32:51 GMT
- Title: Automatically Generating Web Applications from Requirements Via Multi-Agent Test-Driven Development
- Authors: Yuxuan Wan, Tingshuo Liang, Jiakai Xu, Jingyu Xiao, Yintong Huo, Michael R. Lyu
- Abstract summary: We introduce TDDev, the first test-driven development framework for end-to-end full-stack web application generation. Given a natural language description or design image, TDDev automatically derives executable test cases, generates front-end and back-end code, and simulates user interactions. Our framework addresses key challenges in full-stack automation, including underspecified user requirements, complex interdependencies among multiple files, and the need for both functional correctness and visual fidelity.
- Score: 34.560333810255464
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Developing full-stack web applications is complex and time-intensive, demanding proficiency across diverse technologies and frameworks. Although recent advances in multimodal large language models (MLLMs) enable automated webpage generation from visual inputs, current solutions remain limited to front-end tasks and fail to deliver fully functional applications. In this work, we introduce TDDev, the first test-driven development (TDD)-enabled LLM-agent framework for end-to-end full-stack web application generation. Given a natural language description or design image, TDDev automatically derives executable test cases, generates front-end and back-end code, simulates user interactions, and iteratively refines the implementation until all requirements are satisfied. Our framework addresses key challenges in full-stack automation, including underspecified user requirements, complex interdependencies among multiple files, and the need for both functional correctness and visual fidelity. Through extensive experiments on diverse application scenarios, TDDev achieves a 14.4% improvement on overall accuracy compared to state-of-the-art baselines, demonstrating its effectiveness in producing reliable, high-quality web applications without requiring manual intervention.
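The abstract describes an iterative loop: derive executable tests from the requirement, generate code, run the tests, and refine until everything passes. A minimal sketch of that control flow, with hypothetical stand-in functions (`derive_tests`, `generate_code` are illustrative placeholders, not TDDev's actual API):

```python
# Hypothetical sketch of the test-driven refinement loop described in the
# abstract. All function names and behaviors here are illustrative
# assumptions, not code from the TDDev paper.

def derive_tests(requirement: str):
    # Stand-in for an LLM call that turns a requirement into executable checks.
    return [lambda app: requirement.lower() in app.lower()]

def generate_code(requirement: str, feedback: str = "") -> str:
    # Stand-in for front-end/back-end code generation, optionally conditioned
    # on failure feedback from the previous iteration.
    return f"<html><!-- feedback: {feedback} -->{requirement}</html>"

def refine_until_passing(requirement: str, max_iters: int = 5) -> str:
    """Regenerate the app until all derived tests pass or the budget runs out."""
    tests = derive_tests(requirement)
    feedback = ""
    app = ""
    for _ in range(max_iters):
        app = generate_code(requirement, feedback)
        failures = [t for t in tests if not t(app)]
        if not failures:
            return app  # all requirements satisfied
        feedback = f"{len(failures)} test(s) failed"
    return app  # best effort after max_iters

page = refine_until_passing("Todo List")
print("Todo List" in page)
```

The key design point the abstract emphasizes is that failure feedback flows back into the next generation step, so each iteration is conditioned on what the previous attempt got wrong.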
Related papers
- FronTalk: Benchmarking Front-End Development as Conversational Code Generation with Multi-Modal Feedback [92.67587639164908]
We present FronTalk, a benchmark for front-end code generation with multi-modal feedback. We focus on the front-end development task and curate FronTalk, a collection of 100 multi-turn dialogues. Evaluation of 20 models reveals two key challenges that remain systematically under-explored in the literature.
arXiv Detail & Related papers (2025-12-05T23:28:09Z) - AppForge: From Assistant to Independent Developer - Are GPTs Ready for Software Development? [28.63033734662797]
APPFORGE is a benchmark consisting of 101 software development problems drawn from real-world Android apps. We design a multi-agent system to automatically summarize the main functionalities from app documents and navigate the app to synthesize test cases. Following rigorous manual verification by Android development experts, APPFORGE incorporates the test cases within an automated evaluation framework.
arXiv Detail & Related papers (2025-10-09T03:26:05Z) - IWR-Bench: Can LVLMs reconstruct interactive webpage from a user interaction video? [56.33950760097989]
IWR-Bench is a novel benchmark for evaluating the capabilities of Large Vision-Language Models (LVLMs) in interactive webpage reconstruction from video. IWR-Bench comprises 113 meticulously curated tasks from 100 real-world websites, with 1,001 actions. This benchmark evaluates models on two fundamental challenges: comprehensive multi-modal reasoning to infer interaction logic from video and assets, and advanced code generation to translate this logic into functional code.
arXiv Detail & Related papers (2025-09-29T12:38:06Z) - WebUIBench: A Comprehensive Benchmark for Evaluating Multimodal Large Language Models in WebUI-to-Code [57.45181837786448]
Multimodal Large Language Models (MLLMs) have the potential to act as AI software engineers capable of executing complex web application development. Existing benchmarks usually fail to provide an assessment of sub-capabilities and focus solely on webpage generation outcomes. We propose WebUIBench, a benchmark systematically designed to evaluate MLLMs in four key areas: WebUI Perception, HTML Programming, WebUI-HTML Understanding, and WebUI-to-Code.
arXiv Detail & Related papers (2025-06-09T14:46:02Z) - Automated Web Application Testing: End-to-End Test Case Generation with Large Language Models and Screen Transition Graphs [0.5965410190046627]
This paper presents an automated system for generating test cases for two key aspects of web application testing: site navigation and form filling. For site navigation, the system employs screen transition graphs and LLMs to model navigation flows and generate test scenarios. For form filling, it uses state graphs to handle conditional forms and automates Selenium script generation.
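The navigation-flow idea above can be sketched as path enumeration over a screen transition graph: each simple path from the entry screen becomes a candidate test scenario. The graph contents and function below are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch: enumerate navigation test scenarios from a screen
# transition graph via breadth-first path expansion. The example graph is a
# hypothetical app, not data from the paper.
from collections import deque

# Hypothetical screen transition graph: screen -> directly reachable screens.
graph = {
    "login": ["home"],
    "home": ["profile", "settings"],
    "profile": [],
    "settings": [],
}

def navigation_scenarios(graph, start, max_depth=3):
    """Enumerate cycle-free navigation paths up to max_depth transitions."""
    paths = []
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if len(path) > 1:
            paths.append(path)  # every multi-screen path is a test scenario
        if len(path) - 1 < max_depth:
            for nxt in graph.get(path[-1], []):
                if nxt not in path:  # skip cycles
                    queue.append(path + [nxt])
    return paths

for p in navigation_scenarios(graph, "login"):
    print(" -> ".join(p))
```

In the paper's setting, an LLM would then flesh out each abstract path into concrete actions (clicks, inputs); the graph traversal only supplies the scenario skeletons.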
arXiv Detail & Related papers (2025-06-03T07:08:21Z) - FormFactory: An Interactive Benchmarking Suite for Multimodal Form-Filling Agents [36.11725924594441]
Current online form filling tools are largely rule-based and lack generalizable, generative capabilities. We propose FormFactory, an interactive benchmarking suite comprising a web-based interface, backend evaluation module, and dataset. Our benchmark covers diverse real-world scenarios, incorporates various field formats, and simulates high-fidelity form interactions.
arXiv Detail & Related papers (2025-06-02T10:34:57Z) - AppAgent v2: Advanced Agent for Flexible Mobile Interactions [57.98933460388985]
This work introduces a novel LLM-based multimodal agent framework for mobile devices. Our agent constructs a flexible action space that enhances adaptability across various applications. Our results demonstrate the framework's superior performance, confirming its effectiveness in real-world scenarios.
arXiv Detail & Related papers (2024-08-05T06:31:39Z) - On the Multi-turn Instruction Following for Conversational Web Agents [83.51251174629084]
We introduce a new task of Conversational Web Navigation, which necessitates sophisticated interactions that span multiple turns with both the users and the environment.
We propose a novel framework, named self-reflective memory-augmented planning (Self-MAP), which employs memory utilization and self-reflection techniques.
arXiv Detail & Related papers (2024-02-23T02:18:12Z) - LLM for Test Script Generation and Migration: Challenges, Capabilities, and Opportunities [8.504639288314063]
Test script generation is a vital component of software testing, enabling efficient and reliable automation of repetitive test tasks.
Existing generation approaches often encounter limitations, such as difficulties in accurately capturing and reproducing test scripts across diverse devices, platforms, and applications.
This paper investigates the application of large language models (LLM) in the domain of mobile application test script generation.
arXiv Detail & Related papers (2023-09-24T07:58:57Z) - ChatDev: Communicative Agents for Software Development [84.90400377131962]
ChatDev is a chat-powered software development framework in which specialized agents are guided in what to communicate.
These agents actively contribute to the design, coding, and testing phases through unified language-based communication.
arXiv Detail & Related papers (2023-07-16T02:11:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.