DIDUP: Dynamic Iterative Development for UI Prototyping
- URL: http://arxiv.org/abs/2407.08474v1
- Date: Thu, 11 Jul 2024 13:10:11 GMT
- Title: DIDUP: Dynamic Iterative Development for UI Prototyping
- Authors: Jenny Ma, Karthik Sreedhar, Vivian Liu, Sitong Wang, Pedro Alejandro Perez, Lydia B. Chilton
- Abstract summary: We conduct a formative study of GPT Pilot, a leading LLM-generated code-prototyping system.
We find that its inflexibility towards change once development has started leads to weaknesses in failure prevention and dynamic planning.
We introduce DIDUP, a system for code-based UI prototyping that follows an iterative spiral model.
- Score: 19.486064197590995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) are remarkably good at writing code. A particularly valuable case of human-LLM collaboration is code-based UI prototyping, a method for creating interactive prototypes that allows users to view and fully engage with a user interface. We conduct a formative study of GPT Pilot, a leading LLM-generated code-prototyping system, and find that its inflexibility towards change once development has started leads to weaknesses in failure prevention and dynamic planning; it closely resembles the linear workflow of the waterfall model. We introduce DIDUP, a system for code-based UI prototyping that follows an iterative spiral model, which takes changes and iterations that come up during the development process into account. We propose three novel mechanisms for LLM-generated code-prototyping systems: (1) adaptive planning, where plans should be dynamic and reflect changes during implementation, (2) code injection, where the system should write a minimal amount of code and inject it instead of rewriting code so users have a better mental model of the code evolution, and (3) lightweight state management, a simplified version of source control so users can quickly revert to different working states. Together, this enables users to rapidly develop and iterate on prototypes.
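The abstract's third mechanism, lightweight state management, is essentially a simplified checkpoint-and-revert store over the prototype's files. The sketch below is an illustration only, not DIDUP's actual implementation; the names Snapshot, LightweightStateManager, save, and revert are assumptions introduced for clarity.
```python
# Minimal sketch of "lightweight state management": snapshot whole file contents
# after each working iteration so the user can revert to any earlier working state.
# All names here are illustrative assumptions, not taken from DIDUP itself.
import copy
from dataclasses import dataclass, field


@dataclass
class Snapshot:
    label: str             # short description, e.g. "added login form"
    files: dict[str, str]  # file path -> file contents at snapshot time


@dataclass
class LightweightStateManager:
    snapshots: list[Snapshot] = field(default_factory=list)

    def save(self, label: str, files: dict[str, str]) -> int:
        """Record a working state and return its index for later revert."""
        self.snapshots.append(Snapshot(label, copy.deepcopy(files)))
        return len(self.snapshots) - 1

    def revert(self, index: int) -> dict[str, str]:
        """Return a copy of the files as they were at the given snapshot."""
        return copy.deepcopy(self.snapshots[index].files)


# Usage: snapshot after each successful iteration; revert when a change breaks things.
store = LightweightStateManager()
v0 = store.save("initial scaffold", {"app.py": "print('hello')"})
store.save("added greeting", {"app.py": "print('hello, world')"})
files = store.revert(v0)  # back to the last known-good state
```
Unlike full source control, a store like this has no branching or merging; the point, per the abstract, is only quick reversion to earlier working states so iteration stays cheap.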
Related papers
- ScreenCoder: Advancing Visual-to-Code Generation for Front-End Automation via Modular Multimodal Agents [35.10813247827737]
We introduce a modular multi-agent framework that performs user interface-to-code generation in three interpretable stages.
The framework improves robustness, interpretability, and fidelity over end-to-end black-box methods.
Our approach achieves state-of-the-art performance in layout accuracy, structural coherence, and code correctness.
arXiv Detail & Related papers (2025-07-30T16:41:21Z) - Type-Constrained Code Generation with Language Models [51.03439021895432]
We introduce a type-constrained decoding approach that leverages type systems to guide code generation.
For this purpose, we develop novel prefix automata and a search over inhabitable types, forming a sound approach to enforce well-typedness on LLM-generated code.
Our approach reduces compilation errors by more than half and significantly increases functional correctness in code synthesis, translation, and repair tasks.
arXiv Detail & Related papers (2025-04-12T15:03:00Z) - GUIDE: LLM-Driven GUI Generation Decomposition for Automated Prototyping [55.762798168494726]
Large Language Models (LLMs) with their impressive code generation capabilities offer a promising approach for automating GUI prototyping.
But there is a gap between current LLM-based prototyping solutions and traditional user-based GUI prototyping approaches.
We propose GUIDE, a novel LLM-driven GUI generation decomposition approach seamlessly integrated into the popular prototyping framework Figma.
arXiv Detail & Related papers (2025-02-28T14:03:53Z) - LLM as a code generator in Agile Model Driven Development [1.12646803578849]
This research champions Model Driven Development (MDD) as a viable strategy to overcome these challenges.
We propose an Agile Model Driven Development (AMDD) approach that employs GPT-4 as a code generator.
Applying GPT-4's auto-generation capabilities yields Java and Python code that is compatible with the JADE and PADE frameworks.
arXiv Detail & Related papers (2024-10-24T07:24:11Z) - A Pair Programming Framework for Code Generation via Multi-Plan Exploration and Feedback-Driven Refinement [24.25119206488625]
PairCoder is a novel framework for large language models (LLMs) to generate code.
It incorporates two collaborative agents, namely a Navigator agent for high-level planning and a Driver agent for specific implementation.
The Driver follows the guidance of Navigator to undertake initial code generation, code testing, and refinement.
arXiv Detail & Related papers (2024-09-08T07:22:19Z) - Improved Diversity-Promoting Collaborative Metric Learning for Recommendation [127.08043409083687]
Collaborative Metric Learning (CML) has recently emerged as a popular method in recommendation systems.
This paper focuses on a challenging scenario where a user has multiple categories of interests.
We propose a novel method called Diversity-Promoting Collaborative Metric Learning (DPCML).
arXiv Detail & Related papers (2024-09-02T07:44:48Z) - Prototype2Code: End-to-end Front-end Code Generation from UI Design Prototypes [13.005027924553012]
We introduce Prototype2Code, which achieves end-to-end front-end code generation with business demands.
For Prototype2Code, we incorporate design linting into the workflow to detect fragmented elements and perceptual groups.
By optimizing the hierarchical structure and intelligently recognizing UI element types, Prototype2Code generates code that is more readable and structurally clearer.
arXiv Detail & Related papers (2024-05-08T11:32:50Z) - SOEN-101: Code Generation by Emulating Software Process Models Using Large Language Model Agents [50.82665351100067]
FlowGen is a code generation framework that emulates software process models based on multiple Large Language Model (LLM) agents.
We evaluate FlowGenScrum on four benchmarks: HumanEval, HumanEval-ET, MBPP, and MBPP-ET.
arXiv Detail & Related papers (2024-03-23T14:04:48Z) - Design2Code: Benchmarking Multimodal Code Generation for Automated Front-End Engineering [74.99736967448423]
We construct Design2Code - the first real-world benchmark for this task.
We manually curate 484 diverse real-world webpages as test cases and develop a set of automatic evaluation metrics.
Our fine-grained break-down metrics indicate that models mostly lag in recalling visual elements from the input webpages and generating correct layout designs.
arXiv Detail & Related papers (2024-03-05T17:56:27Z) - CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules [51.82044734879657]
We propose CodeChain, a novel framework for inference that elicits modularized code generation through a chain of self-revisions.
We find that CodeChain can significantly boost both modularity as well as correctness of the generated solutions, achieving relative pass@1 improvements of 35% on APPS and 76% on CodeContests.
arXiv Detail & Related papers (2023-10-13T10:17:48Z) - Coding by Design: GPT-4 empowers Agile Model Driven Development [0.03683202928838613]
This research offers an Agile Model-Driven Development (MDD) approach that enhances code auto-generation using OpenAI's GPT-4.
Our work emphasizes "Agility" as a significant contribution to the current MDD method, particularly when the model undergoes changes or needs deployment in a different programming language.
Ultimately, leveraging GPT-4, our last layer auto-generates code in both Java and Python.
arXiv Detail & Related papers (2023-10-06T15:05:05Z) - SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks [81.9962823875981]
We introduce SwiftSage, a novel agent framework inspired by the dual-process theory of human cognition.
The framework comprises two primary modules: the Swift module, representing fast and intuitive thinking, and the Sage module, emulating deliberate thought processes.
In 30 tasks from the ScienceWorld benchmark, SwiftSage significantly outperforms other methods such as SayCan, ReAct, and Reflexion.
arXiv Detail & Related papers (2023-05-27T07:04:15Z) - Low-code LLM: Graphical User Interface over Large Language Models [115.08718239772107]
This paper introduces a novel human-LLM interaction framework, Low-code LLM.
It incorporates six types of simple low-code visual programming interactions to achieve more controllable and stable responses.
We highlight three advantages of the low-code LLM: user-friendly interaction, controllable generation, and wide applicability.
arXiv Detail & Related papers (2023-04-17T09:27:40Z) - Multimodal Prototype-Enhanced Network for Few-Shot Action Recognition [40.329190454146996]
MultimOdal PRototype-ENhanced Network (MORN) uses semantic information of label texts as multimodal information to enhance prototypes.
We conduct extensive experiments on four popular few-shot action recognition datasets.
arXiv Detail & Related papers (2022-12-09T14:24:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.