Self-Elicitation of Requirements with Automated GUI Prototyping
- URL: http://arxiv.org/abs/2409.16388v1
- Date: Tue, 24 Sep 2024 18:40:38 GMT
- Title: Self-Elicitation of Requirements with Automated GUI Prototyping
- Authors: Kristian Kolthoff, Christian Bartelt, Simone Paolo Ponzetto, Kurt Schneider
- Abstract summary: SERGUI is a novel approach enabling the Self-Elicitation of Requirements based on an automated GUI prototyping assistant.
SERGUI exploits the vast prototyping knowledge embodied in a large-scale GUI repository through Natural Language Requirements (NLR)-based GUI retrieval.
To measure the effectiveness of our approach, we conducted a preliminary evaluation.
- Score: 12.281152349482024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Requirements Elicitation (RE) is a crucial activity especially in the early stages of software development. GUI prototyping has widely been adopted as one of the most effective RE techniques for user-facing software systems. However, GUI prototyping requires (i) the availability of experienced requirements analysts, (ii) typically necessitates conducting multiple joint sessions with customers and (iii) creates considerable manual effort. In this work, we propose SERGUI, a novel approach enabling the Self-Elicitation of Requirements (SER) based on an automated GUI prototyping assistant. SERGUI exploits the vast prototyping knowledge embodied in a large-scale GUI repository through Natural Language Requirements (NLR) based GUI retrieval and facilitates fast feedback through GUI prototypes. The GUI retrieval approach is closely integrated with a Large Language Model (LLM) driving the prompting-based recommendation of GUI features for the current GUI prototyping context and thus stimulating the elicitation of additional requirements. We envision SERGUI to be employed in the initial RE phase, creating an initial GUI prototype specification to be used by the analyst as a means for communicating the requirements. To measure the effectiveness of our approach, we conducted a preliminary evaluation. Video presentation of SERGUI at: https://youtu.be/pzAAB9Uht80
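The abstract names two technical ingredients: NLR-based retrieval of GUIs from a large-scale repository, and LLM prompting that recommends further GUI features for the current prototyping context. The abstract does not specify the retrieval model, so the following is a minimal sketch of the retrieval step only, assuming a generic sentence-embedding ranker; the encoder choice, the toy corpus, and the helper name `retrieve_guis` are illustrative placeholders, not artifacts of SERGUI.

```python
# Hedged sketch: NLR-based GUI retrieval over a GUI repository, in the spirit of
# SERGUI's retrieval step. The ranking model and corpus below are illustrative
# stand-ins, not the paper's implementation.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder encoder

# Toy stand-in for a large-scale GUI repository: each entry pairs a GUI id
# with a textual representation (e.g., screen title plus component labels).
gui_corpus = [
    {"id": "gui_001", "text": "login screen with email field, password field, sign-in button"},
    {"id": "gui_002", "text": "product list with search bar, filter chips, add-to-cart buttons"},
    {"id": "gui_003", "text": "checkout form with address fields, payment options, confirm order"},
]

corpus_embeddings = model.encode([g["text"] for g in gui_corpus], convert_to_tensor=True)

def retrieve_guis(nl_requirement: str, top_k: int = 2):
    """Rank repository GUIs by semantic similarity to a natural-language requirement."""
    query_embedding = model.encode(nl_requirement, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
    ranked = sorted(zip(gui_corpus, scores.tolist()), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]

# Example: a requirement phrased in natural language retrieves candidate GUI prototypes.
for gui, score in retrieve_guis("Users should be able to log in with their e-mail address"):
    print(f"{gui['id']}: {score:.3f}")
```

In SERGUI, retrieved prototypes of this kind would then feed the LLM-driven feature recommendation and the analyst-facing prototype specification; the encoder, corpus representation, and ranking choices here are stand-ins only.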
Related papers
- GUI Agents: A Survey [129.94551809688377]
Graphical User Interface (GUI) agents, powered by Large Foundation Models, have emerged as a transformative approach to automating human-computer interaction.
Motivated by the growing interest and fundamental importance of GUI agents, we provide a comprehensive survey that categorizes their benchmarks, evaluation metrics, architectures, and training methods.
arXiv Detail & Related papers (2024-12-18T04:48:28Z)
- Zero-Shot Prompting Approaches for LLM-based Graphical User Interface Generation [53.1000575179389]
We propose a Retrieval-Augmented GUI Generation (RAGG) approach, integrated with an LLM-based GUI retrieval re-ranking and filtering mechanism.
In addition, we adapt Prompt Decomposition (PDGG) and Self-Critique (SCGG) for GUI generation (a minimal sketch of such a self-critique loop appears after this list).
Our evaluation, which encompasses over 3,000 GUI annotations from over 100 crowd-workers with UI/UX experience, shows that SCGG, in contrast to PDGG and RAGG, can lead to more effective GUI generation.
arXiv Detail & Related papers (2024-12-15T22:17:30Z)
- Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction [69.57190742976091]
We introduce Aguvis, a unified vision-based framework for autonomous GUI agents.
Our approach leverages image-based observations and grounds natural-language instructions to visual elements.
To address the limitations of previous work, we integrate explicit planning and reasoning within the model.
arXiv Detail & Related papers (2024-12-05T18:58:26Z)
- Large Language Model-Brained GUI Agents: A Survey [42.82362907348966]
Multimodal models have ushered in a new era of GUI automation.
They have demonstrated exceptional capabilities in natural language understanding, code generation, and visual processing.
These agents represent a paradigm shift, enabling users to perform intricate, multi-step tasks through simple conversational commands.
arXiv Detail & Related papers (2024-11-27T12:13:39Z)
- GUICourse: From General Vision Language Models to Versatile GUI Agents [75.5150601913659]
We contribute GUICourse, a suite of datasets to train visual-based GUI agents.
First, we introduce the GUIEnv dataset to strengthen the OCR and grounding capabilities of VLMs.
Then, we introduce the GUIAct and GUIChat datasets to enrich their knowledge of GUI components and interactions.
arXiv Detail & Related papers (2024-06-17T08:30:55Z)
- GUI-WORLD: A Dataset for GUI-oriented Multimodal LLM-based Agents [73.9254861755974]
This paper introduces a new dataset, called GUI-World, which features meticulously crafted Human-MLLM annotations.
We evaluate the capabilities of current state-of-the-art MLLMs, including ImageLLMs and VideoLLMs, in understanding various types of GUI content.
arXiv Detail & Related papers (2024-06-16T06:56:53Z)
- Interlinking User Stories and GUI Prototyping: A Semi-Automatic LLM-based Approach [55.762798168494726]
We present a novel Large Language Model (LLM)-based approach for validating the implementation of functional NL-based requirements in a graphical user interface (GUI) prototype.
Our approach aims to detect functional user stories that are not implemented in a GUI prototype and provides recommendations for suitable GUI components directly implementing the requirements.
arXiv Detail & Related papers (2024-06-12T11:59:26Z)
- Boosting GUI Prototyping with Diffusion Models [0.440401067183266]
Deep learning models such as Stable Diffusion have emerged as powerful text-to-image tools.
We propose UI-Diffuser, an approach that leverages Stable Diffusion to generate mobile UIs.
Preliminary results show that UI-Diffuser provides an efficient and cost-effective way to generate mobile GUI designs.
arXiv Detail & Related papers (2023-06-09T20:08:46Z)
- Applied Awareness: Test-Driven GUI Development using Computer Vision and Cryptography [0.0]
Test-driven GUI development is impractical: it generally requires an initial implementation of the GUI to generate golden images or to construct interactive test scenarios.
We demonstrate a novel and immediately applicable approach of interpreting GUI presentation in terms of backend communications.
This focus on backend communication circumvents deficiencies in typical testing methodologies that rely on platform-dependent UI affordances or accessibility features.
arXiv Detail & Related papers (2020-06-05T22:46:48Z)
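As flagged in the Zero-Shot Prompting entry above, a generate-critique-revise loop is one way to realize Self-Critique (SCGG)-style GUI generation. The sketch below assumes a generic chat-completion client; the model name, prompts, and helper functions are placeholders for illustration, not the cited paper's implementation.

```python
# Hedged sketch: a generate-critique-revise loop for GUI generation in the spirit
# of Self-Critique prompting (SCGG). Prompts, model, and output format are
# assumptions made for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; the model name below is a placeholder
MODEL = "gpt-4o-mini"

def complete(prompt: str) -> str:
    """Single chat-completion call used by every stage of the loop."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def self_critique_gui(requirement: str, rounds: int = 2) -> str:
    """Draft a GUI (as HTML/CSS text), critique it, and revise it for a few rounds."""
    gui = complete(f"Generate an HTML/CSS GUI prototype for this requirement:\n{requirement}")
    for _ in range(rounds):
        critique = complete(
            "Critique the following GUI prototype against the requirement and "
            f"list concrete shortcomings.\nRequirement: {requirement}\nGUI:\n{gui}"
        )
        gui = complete(
            "Revise the GUI prototype to address the critique. Return only the revised GUI.\n"
            f"Requirement: {requirement}\nCritique:\n{critique}\nGUI:\n{gui}"
        )
    return gui

if __name__ == "__main__":
    print(self_critique_gui("A screen where users track the delivery status of an order"))
```

The loop is deliberately simple: each stage is a single prompt, so the draft, critique, and revision steps can be inspected or swapped independently.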